Thursday, April 16, 2015

Artful Notes


It’s difficult to be a successful developer unless you keep prodigious notes. Notekeeping is an art all its own: it’s truly counterproductive to arrive at the point of code development with a huge pile of disorganized Post-Its describing your use-cases.

While you are in the midst of designing software, useful information pours in from many scenarios and in a wide variety of formats. You may hold a meeting where everyone is talking. You may receive a PDF specification from a vendor. You may pick up the phone and chat with a colleague to answer some questions. You may shoot off an email that causes a cascade of responses. You may read something relevant on the internet at home. How can you possibly keep track of all this information in a sensible fashion?

Many of my developer friends like to keep spiral notebooks, but I’ve never been able to easily find what I’ve written previously. I prefer a mixed approach. I use a PIM (personal information manager) to gather and automatically categorize short sentence snippets of concise information. I keep folders organized on the network for documents. I also keep hardcopy in manila folders for documents that I think will have lasting importance. I keep a folder in Outlook for each gigantic-scale project. And I use Google Desktop to find my way amidst all of the detritus.

In meetings I take a single loose-leaf sheet of paper and afterward transfer my notes to my PIM or to a more formal document to mail or file on the network. When I'm not at work I also *always* carry a Kindle for any spur-of-the-moment revelations.

Notekeeping is complicated: experiment with different methods and software to find something that works efficiently and correctly for your own style.

Wednesday, March 11, 2015

Artful Modality


In the many years that I've done software development I've noticed that systems written in-house tend toward two extremes. On one side you have useful software designed for a small quantity of people to do a very specific task, and on the other side you have somewhat clumsy software designed for two thirds of the company to support a wide variety of operations.

The smaller, task-specific software comes in tens or hundreds of flavors, yet each survives only three or four years before it gets replaced by a new incarnation. The multipurpose clunky software has just one flavor but seems to live for fifteen years, usually well beyond its prime. How come there is no middle ground?

Well, just as in animation, sociology creates an "uncanny valley": a middle range of in-house software that doesn't comfortably exist. This is due very much to the nature of people and the work that they do. In most jobs people tend to leverage certain knowledge and skills to support the company with specific tasks. A capable software designer can create very nifty systems that maintain incredible complexity, with the understanding that a moderately competent user will be trained on their use.

Such systems can be local successes even though they don't translate well to other users. And they don't survive for long because they don't incorporate the dynamics of system-level and human-level change. People change jobs and don't pass along all of their knowledge. Small systems fall into disuse because they are people- and knowledge-specific. They are still good for what they are good at: improving local productivity.

Large systems live past their prime for similar sociological reasons. The rate of change in people's skills overwhelms the complexity of the linked processes. Linked processes become unlinked, and the training materials don't keep up with changes in the software.

Large systems still serve a useful purpose, however, to the extent that they organize people to work together. Where does the personality of a corporation really exist? In between the modes of small and large software.


Wednesday, February 11, 2015

Artistic Inheritance


In an earlier post I discussed overloading a method to handle changing it from a computation based on one measurement to one based on two. Another way to handle this is with inheritance: you create a new class, say "modernCar," that inherits from myCar and then implements the overload in its method. In this way developers using the old routine call myCar.getCurrentMiles, and developers using the new routine call modernCar.getCurrentMiles.

Using this approach makes more sense than a simple overload when the new way of invoking the method clearly applies to only a selected subset of the base class.
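The inheritance approach might look like the following sketch, assuming Java. I've capitalized the post's myCar/modernCar into class names, and the actual computation (extrapolating mileage from two dated odometer readings) is my own invention, since the post doesn't specify the formula:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Base class keeps the original single-measurement routine untouched.
class MyCar {
    public double getCurrentMiles(double odometerMiles) {
        return odometerMiles; // original behavior: trust the one reading
    }
}

// Subclass adds the two-measurement overload without disturbing old callers.
class ModernCar extends MyCar {
    public double getCurrentMiles(double miles1, LocalDate date1,
                                  double miles2, LocalDate date2) {
        long daysBetween = ChronoUnit.DAYS.between(date1, date2);
        double milesPerDay = (miles2 - miles1) / daysBetween;
        long daysSince = ChronoUnit.DAYS.between(date2, LocalDate.now());
        // Extrapolate from the most recent reading to today.
        return miles2 + milesPerDay * daysSince;
    }
}
```

Existing code holding a MyCar keeps calling the one-argument form unchanged; only code that opts into ModernCar ever sees the new signature.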

Yet another approach to handling change is to "propertize" the method call. Basically this means changing all of the method's arguments into class properties instead. To be clear and consistent in this approach, the method should place its result in a public property as well, so the sequence for using the new method becomes: set miles1, date1, miles2, and date2; invoke the method; and get currentMiles.

This approach has the distinct disadvantage that it is a hard break with existing code. It also requires clearer technical documentation explaining its use. On the other hand, it has two distinct advantages: future properties (both set and get) can be added later without impacting the code base, and it encourages future developers to actually review the target code before invoking the method.
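The propertized version might look like this sketch, again assuming Java. The property names come from the post; the class name PropertizedCar and the extrapolation formula are hypothetical stand-ins:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// All former arguments become public properties, and so does the result.
class PropertizedCar {
    public double miles1;
    public LocalDate date1;
    public double miles2;
    public LocalDate date2;
    public double currentMiles; // the result lands here

    // No arguments: the method reads its inputs from the properties above.
    public void getCurrentMiles() {
        long daysBetween = ChronoUnit.DAYS.between(date1, date2);
        double milesPerDay = (miles2 - miles1) / daysBetween;
        long daysSince = ChronoUnit.DAYS.between(date2, LocalDate.now());
        currentMiles = miles2 + milesPerDay * daysSince;
    }
}
```

The calling sequence is exactly the set/invoke/get dance described above, which is what forces a future developer to look at the class before using it.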

So far, then, we've seen three approaches to managing change: overload, inheritance, and "propertizing". In a later post I'll discuss a fourth option: deprecation.