Saturday, December 22, 2012

Artistic Language

It happens pretty rarely in one’s career, but perhaps once or twice you will find yourself in the unenviable position of selecting a development language for yourself and your cohorts. Nowadays most shops use C# or Java, but back in the day when our choices were Cobol, Basic, or Fortran, I had the pleasure of making this dubious analysis. And no doubt as hardware and languages progress we’ll reach another tipping point where you may find yourself selecting among a panoply of competing next-generation languages, so I share this in the spirit of support for when you need to cross that bridge.

Choosing a new language to use is as distressing as having to select a new city to relocate to. You can research the weather, travel blogs, crime statistics, and a hundred other metrics, but you only fully appreciate the flavor of the place after you’ve lived there a couple of years.

Languages bring to the table a variety of capabilities. Part of the challenge in making a selection is that you need to be fairly conversant in the languages you are reviewing so that you can both define the nature of the capabilities and then rate the features of the languages across those dimensions. Metrics I have used in the past include:

+ complexity of math library
+ readability of code
+ flexible variable typing
+ multithreading support
+ security framework
+ development environment tools
+ trace and debugging support
+ compilation speed and complexity
+ runtime distributable
+ execution speed
+ step-through execution
+ data entry validation masks
+ extensive sort support
+ integrated dictionaries
+ versioning
+ vendor commitment to backward compatibility
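To make those ratings concrete, a simple weighted scoring matrix works well: rate each candidate language on each metric, weight the metrics by how much they matter to your shop, and sum. A minimal sketch in Python (the metrics chosen, the weights, and the ratings are all invented for illustration):

```python
# Weights reflect how much each metric matters to your shop (illustrative only).
weights = {"math library": 3, "readability": 5, "multithreading": 4}

# Each candidate rated 1-10 per metric (hypothetical ratings).
ratings = {
    "Language A": {"math library": 8, "readability": 6, "multithreading": 9},
    "Language B": {"math library": 5, "readability": 9, "multithreading": 6},
}

def weighted_score(scores, weights):
    """Sum of (rating * weight) across all metrics."""
    return sum(scores[metric] * w for metric, w in weights.items())

for name, scores in ratings.items():
    print(name, weighted_score(scores, weights))
```

The hard part, of course, is not the arithmetic but being conversant enough in each language to assign honest ratings in the first place.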

Yeah, it's a lot to ask of a language vendor, but once you select one you will be making a major commitment of effort in your life while using this tool. Perhaps it is precisely due to the difficulty of this decision that developers become pundits. If you wish to be continually marketable though, it's smarter to stay flexible and multilingual. A software language is a sophisticated tool: respect its art but be cosmopolitan.

Saturday, November 24, 2012

Artistic Non

The non-functional requirements in software are rather like the things that we shop for when we go buy a car. We rather assume that it comes equipped with wheels, an engine, and brakes… these are functional requirements. Beyond that though we are looking for more: do we like the color, how it feels on the road, its gas mileage, its styling, whether to lease or to buy.

In substance the main non-functional requirements pretty much remain the same across large software projects:

- the software has to last ten years across a changing variety of display devices;
- it has to remain maintainable and enhanceable without it turning into spaghetti code;
- it should fail gracefully and recover fully;
- it needs a consistent interface that is intuitively easy to use.

Non-functional specs deal less with the “what” and more with the “how”. Hence the responsibility for defining their implementation tends to fall rather less upon the business analysts and more upon the IT management team.

- The philosophical battle over the non…

Just as folks can have finely honed sensibilities about their automobiles, a manager can argue the non-functional requirements from quite different perspectives depending on their personal philosophy.

Swaying the choice of styling is how the manager regards acts of creative destruction. If his experience suggests that leading edge regrowth ultimately compensates for the havoc caused by destroying legacy systems or jobs then he may bend toward less maintainability, static display devices, and higher levels of ease of use. He is focused more on the shorter term.

Some managers relish having a variety of platforms and tools yet others prefer a restrictive “vanilla” operation. Those tending toward variety may favor looser interface standards, but they may prefer a better corroborated sense of maintainability. They are focused more on organic growth.

Managers tend to sanction different turnover velocities, both with respect to employees and subsystems. It’s lease or buy in a different context. Those that favor a faster turnover may care less about maintainability but will tend toward common interface components. They are focused on punctuated evolution.

Finally some managers have a high aversion to risk. These will pay much greater attention to failure and recovery modes and request more thorough testing across the ever-growing panoply of display devices. They are focused on safety. Generally, then, the politics of “non” push along four largely independent dimensions: creative destruction, acceptance of variety, turnover speed, and risk aversion.

What kind of car does your manager drive?

Tuesday, October 9, 2012

Artful Stress

How much can you pump into that water balloon before it pops? Wouldn’t you like to know this before you get into the water balloon fight? Stress testing software is both an art and a science. Some of the problems you are trying to ferret out are subtle and intermittent and may depend upon environmental loads that are outside of your direct control. To run a fully representative stress test, therefore, you may need to wait until your full production environment is up and running.

At the same time though, if you delay this crucial piece of quality control until late in the development cycle then you risk being so locked in to an architecture that you are prevented from making effective changes.

To avoid such lock-in I like running simulated stress tests as early as possible. Once you have the physical database designs and user interface for updating the data, simulate the expected load and see where locks and waits occur. If you rely on outside web services then load them up with a few times your expected average use to see how they respond when under peak loading.
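Such a simulation can start very small: fire concurrent updates at the data layer and watch where the waits pile up. A rough sketch in Python, with a single lock standing in for database row locking and a sleep standing in for the real update (all names and numbers here are illustrative, not a prescribed harness):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

lock = threading.Lock()      # stands in for a row/page lock in the database
latencies = []               # wall-clock time each "update" waited + ran

def update_record(i):
    """Stub for the real data-access call; the lock serializes writers."""
    start = time.perf_counter()
    with lock:
        time.sleep(0.001)    # pretend the update itself takes 1 ms
    latencies.append(time.perf_counter() - start)

# Simulate a few times the expected average load: 20 concurrent users, 100 updates.
with ThreadPoolExecutor(max_workers=20) as pool:
    list(pool.map(update_record, range(100)))

print(f"max wait {max(latencies)*1000:.1f} ms, min {min(latencies)*1000:.1f} ms")
```

Even a toy run like this shows the queueing effect: the later requests wait far longer than the first ones, which is exactly the shape of problem you want to surface before production.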

Maybe consider extrapolating from some smaller balloons before you fill up your real soaker, eh? Complicate the process by running large concurrent batch jobs on the same server or run a virus scan, a backup, or a defragmentation utility.

Another effective way to avoid scalability issues is to restrict developers to an undersized server, limited in speed and memory. Be aware however that although this is a fair test of design efficiency it's not really a fair test of deadlock avoidance or failover cascades. Often these have to be directly simulated with deliberately bugged code to make sure a future developer doesn't inadvertently pull the whole system down.

Artfully stress testing your system early will make you sleep stress free once it is in production. And keep you from getting soaked in hot water later.

Wednesday, September 5, 2012

Artful Sociology

Earlier I mentioned the three branches of a full-dress analysis. Of these, the Sociological Analysis can often be the most challenging. Done right, a full-dress analysis is like a sublime pancake-and-sausage breakfast. The first course though, the Sociological analysis, is like that very first batch of pancakes you pour that ends up slightly brown in the middle and rubbery and white around the edges. Although eventually it becomes a throw-away (completely subservient to the Strategic and Object-oriented Analyses), you still have to perform it.

The Sociological analysis often presents dichotomies that are difficult to reconcile. The intent of such an analysis might be to direct some aspects of work toward (and away from) specific employees; hence the analysis can be highly politicized. This argues for having this phase of analysis performed by a consultant outside of your regular organization.

The flip side of this argument however is that it can take several years in a large organization to become familiar with the strengths, weaknesses, approaches, and long-term potential of each employee. This argues more for assigning this analytical task to an existing employee who demonstrates sensitivity to these matters. Regardless I would say to treat your sociological analysis as if it were a beautiful rose on a bush you know you will certainly prune way back in the springtime.

How does one go about performing a sociological analysis -- what should such an analysis include?

* A map of the work flow: what gets moved to where, by whom, and under what conditions.

* A review of how working hours mesh with constraints on commuting and telecommuting.

* Constraints imposed by contractual arrangements.

* The Management span of control and interest.

* Social cliques and off-hours affiliations.

* Accepted and customary levels of documentation.

* Spheres of influence building.

* Salary discrepancies.

* Communication methods up and down the line.

* Openness or resistance to criticism, and quality improvement effectiveness.

* Interaction of the company with the community.

* Discrepancies between ideas and actions.

* Flexibility, rigidity, and degree of being organized.

Yes, this seems like a mumbo-jumbo of collective traits that inform an exposition yet fall short of any development guidance; they don’t point to what the designers necessarily should do next. Still, since they can help set the tone for ongoing design work (and can raise alarms about potential potholes along the way), such an analysis is certainly beneficial and can be a major contributor to the lasting success of a large and complicated mission-critical system.

Tuesday, August 21, 2012

Artful Recovery

Error handling is not rocket science and yet most developers get this so far wrong as to be embarrassing. The real purpose of error handling is threefold: to limit the impact of unanticipated interruption on processing overall, to provide the user with a graceful recovery so they may continue work, and to allow a programmer to determine and correct the underlying cause of the problem.

To accomplish this in modern object-oriented languages you need to bubble and persist your errors. Exceptions can happen anytime and nearly everywhere: the network could go down; a SQL log file can run out of space; a server can crash. Knowing this you should never assume that your good intentions (say to update a data record) will proceed along unperturbed.

“Try-catch” all data access routines as well as any methods you are calling from another class or library. You should think ahead to include code in your catching clause to resolve those situations you have initiated in your own class. Then depending on what invokes your class, you should either rethrow the entire exception or persist the exception so that you can retrieve it later.

Of course in your outermost class you need to decide how to notify the user of the exception. If your application is all client-side then create an error notification form that supplies both a simple-language description of the error along with a “details button” that provides the whole technical error code and stack dump, including the dead module’s version and session information.

The following pattern separates system-type errors (the query crashed) from errors in business logic. This is a perfectly reasonable way to handle errors, as system level errors might indicate a more severe situation that may merit immediate technical assistance, whereas business errors may simply be a missing or mistyped parameter to a function call.

Two key elements make up this approach. The first is that all your modules that perform core business logic should implement a commonly defined custom error object. Here is an example of one I use:

public struct MyErrorObject
{
    public string ErrAgentName;
    public string ErrAgentVsn;
    public string ErrDescription;
    public int ErrSeverity;   // 1 = warning, 2 = fatal
}

// In the class that carries the error (requires using System.Reflection):
public void SetErrObject(string ErrorReason, string WarnOrFatal)
{
    ThisError = new MyErrorObject();
    ThisError.ErrAgentName = Assembly.GetCallingAssembly().CodeBase;
    ThisError.ErrAgentVsn = Assembly.GetCallingAssembly().GetName().Version.ToString();
    ThisError.ErrDescription = ErrorReason;
    if (WarnOrFatal == "Fatal")
        { ThisError.ErrSeverity = 2; }
    else
        { ThisError.ErrSeverity = 1; }
}

Notice that I define a severity level that allows you, in the calling parent, to decide if you may continue processing with just a warning. Also notice that the SetErrObject logic sets version information using reflection to allow for more detailed debugging by tech support.

Secondly, methods in classes that implement this object should return a boolean to indicate success or failure. A typical implementation then would look like:

public bool ValidateParameters()
{
    if (userid.Length < 3)
    {
        string invalidUser = "User ID " + userid + " is Invalid";
        this.SetErrObject(invalidUser, "Fatal");
        return false;
    }
    // ... more validation stuff follows
    return true;
}

And the outer call to run the whole thing would look like:

try
{
    if (!ValidateParameters() && ThisError.ErrSeverity > 1)
    { return false; }                        // business-logic failure: return false
}
catch (Exception ex)
{ throw new Exception("ValidateParameters Failed!", ex); }   // system error: rethrow

Notice that we still try-catch the method and bubble up errors along both pathways: any exception from the called method gets thrown upward in the catch clause, and if the error was a business logic failure we return “false” to the user interface.

If your application is running on a server then have the top module send appropriate email notification to tech support. If your app is a stateless service then you should persist the error to a database (along with the session key naturally); furthermore you should provide an additional method to retrieve and send this info to the client side on a subsequent request.
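For the stateless-service case, the persist-and-retrieve idea might be sketched like this in Python, with an in-memory SQLite table standing in for the real error store (the function names and schema here are my own invention, not a prescribed design):

```python
import sqlite3

db = sqlite3.connect(":memory:")   # stands in for the real error database
db.execute("CREATE TABLE errors (session_key TEXT, severity INT, description TEXT)")

def persist_error(session_key, severity, description):
    """Called by the top module when a request fails."""
    db.execute("INSERT INTO errors VALUES (?, ?, ?)",
               (session_key, severity, description))

def get_last_error(session_key):
    """The extra method the client calls on a subsequent request."""
    row = db.execute("SELECT severity, description FROM errors "
                     "WHERE session_key = ? ORDER BY rowid DESC LIMIT 1",
                     (session_key,)).fetchone()
    return row   # None if no error was recorded for this session

persist_error("sess-42", 2, "ValidateParameters failed")
print(get_last_error("sess-42"))
```

The session key is what lets a stateless service tie the stored error back to the right client on the next round trip.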

When your application hits an error it is like a sick child that needs some loving attention. The difference between just bailing with a message-box and providing fully nuanced error-handling is like the difference between giving the sick kid some consommé or giving him a nice hot bowl of chunky chicken noodle soup and calling the doctor. Take the time to handle your errors with the same care you would dedicate to your own sick child.

Sunday, July 22, 2012

Artful Giving

Many useful things in our modern world are “free”: in software development you especially rely on a panoply of information (particularly when grappling your way out of a bind) pulled straight from the Internet. For “free”.

When you are a growing child your parents sacrifice their time for your benefit, to increase your skills. Society expects the same from you when you age: folks wish that you contribute as much as you have received.

Code snippets, online help, device drivers, shareware. In the end you only gain as much value from the public commons as you yourself have contributed: the knowledge embedded in others’ work only becomes real to you once you have built up your own awareness by researching and making contributions of your own.

Help your future self therefore by paying it forward: give back to the professional world the same as you would your own children.

Tuesday, June 5, 2012

Artful Deprecation

In a couple earlier posts I discussed three alternatives for handling changes to business rules in an object-oriented development environment. Now we come to the last approach, deprecation. We use deprecation when we want to "take back" a previously distributed method; we want to completely replace its implementation with something new and different.

Rather than reusing the same method name, we create (within the same class) an entirely new method with a new name. Then, in meta-compiler statements, we deprecate the old method (we prepend the "Obsolete" directive) so that if a developer attempts to use it they will receive a compiler warning recommending the new method name instead. Unlike an overload, it may not be possible to modify the old method to simply call the new method with nulls in the new arguments; do so, however, if a safe way can be found to "translate" between the two methods.
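The post's environment is C#, where the directive is the [Obsolete] attribute; the same idea sketched in Python uses the standard warnings module (the class and method names here are hypothetical):

```python
import warnings

class CustomerService:
    def get_customer_old(self, cust_id):
        """Deprecated: kept so existing callers still work."""
        warnings.warn("get_customer_old is obsolete; use fetch_customer instead",
                      DeprecationWarning, stacklevel=2)
        # A safe "translation" was found, so delegate to the replacement.
        return self.fetch_customer(cust_id)

    def fetch_customer(self, cust_id):
        return {"id": cust_id}   # stand-in for the new implementation

# Demonstrate that the old name still works but emits the warning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = CustomerService().get_customer_old(7)
print(result, len(caught))
```

Callers keep working during the transition, but every use of the old name nags the developer toward the replacement.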

The only drawback with deprecation is that once a team becomes accustomed to certain common method-names it becomes a hassle to have to relearn them. Usually when you deprecate a method you will make an extra effort to communicate this change to the development staff with a global notice; you may even make the effort to search the source code library for any routines that used the old method and then schedule some working time to update them.

In summary then, handle change by overloading, inheriting, propertizing, and deprecating, but use the approach that is appropriate for your situation.

Saturday, May 12, 2012

Artful Culture

In some development efforts, our goal is to deliver software that is "fun." In other words the purpose of the software is to Entertain. In most workplace environments however the goal is to deliver software that helps people get stuff done; in other words the software has Utility.

In actual cultural practice however, most developers strive to make their software a little bit of each: you want to make your software "fun enough" that folks don't get bored out of their skulls when using it while they are getting stuff done. In fact, you generally want to hit a certain sweet spot such that in addition to its Utility, what you produce is:

Culturally Appropriate, Fun, Inexpensive, Courteous, Accurate, Responsive

Unfortunately these "cultural" aspects of development are seldom if ever captured in an Analyst's "Requirements" document. They also fall a bit outside the realm of what IT management might normally consider as platform-based "non-functional" requirements.

How does this culture get baked into the software then? Well it usually takes culturally aware developers.

Monday, April 16, 2012

Artistic Utilities

One item designers consistently overlook during the redesign of a legacy system is capturing the hundreds of small utility programs in use to perform the mundane and necessary dirty work: one shot fixes, bug patches, cyclical clean up, ad hoc requests, et cetera. I suppose the consensus might be that the old utilities should be irrelevant for the new design.

Just the fact that they exist, however, shines a light into the dark corners of workarounds and the staff’s prolific ability to bend to user demands and their processing errors. When redesigning a system, therefore, at least pay attention to the general circumstances and conditions under which utilities get invoked, and plan to support a similar panoply of solutions in the new environment.

Tuesday, March 13, 2012

Artistic Cycles

Some software components have nothing to do with business requirements per se, but are rather reactionary coding after the fact to handle the daily, monthly, and seasonal cycles of production, or to prioritize the processing of certain items over others at certain times. These timing interactions can rarely be foreseen during the design stage of a project.

Hence, make sure all processes can be safely paused or scheduled. Make table entries when a process passes through various stages, so other processes may be aware of its status. Plan for processes to be stopped, rolled back, and then restarted. Finally, allow for priorities so that specific components of work can move through production at different speeds.
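Those controls can be sketched quite simply: record each stage transition in a status table and check a pause flag between stages. A Python illustration (the stage names are invented, and a plain list stands in for the real status table):

```python
import threading

status_table = []                    # stands in for the process-status table
pause_requested = threading.Event()  # set by an operator to pause the run

STAGES = ["extract", "transform", "load"]   # hypothetical cycle stages

def run_process(name):
    for stage in STAGES:
        if pause_requested.is_set():
            status_table.append((name, stage, "paused"))
            return                   # safe to restart later from this stage
        # ... do the real work for this stage here ...
        status_table.append((name, stage, "done"))

run_process("nightly-billing")
print(status_table)
```

Because every transition is recorded, other processes can see how far the run got, and a restart knows exactly which stage to resume from.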

A design artist knows it makes good sense to build in leeway for controlling production cycles even when no one has solicited it as a requirement.

Thursday, February 16, 2012

An Artistic API

Application Programming Interfaces, or APIs as they are commonly called, present interesting design challenges, both from the perspectives of a producer and from the point of view of the chain of consumers.

The immediate consumer is a developer who wants to use a certain service (say, for example, a geography API). His concern is that he gets consistent results returned in a structure that is both easy to parse and robust enough to handle a variety of error conditions. The second consumer is the business analyst, who would like any APIs that the staff use to provide adequate documentation, and for subsequent upgrades to the API to avoid deprecating functions. The third consumer is a development manager, who has concerns that API use remains in a controlled enough environment that he can hand off development to future staff, as well as avoid vendor lock-in.

The challenge therefore is how to offer powerful APIs that aren't fragile. Some of the design variability involved revolves around the choice of granularity: do you just provide a couple of methods with dozens of properties, or do you provide a dozen methods each having two or three properties?

Ah, so here comes the magic. You see, the consumer who is the developer prefers a small quantity of functions with numerous parameters: once he actually makes the effort to learn them, they are easier for him to remember and the fragility affords him job security. The analyst prefers a middle course, a dozen functions each with four or five properties, as the documentary chunks are more easily absorbed. The manager prefers a hundred functions with a couple of properties each, as he can develop separate human resources to digest subsets of the entire functionality. So think now, as the producer: who is actually the “buyer” of your API?
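The granularity trade-off is easiest to see side by side. Here is a hypothetical geography API sketched in Python, contrasting one coarse call against fine-grained wrappers (every name in this sketch is invented):

```python
# Coarse-grained: one method, many parameters (the developer's preference).
def lookup(kind, name, country=None, region=None, lang="en",
           fuzzy=False, max_results=10, include_geometry=False):
    return {"kind": kind, "name": name}   # stub result for illustration

# Fine-grained: many methods, few parameters each (the manager's preference).
def lookup_city(name, country=None):
    return lookup("city", name, country=country)

def lookup_river(name):
    return lookup("river", name)

print(lookup_city("Springfield", country="US"))
```

Same capability either way; what changes is who finds the surface easy to learn, document, and staff.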

Wednesday, January 11, 2012

An Artistic Haircut

In the quest to stay ever current of technology, companies constantly face the challenge of deciding whether to further complicate old, existing systems, or whether to scrap the old technology entirely for something new. On first inspection it is often difficult to appreciate the extent of effort that has been invested in a legacy system. Whereas on the surface the risk here would seem to be a straightforward analysis of uncertainty versus potential reward, the deeper diagnosis actually revolves around customer service issues.

Most likely a high percentage of your existing customers use only a small, contained core section of your legacy system. You want to target the 70 percent or so of these customers who you can move to the new system with the least amount of risk. Realistically, however, you should prepare both the management and your staff for the fact that the process will essentially give your revenue a 30% haircut: it is simply unprofitable to convert everyone away from the legacy system, as after the new system is close to code-complete it becomes too costly (primarily for your customer support and technical support staffs) to maintain the other 30% on the legacy.