Saturday, December 22, 2012

Artistic Language


It happens pretty rarely in one’s career, but perhaps once or twice you will find yourself in the unenviable position of selecting a development language for yourself and your cohorts. Nowadays most shops use C# or Java, but back in the day when our choices were Cobol, Basic, or Fortran, I had the pleasure of making this dubious analysis. And no doubt as hardware and languages progress we’ll reach another tipping point where you may find yourself selecting between a panoply of competing next generation languages, so I share this in the spirit of support for when you need to cross that bridge.

Choosing a new language to use is as distressing as having to select a new city for relocating. You can research the weather, travel blogs, crime statistics, and a hundred other metrics, but you only fully appreciate the flavor of the place after you've lived there a couple of years.

Languages bring to the table a variety of capabilities. Part of the challenge in making a selection is that you need to be fairly conversant in the languages you are reviewing so that you can both define the nature of the capabilities and then rate the features of the languages across those dimensions. Metrics I have used in the past include:

+ complexity of math library
+ readability of code
+ flexible variable typing
+ multithreading support
+ security framework
+ development environment tools
+ trace and debugging support
+ compilation speed and complexity
+ runtime distributable
+ execution speed
+ step-through execution
+ data entry validation masks
+ extensive sort support
+ integrated dictionaries
+ versioning
+ vendor commitment to backward compatibility

Yeah, it's a lot to ask of a language vendor, but once you select one you will be making a major commitment of your working life to this tool. Perhaps it is precisely due to the difficulty of this decision that developers become pundits. If you wish to be continually marketable though, it's smarter to stay flexible and multilingual. A software language is a sophisticated tool: respect its art but be cosmopolitan.
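To make the comparison concrete, it can help to reduce the ratings to a simple weighted score per candidate. The sketch below only illustrates that arithmetic; the weights, metric names, and scores are hypothetical placeholders, not a recommendation of any particular language:

// Hypothetical weighted scoring of candidate languages across selected metrics.
using System;
using System.Collections.Generic;
using System.Linq;

class LanguageScorer
{
    static void Main()
    {
        // Weights reflect how much your shop cares about each dimension.
        var weights = new Dictionary<string, double>
        {
            { "readability", 3.0 },
            { "multithreading", 2.0 },
            { "tooling", 2.5 },
            { "backward compatibility", 1.5 }
        };

        // Ratings per candidate on a 1-5 scale; entirely made-up numbers for illustration.
        var ratings = new Dictionary<string, Dictionary<string, double>>
        {
            { "Language A", new Dictionary<string, double> { { "readability", 4 }, { "multithreading", 3 }, { "tooling", 5 }, { "backward compatibility", 4 } } },
            { "Language B", new Dictionary<string, double> { { "readability", 5 }, { "multithreading", 4 }, { "tooling", 3 }, { "backward compatibility", 2 } } }
        };

        foreach (var candidate in ratings)
        {
            double total = weights.Sum(w => w.Value * candidate.Value[w.Key]);
            Console.WriteLine(candidate.Key + ": " + total);
        }
    }
}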


Friday, December 7, 2012

The Art in Modularity


Good software developers tend to be organizing freaks: everything goes in a tiny little box in its proper place. You know the saying “there’s no limit to how complicated things can get, on account of one thing always leading to another.” This is especially true in programming.

To combat this creeping complexity we strive for modularity and maintainability. A small module that encloses a specific business rule is easier for testing and allows for greater flexibility of reuse. When modules become too small however, the sheer act of tracking them, organizing their use, maintaining consistent versions, and finding where a business rule gets implemented interferes with your productivity.

Somewhat counter-intuitively smaller modules also need greater external documentation, mainly to track the nature of the parameters passed between them.

What’s the right size for a module then? And how many modules should a system have? Although I like a module to be between a half and five printed pages it really depends upon two things: how many people are maintaining the software and the density of the development language.

The fewer the number of developers the larger modules can be. Each of you is baking your own cake. So if it’s just the two of you maintaining a legacy billing system written in an old dialect of Basic (that gets about as complicated as a GOSUB and a CALL) then sure, go with the 5-page modules.

But if a half-dozen of you are planning to maintain ASP web pages in C# (with overloads and inheritance) then you'd better veer closer to the half-page module size. In that case you're not each baking a cake; you're all contributing to building a car. You need to be able to swap in new replacement parts when the old ones wear thin. Every once in a while step back to review how you are developing the code: are you using appropriately sized modules?


Saturday, November 24, 2012

Artistic Non


The non-functional requirements in software are rather like the things that we shop for when we go buy a car. We rather assume that it comes equipped with wheels, an engine, and brakes… these are functional requirements. Beyond that though we are looking for more: do we like the color, how it feels on the road, its gas mileage, its styling, whether to lease or to buy.

In substance the main non-functional requirements pretty much remain the same between large software projects:

- the software has to last ten years across a changing variety of display devices;
- it has to remain maintainable and enhanceable without it turning into spaghetti code;
- it should fail gently and recover fully;
- it needs a consistent interface that is intuitively easy to use.

Non-functional specs deal less with the “what” and more with the “how”. Hence the responsibility for defining their implementation tends to fall rather less upon the business analysts and more upon the IT management team.

- The philosophical battle over the non…

Just as folks can have finely honed sensibilities about their automobiles, a manager can argue the non-functional requirements from quite different perspectives depending on their personal philosophy.

Swaying the choice of styling is how the manager regards acts of creative destruction. If his experience suggests that leading edge regrowth ultimately compensates for the havoc caused by destroying legacy systems or jobs then he may bend toward less maintainability, static display devices, and higher levels of ease of use. He is focused more on the shorter term.

Some managers relish having a variety of platforms and tools yet others prefer a restrictive “vanilla” operation. Those tending toward variety may favor looser interface standards, but they may prefer a better corroborated sense of maintainability. They are focused more on organic growth.

Managers tend to sanction different turnover velocities, both with respect to employees and subsystems. It’s lease or buy in a different context. Those that favor a faster turnover may care less about maintainability but will tend toward common interface components. They are focused on punctuated evolution.

Finally some managers have a high aversion to risk. These will pay much greater attention to failure and recovery modes and request more thorough testing across the ever growing panoply of display devices. They are focused on safety. Generally then, the vectors driving the politics of "non" push along the four independent dimensions of creative destruction, acceptance of variety, turnover speed, and risk aversion.

What kind of car does your manager drive?


Wednesday, November 7, 2012

Artful Tool Use


A work acquaintance once posited “if we have the stupid people develop it, then it will be easier to maintain. If the smart people develop it then only the smart people will be able to maintain it.”

One of the classic management dilemmas remains how to select the creative workforce for a long-term project (or in software, a never-ending project) such that the system will not only stay maintainable but that the developers can accomplish the project in a subtle and efficient manner. It's rather a deep question.

Your philosophy toward tool use and the Development Lifecycle will shape how you answer this question in your own surroundings. Imagine a chart with company strategy on one axis and expected system longevity on the other axis. Then we should ideally expect the mix of tool use and the level of employee expertise to look something like this:


[Chart omitted: mix of tool use and employee expertise plotted against company strategy and expected system longevity. Legend: HT = high tool use, ME = medium expertise]

At first glance this chart makes little sense: the ranges of expertise and tool use jump about and seem to lack any expected sort of gradient. This is attributable to the two situations where tools make a good bet: first, when getting a system out fast has high strategic value, and second, when time or money are likely to constrain your resources.

Similarly two situations require a higher level of expertise: when a system has a very specific focused utility and when the company needs the software to survive an expected future path of high business volatility.

Choose your tools and your creative workforce to match your company’s strategic plan. This is a key goal of I.T. strategy alignment and helps to avoid the dreaded swamp monster of impedance mismatch.


Wednesday, October 24, 2012

The Art in Empathy


Continual empathy for your clients is a key component of analysis. A separate entity from your sponsor, the clients are the people who will directly wield your software. Sometimes these folks overlap with the people who paid for the development and sometimes they don't. Regardless, you would do well to heed your clients' desires, as in the long run it is the clients, rather than your sponsor, who keep you employed.

This may strike a slightly odd chord, but it is akin to pointing out that you are more accountable for keeping your car running than the bank that lent you the money. All the bank cares about is that their loan is paid back with interest. But you will be the person driving your car and how you care for it and drive it determines how long the car will last. Similarly how well your project meets the needs of your clients will determine how long your software stays useful, even though your sponsor is providing the financing.

Amazingly enough the client enjoys relating their desires to someone who expresses a genuine interest in their needs and their working environment. You should try to discover what keeps the client awake at night: what are his worries. Nothing pleases a client like a designer who knows his workday life in detail, his daily routines, and what he faces in the way of competition and challenges in his office.

Analyze the big picture, not just the snapshot relevant to what you perceive as the scope of your project. Anticipate what the client wants: don’t wait for them to ask. Finally in those instances where you’ve identified a can of worms without a ready solution, consider what alternatives are available to monitor, mitigate, or manage the issue.

Interviewing for requirements seems simple enough, yet actually listening to a client discuss his desires takes a considerable amount of insight. You need to understand the sociology of his office and the things he wishes he had but doesn’t know how to ask for. Many times an employee is shy to relate his frustrations after finding that doing so in the past only set him apart as a complainer. You need to commiserate with him (without being smarmy or condescending) and then step back and independently think of ways to improve his working life.

Design extends beyond its up-front activity: once you’ve installed the initial version of your software you need to revisit the client to assess the full impact of your creation on his daily activities. The artistic thing to do is to create a system that your client appreciates long after you are gone. Continued empathy is the key for doing so.


Tuesday, October 9, 2012

Artful Stress


How much can you pump into that water balloon before it pops? Wouldn't you like to know this before you get into the water balloon fight? Stress testing software is both an art and a science. Some of the problems you are trying to ferret out are subtle and intermittent and may depend upon environmental loads that are outside of your direct control. To run a fully representative stress test, therefore, you may need to wait until your full production environment is up and running.

At the same time though, if you delay this crucial piece of quality control until late in the development cycle then you risk being so locked in to an architecture that you are prevented from making effective changes.

To avoid such lock-in I like running simulated stress tests as early as possible. Once you have the physical database designs and user interface for updating the data, simulate the expected load and see where locks and waits occur. If you rely on outside web services then load them up with a few times your expected average use to see how they respond when under peak loading.
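As a rough sketch of that kind of early simulation (the endpoint, call count, and output are hypothetical placeholders, not from any particular project), you can fire a burst of concurrent requests at an outside service and watch how the response times degrade under several times the expected average use:

// Minimal concurrent load sketch: call a (hypothetical) outside web service
// many times in parallel and record how the response times hold up.
using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class StressSketch
{
    static async Task Main()
    {
        const string serviceUrl = "https://example.com/api/lookup";  // hypothetical endpoint
        const int concurrentCalls = 50;                               // several times average load

        using (var client = new HttpClient())
        {
            var timings = await Task.WhenAll(
                Enumerable.Range(0, concurrentCalls).Select(async i =>
                {
                    var watch = Stopwatch.StartNew();
                    using (var response = await client.GetAsync(serviceUrl))
                    {
                        watch.Stop();
                        return watch.ElapsedMilliseconds;
                    }
                }));

            Console.WriteLine("avg " + timings.Average().ToString("F0") + " ms, max " + timings.Max() + " ms");
        }
    }
}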

Maybe consider extrapolating from some smaller balloons before you fill up your real soaker, eh? Complicate the process by running large concurrent batch jobs on the same server or run a virus scan, a backup, or a defragmentation utility.

Another effective way to avoid scalability issues is to restrict developers to an undersized server, limited in speed and memory. Be aware however that although this is a fair test of design efficiency it's not really a fair test of deadlock avoidance or failover cascades. Often these have to be directly simulated with deliberately bugged code to make sure a future developer doesn't inadvertently pull the whole system down.
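For that last kind of test, a deliberately bugged sketch like the following (class and lock names invented for illustration) can manufacture a classic deadlock by taking two locks in opposite order, so you can confirm that your monitoring detects the hang rather than letting it pull the system down:

// Deliberately bugged code: two threads acquire the same two locks in opposite
// order, producing a deadlock you can use to exercise detection and recovery.
using System;
using System.Threading;

class DeadlockDrill
{
    static readonly object LockA = new object();
    static readonly object LockB = new object();

    static void Main()
    {
        var first = new Thread(() =>
        {
            lock (LockA)
            {
                Thread.Sleep(100);   // give the other thread time to grab LockB
                lock (LockB) { Console.WriteLine("first finished"); }
            }
        });

        var second = new Thread(() =>
        {
            lock (LockB)
            {
                Thread.Sleep(100);   // give the other thread time to grab LockA
                lock (LockA) { Console.WriteLine("second finished"); }
            }
        });

        first.Start();
        second.Start();

        // A watchdog join with a timeout stands in for whatever monitoring you use.
        bool healthy = first.Join(TimeSpan.FromSeconds(5)) && second.Join(TimeSpan.FromSeconds(5));
        Console.WriteLine(healthy ? "no deadlock" : "deadlock detected");
        Environment.Exit(healthy ? 0 : 1);   // force exit so the hung threads don't keep the process alive
    }
}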

Artfully stress testing your system early will make you sleep stress free once it is in production. And keep you from getting soaked in hot water later.


Monday, September 24, 2012

Artful Hacks


Some I.T. personnel are as far apart as the East is from the West. Three metrics separate the developers from the management:

+ Easy to use
+ Fast
+ Proprietary

You see from management's perspective, software and file formats should be fast, easy to use, and universally available. In this way the company avoids being locked in to knowledge experts or particular vendors.

The developer however actually prefers the opposite: software and file formats that are cumbersome, resource intensive, and proprietary. In this way he can become an expert and increase his value to his employer.

So with both sides pulling in opposite directions what actually gets developed, what can possibly morph into physical being? Somewhere amidst the massive amounts of energy used in the politics and positioning of the two sides something springs up in-between.

Irrespective of the methodology employed or the layers of middle management helping to assuage differences, software developed in a corporate environment ends up partially complicated in the more obscure parts, just fast enough to get the job done, and reliant upon one or two inside hacks. It can be no other way.


Wednesday, September 5, 2012

Artful Sociology


Earlier I mentioned the three branches of a full-dress analysis. Of these the Sociological Analysis can often be the most challenging. Done right, a full-dress analysis is like a sublime pancake and sausage breakfast. The first course though, the Sociological analysis, is like that very first batch of pancakes you pour: the one that ends up slightly brown in the middle and rubbery and white around the edges. Although eventually it becomes a throw-away (completely subservient to the Strategic and Object-oriented Analyses) you still have to perform it.

The Sociological analysis often presents dichotomies that are difficult to reconcile. The intent of such an analysis might be to direct some aspects of work toward (and away from) specific employees; hence the analysis can be highly politicized. This argues for having this phase of analysis performed by a consultant outside of your regular organization.

The flip side of this argument however is that it can take several years in a large organization to become familiar with the strengths, weaknesses, approaches, and long-term potential of each employee. This argues more for assigning this analytical task to an existing employee who demonstrates sensitivity to these matters. Regardless I would say to treat your sociological analysis as if it were a beautiful rose on a bush you know you will certainly prune way back in the springtime.

How does one go about performing a sociological analysis -- what should such an analysis include?

* A map of the work flow: what gets moved to where, by whom, and under what conditions.

* A review of how working hours mesh with constraints on commuting and telecommuting.

* Constraints imposed by contractual arrangements.

* The Management span of control and interest.

* Social cliques and off-hours affiliations.

* Accepted and customary levels of documentation.

* Spheres of influence building.

* Salary discrepancies.

* Communication methods up and down the line.

* Openness or resistance to criticism, and quality improvement effectiveness.

* Interaction of the company with the community.

* Discrepancies between ideas and actions.

* Flexibility, rigidity, and degree of being organized.

Yes, this seems like a mumbo-jumbo of collective traits that inform an exposition yet fall short of any development guidance; they don't point to what the designers necessarily should do next. Still, since they can help set the tone for ongoing design work (and can raise alarms about potential potholes along the way) such an analysis is certainly beneficial and can be a major contributor to the lasting success of a large and complicated mission-critical system.


Tuesday, August 21, 2012

Artful Recovery


Error handling is not rocket science and yet most developers get this so far wrong as to be embarrassing. The real purpose of error handling is threefold: to limit the impact of unanticipated interruption on processing overall, to provide the user with a graceful recovery so they may continue work, and to allow a programmer to determine and correct the underlying cause of the problem.

To accomplish this in modern object-oriented languages you need to bubble and persist your errors. Exceptions can happen anytime and nearly everywhere: the network could go down; a SQL log file can run out of space; a server can crash. Knowing this you should never assume that your good intentions (say to update a data record) will proceed along unperturbed.

“Try-catch” all data access routines as well as any methods you are calling from another class or library. You should think ahead to include code in your catching clause to resolve those situations you have initiated in your own class. Then depending on what invokes your class, you should either rethrow the entire exception or persist the exception so that you can retrieve it later.

Of course in your outermost class you need to decide how to notify the user of the exception. If your application is all client-side then create an error notification form that supplies both a simple-language description of the error along with a “details button” that provides the whole technical error code and stack dump, including the dead module’s version and session information.

The following pattern separates system-type errors (the query crashed) from errors in business logic. This is a perfectly reasonable way to handle errors, as system level errors might indicate a more severe situation that may merit immediate technical assistance, whereas business errors may simply be a missing or mistyped parameter to a function call.

Two key elements make up this approach. The first is that all of your modules that perform core business logic should implement a commonly defined custom Error object. Here is an example of one I use:

using System.Reflection;    // needed for Assembly

public struct MyErrorObject
{
    public string ErrAgentName;     // assembly that raised the error
    public string ErrAgentVsn;      // version of that assembly
    public string ErrDescription;   // plain-language reason
    public int ErrSeverity;         // 1 = warning, 2 = fatal
}

// Class-level field exposing the most recent error to the calling parent.
public MyErrorObject ThisError;

public void SetErrObject(string ErrorReason, string WarnOrFatal)
{
    ThisError = new MyErrorObject();
    ThisError.ErrAgentName = Assembly.GetCallingAssembly().CodeBase;
    ThisError.ErrAgentVsn = Assembly.GetCallingAssembly().GetName().Version.ToString();
    ThisError.ErrDescription = ErrorReason;
    if (WarnOrFatal == "Fatal")
        { ThisError.ErrSeverity = 2; }
    else
        { ThisError.ErrSeverity = 1; }
}

Notice that I define a severity level that allows you, in the calling parent, to decide if you may continue processing with just a warning. Also notice that the SetErrObject logic sets version information using reflection to allow for more detailed debugging by tech support.

Secondly, methods in classes that implement this object should return a boolean to indicate success or failure: a typical invocation then would look like:

public bool ValidateParameters()
{
    if (userid.Length < 3)
    {
        string invalidUser = "User ID " + userid + " is Invalid";
        this.SetErrObject(invalidUser, "Fatal");
        return false;
    }
    // … more validation stuff follows
    return true;    // all checks passed
}

And the outer call to run the whole thing would look like:

try
{
    if (!ValidateParameters())
    {
        // Business-logic failure: only stop on a fatal severity.
        if (ThisError.ErrSeverity > 1) { return false; }
    }
}
catch
{
    // System-type failure: bubble the exception upward.
    throw new Exception("ValidateParameters Failed!");
}

Notice that we still try-catch the method and bubble up errors along both pathways: any exception from the called method gets thrown upward in the catch clause, and if the error was a business logic failure we return “false” to the user interface.

If your application is running on a server then have the top module send appropriate email notification to tech support. If your app is a stateless service then you should persist the error to a database (along with the session key naturally); furthermore you should provide an additional method to retrieve and send this info to the client side on a subsequent request.

When your application hits an error it is like a sick child that needs some loving attention. The difference between just bailing with a message-box and providing fully nuanced error-handling is like the difference between giving the sick kid some consommé or giving him a nice hot bowl of chunky chicken noodle soup and calling the doctor. Take the time to handle your errors with the same care you would dedicate to your own sick child.


Tuesday, August 7, 2012

Artful Commitment


People arrive at the ocean of project management from various rivulets of experience. Some have navigated programming or quality assurance while others set out with a business management education. Regardless of how you get there, once you have responsibility for overseeing the movement of creative team processes you find yourself in the delicate position of asking people to do things.

To a certain extent folks enjoy being challenged and pushed. They dislike, however, being stressed and overworked. The trick is to find the sweet spot where you commit them to the right amount of pressure. At the same time folks have their own personal quirks and either tend to over-commit themselves or alternatively play up the amount of time that they are being productive when they are actually socializing instead.

It’s sort of management science: if you don’t assign the work then who is going to do it? Obviously you want to show some efficiency in completing the workload, but you also want to maintain good morale. This engages several competing foci and proves to be a riveting point in project management.

And I mean riveting in the sense of steel beam construction: the rivets only go through the steel in strategic locations, and for good reason. A rivet in the wrong place — too close to other rivets or adjacent to a concentration of shear — will weaken rather than strengthen a structure.

It’s your job when you are managing a project to rivet the team together to make sure that all of the pieces, players, and components are carrying their load under equal stress.


Sunday, July 22, 2012

Artful Giving


Many useful things in our modern world are “free”: in software development you especially rely on a panoply of information (particularly when grappling your way out of a bind) pulled straight from the Internet. For “free”.

When you are a growing child your parents sacrifice their time for your benefit, to increase your skills. Society expects the same from you when you age: folks wish that you contribute as much as you have received.

Code snippets, online help, device drivers, shareware: in the end you only gain as much value from the public commons as you yourself have contributed. That value reveals itself through subtle activities and the methodology behind them, yet the methodology only takes hold in your awareness once you research and craft contributions of your own.

Help your future self therefore by paying it forward: give back to the professional world the same as you would your own children.


Monday, July 9, 2012

The Art in Algebra


Some aspects of software design feel very much like being mired in the depths of linear algebra. Once you solicit the requirements from the interested parties (when you think you know who is responsible for what) and when you establish the business rules that determine which user input you require for which circumstances, you end up with a hundred slips of paper that you somehow need to organize across three dimensions. Time to drag out the UECF matrix.

This is more of a mental process than an actual formula that delivers a specific solution. UECF stands for user-event component filter: it means organizing your system along the axes of roles (users), events (a customer places an order, an employee gets a raise), and components (what you conceptualize for the building blocks of your system). Or in object-oriented design we traditionally call these the use cases.

The essence of the problem is how to translate the use cases into various screen interface designs. This gets compounded not only by “who can set what” but also by the level of importance of each data field from the varying perspectives: we want important items at the top left on the screen, but what is important for one role may be superfluous for another. The usual answer is to wireframe a solution.

But this falls short of actually resolving the underlying algebra: how to organize screen elements as /reusable/ components. With wireframe modeling you end up with the blind men and an elephant analogy: each user describes a screen that appears correct, but only from the limited exposure of their own responsibilities. The onus of standing up for a legitimate component view as it relates to the interface therefore falls squarely on the shoulders of the designer. You and only you are responsible for designing the environment that will afford proper care to the whole elephant.

Group logically reusable fields into a “tab” of a localized sub form or cordoned off area, where appropriate. Hey friend, the customer name, address, phone, and eMail are all a single common logical unit: group them together into a “control.” Of course the same is true for other groups of fields within your business. And create a matrix… do the algebra!
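For illustration only (the class and field names are invented, not from any particular system), the "control" idea can be reduced to a single reusable component that every role's screen includes, rather than four loose fields redeclared on every wireframe:

// Hypothetical reusable grouping of the customer-contact fields.
// Every (user, event) cell of the UECF matrix that needs customer contact
// information reuses this one component instead of laying out the fields again.
public class CustomerContactControl
{
    public string Name { get; set; }
    public string Address { get; set; }
    public string Phone { get; set; }
    public string Email { get; set; }

    // Shared validation lives with the component, so every screen behaves the same.
    public bool IsComplete()
    {
        return !string.IsNullOrEmpty(Name) && !string.IsNullOrEmpty(Phone);
    }
}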


Tuesday, June 5, 2012

Artful Deprecation


In a couple earlier posts I discussed three alternatives for handling changes to business rules in an object-oriented development environment. Now we come to the last approach, deprecation. We use deprecation when we want to "take back" a previously distributed method; we want to completely replace its implementation with something new and different.

Rather than reusing the same method name, we create (within the same class) an entirely new method with a new name. Then in meta-compiler statements we deprecate the old method (we prepend the "Obsolete" directive) so that if a developer attempts to use it they will receive a pop-up that recommends that they use the new method name instead. Unlike an overload, it may not be possible to modify the old method to call the new method with nulls in the new arguments. Do so, however, if a safe way can be found to "translate" between the two methods.
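In C#, for instance, that directive is the Obsolete attribute. A minimal sketch (the method names are invented for illustration) might look like this, including the optional forwarding when a safe translation exists:

// Hypothetical example of deprecating an old method in favor of a new one.
using System;

public class BillingCalculator
{
    // Deprecated: callers now get a compiler warning steering them to the new name.
    [Obsolete("Use CalculateInvoiceTotal instead.")]
    public decimal CalcTotal(decimal amount)
    {
        // A safe translation exists here, so the old method simply forwards.
        return CalculateInvoiceTotal(amount, 0m);
    }

    // The replacement method, with a new name and a different signature.
    public decimal CalculateInvoiceTotal(decimal amount, decimal discountRate)
    {
        return amount * (1m - discountRate);
    }
}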

The only drawback with deprecation is that once a team becomes accustomed to certain common method-names it becomes a hassle to have to relearn them. Usually when you deprecate a method you will make an extra effort to communicate this change to the development staff with a global notice; you may even make the effort to search the source code library for any routines that used the old method and then schedule some working time to update them.

In summary then, handle change by overloading, inheriting, propertizing, and deprecating, but use the approach that is appropriate for your situation.


Saturday, May 12, 2012

Artful Culture


In some development efforts, our goal is to deliver software that is "fun." In other words the purpose of the software is to Entertain. In most workplace environments however the goal is to deliver software that helps people get stuff done; in other words the software has Utility.

In actual cultural practice however, most developers strive to make their software a little bit of each: you want to make your software "fun enough" that folks don't get bored out of their skulls when using it while they are getting stuff done. In fact, you generally want to hit a certain sweet spot such that in addition to its Utility, what you produce is:

Culturally Appropriate, Fun, Inexpensive, Courteous, Accurate, Responsive

Unfortunately these "cultural" aspects of development are seldom if ever captured in an Analyst's "Requirements" document. They also fall a bit outside the realm of what IT management might normally consider as platform-based "non-functional" requirements.

How does this culture get baked into the software then? Well it usually takes culturally aware developers.

Monday, April 16, 2012

Artistic Utilities


One item designers consistently overlook during the redesign of a legacy system is capturing the hundreds of small utility programs in use to perform the mundane and necessary dirty work: one shot fixes, bug patches, cyclical clean up, ad hoc requests, et cetera. I suppose the consensus might be that the old utilities should be irrelevant for the new design.

Just the fact they exist however shines a light into the dark corners of workarounds, and into the staff's prolific ability to bend to user demands and to clean up their processing errors. When redesigning a system therefore, at least pay attention to the general circumstances and conditions where utilities get invoked, and plan to support a similar panoply of solutions in the new environment.


Tuesday, March 13, 2012

Artistic Cycles


Some software components have nothing to do with business requirements per se, but are rather reactive coding after the fact to handle the daily, monthly, and seasonal cycles of production, or to accommodate processing some items at certain times ahead of others. These timing interactions can rarely be foreseen during the design stage of a project.

Hence, make sure all processes can be safely paused or scheduled. Make table entries when a process passes through various stages, so other processes may be aware of its status. Plan for processes to be stopped, rolled back, and then restarted. Finally, allow for priorities so that specific components of work can move through production at different speeds.
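A minimal sketch of that status bookkeeping (the table name, columns, and connection string are hypothetical placeholders) might simply record a row each time a process changes stage:

// Hypothetical status bookkeeping: write a row whenever a batch process
// changes stage, so other processes can check where it stands before running.
using System;
using System.Data.SqlClient;

class ProcessStatusLog
{
    private readonly string connectionString;

    public ProcessStatusLog(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public void RecordStage(string processName, string stage)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "INSERT INTO ProcessStatus (ProcessName, Stage, RecordedAt) VALUES (@name, @stage, @at)",
            connection))
        {
            command.Parameters.AddWithValue("@name", processName);
            command.Parameters.AddWithValue("@stage", stage);   // e.g. Started, Extracted, Posted, Completed
            command.Parameters.AddWithValue("@at", DateTime.UtcNow);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}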

A design artist knows it rather makes good sense to allow leeway for controls of production cycles, even when no one has solicited them as a requirement.


Thursday, February 16, 2012

An Artistic API


Application Programming Interfaces, or APIs as they are commonly called, present interesting design challenges, both from the perspectives of a producer and from the point of view of the chain of consumers.

The immediate consumer is a developer who wants to use a certain service (say, for example, a geography API). His concern is that he gets consistent results returned in a structure that is both easy to parse and robust enough to handle a variety of error conditions. The second consumer is the business analyst, who would like any APIs that the staff use to provide adequate documentation and any subsequent upgrades to the API to avoid deprecating functions. The third consumer is a development manager, who has concerns that API use remains in a controlled enough environment that he can hand off development to future staff, as well as avoid vendor lock-in.

The challenge therefore is how to offer powerful APIs that aren't fragile. Some of the design variability involved revolves around the choice of granularity: do you just provide a couple of methods with dozens of properties, or do you provide a dozen methods each having two or three properties?

Ah, so here comes the magic. You see, the consumer who is the developer prefers a small quantity of functions with numerous parameters: once he actually makes the effort to learn them, they are easier for him to remember and the fragility affords him job security. The analyst prefers a middle course, a dozen functions each with four or five properties, as the documentary chunks are more easily absorbed. The manager prefers a hundred functions with a couple properties, as he can develop separate human resources to digest subsets of the entire functionality. So think now: as the developer, who is actually the "buyer" of your API?
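To make the granularity trade-off concrete, here is a rough sketch (the interface names and parameters are invented for illustration) contrasting a coarse-grained call carrying many properties with a set of fine-grained calls carrying only a couple each:

// Coarse-grained: one method, many parameters (the developer's preference).
public interface IGeographyServiceCoarse
{
    string Lookup(string street, string city, string region, string postalCode,
                  string country, bool includeTimeZone, bool includeCensusData);
}

// Fine-grained: many small methods, a couple of parameters each
// (easier to document in chunks and to split across separate maintainers).
public interface IGeographyServiceFine
{
    string NormalizeAddress(string street, string city);
    string LookupPostalCode(string city, string region);
    string LookupTimeZone(string postalCode);
    string LookupCensusTract(string postalCode);
}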


Wednesday, January 11, 2012

An Artistic Haircut


In the quest to stay ever current with technology, companies constantly face the challenge of deciding whether to further complicate old, existing systems, or whether to scrap the old technology entirely for something new. On first inspection it is often difficult to appreciate the extent of effort that has been invested in a legacy system. Whereas on the surface the risk here would seem to be a straightforward analysis of uncertainty versus potential reward, the deeper diagnosis actually revolves around customer service issues.

Most likely a high percentage of your existing customers use only a small contained core section of your legacy system. You want to target the 70 percent or so of these customers whom you can move to the new system with the least amount of risk. Realistically however you should prepare both the management and your staff for the fact that the process will essentially give your revenue a 30% haircut: it is simply unprofitable to convert everyone away from the legacy system, as after the new system is close to code-complete it becomes too costly (primarily for your customer support and technical support staffs) to maintain the other 30% on the legacy.