Friday, June 16, 2017

Artistic Test Harnesses


Modern object-oriented development is primarily the cobbling together of small pieces of reusable functionality. In the midst of development (during Spring training, when you are hitting, running, and throwing all day) you implement very specific business rules in small modules. Given certain input conditions and certain characteristics of the underlying data, the invoker expects you to return the right properties or to accomplish a persisted change in the data through a specified method.

Quite frequently these classes do their heavy lifting behind the scenes without any visible interface to the user. How then do you make sure that your modules work properly? And how do you verify that they work correctly under different conditions, when called by different classes, and when the underlying data changes? This job, my friend, calls for a test harness.

A good test harness is like an automated batting cage where you can walk in, press a button for sixty mile per hour pitches, and have a machine throw consistent strikes.

Essentially a test harness is an ugly, quickly thrown-together form with a handful of buttons and data-bound fields (or perhaps a data grid) that gives you full access to every possible way to invoke your methods and to set or get your public properties.
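
A minimal sketch of the idea, in Python rather than a form-based UI: a console menu standing in for the buttons, driving a hypothetical OrderCalculator class (the class name, methods, and tax-rate rule are all assumptions for illustration).

```python
class OrderCalculator:
    """Stand-in for a business-rules class under development."""

    def __init__(self, tax_rate=0.08):
        self.tax_rate = tax_rate

    def total(self, subtotal):
        """Apply the tax rate and round to cents."""
        return round(subtotal * (1 + self.tax_rate), 2)


def harness():
    """Quick-and-dirty loop: pick an action, feed it input, see the result."""
    calc = OrderCalculator()
    actions = {
        "1": ("invoke total(subtotal)",
              lambda: calc.total(float(input("subtotal: ")))),
        "2": ("get tax_rate", lambda: calc.tax_rate),
        "3": ("set tax_rate",
              lambda: setattr(calc, "tax_rate", float(input("rate: ")))),
    }
    while True:
        for key, (label, _) in actions.items():
            print(f"{key}. {label}")
        choice = input("pick (q to quit): ")
        if choice == "q":
            break
        if choice in actions:
            print("->", actions[choice][1]())
```

Calling `harness()` from a console gives you the same poke-at-everything access a thrown-together form would: every public method and property is one menu choice away.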

I like to use a tabbed interface when I’m building a harness; frequently I develop several objects simultaneously as part of a larger library, and each tab tests the features of one of those objects. The complete form thus covers a large chunk of the library. Click a different tab and a slower or faster pitch comes your way.

The best part about a harness, aside from the convenience it affords unit testing, is that in the throes and craziness of implementation you can use it for quick and dirty fixes (invoking methods directly rather than bothering with a fully implemented interface). Yes, you can always take a shortcut and set debug checkpoints to carry out unit tests. But the extra flexibility and thoroughness a test harness affords, even though it takes an extra day to throw together, is always well worth the effort.


Wednesday, May 17, 2017

Artistic Options


In your browser (under the Tools menu) you will find an item that says "Options." And when you are developing code for your employer using the latest IDE, say Visual Studio, you can naturally set the look-and-feel "options." So how is it that the software you create for your coworkers leaves out an Options setting?

Maybe personalization matters less when you have two hundred clients rather than two million, but I also believe that a large part of the failure to personalize stems from deliberate scope restrictions in the first place. In other words, if only twenty percent of the users need to see "purchase history," then we segregate it onto a separate screen. It's as if we pulled the oysters off the regular menu because only five patrons regularly order them, and they know how to ask for them anyway.

Occasionally during design, however, you will discover that several groups of people work on essentially the same subset of data, although you will hear preferences expressed in terms of "I'd rather see it this way." I'd rather have a pulldown list. I'd rather have all the ship-to data on the side.

Heck, even restaurants that have a tightly limited menu allow you the flexibility to build other victuals given their ingredients. Go to In-n-Out and you can get a 4 by 4 or Animal Style without the carhop even blinking an eye.

Sometimes screens can be cleaner (and easier to code) if you design from the start with user-selected optional components or views in mind. You will please more folks if you allow for personalization: expand their options!
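
One lightweight way to sketch this: a set of application defaults that each user's saved choices are merged over. The option names below (purchase history visibility, ship-to placement, pulldown versus list) are hypothetical, echoing the preferences mentioned above.

```python
# Application-wide defaults; every user starts here.
DEFAULT_OPTIONS = {
    "show_purchase_history": False,   # only ~20% of users want this screen
    "ship_to_placement": "top",       # some would rather see it on the side
    "state_selector": "listbox",      # others prefer a pulldown list
}


def effective_options(saved_overrides):
    """Merge a user's saved choices over the defaults.

    Unknown keys are dropped so a stale saved profile cannot
    inject options the application no longer supports.
    """
    options = dict(DEFAULT_OPTIONS)
    options.update(
        {k: v for k, v in saved_overrides.items() if k in DEFAULT_OPTIONS}
    )
    return options


# A user who asked for a pulldown and side-placed ship-to data:
prefs = effective_options(
    {"ship_to_placement": "side", "state_selector": "pulldown"}
)
```

The screen-building code then consults `prefs` instead of hard-coding one layout, so adding a new optional component is one more default plus one conditional, not a new screen.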


Tuesday, April 18, 2017

Artistic Auditability


When you are designing software that updates data based on business rules, or that creates new rows of data from somewhere else, be sure to pay attention to creating a useful audit trail. Yes, this is more work than just updating the data or appending new records, but taking the care to do so has several benefits.

Folks in the financial and security industries use the term Audit Trail to identify who accessed or changed which records, and when. A business audit trail, however, can be useful for much more than tracking down suspicious activity.

Audit trails are useful at both record-level and file-level granularity. At the record level, track the date and time that rows were created or updated, along with the name of the process performing the updates. In addition to a timestamp, updates to master tables should also include the user ID and IP address.
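
A sketch of record-level stamping, assuming hypothetical column names: every write carries when it happened and which process did it, and master-table updates additionally record who and from where.

```python
from datetime import datetime, timezone


def audit_stamp(row, process_name, user_id=None, ip_address=None):
    """Return a copy of the row with standard audit fields filled in."""
    stamped = dict(row)
    stamped["updated_at"] = datetime.now(timezone.utc).isoformat()
    stamped["updated_by_process"] = process_name
    if user_id is not None:
        # Master-table updates also carry the user ID and IP address.
        stamped["updated_by_user"] = user_id
        stamped["updated_from_ip"] = ip_address
    return stamped


# A batch process touching a transactional row:
batch_row = audit_stamp({"order_id": 1001, "status": "shipped"},
                        process_name="nightly-shipment-sync")

# A person editing a master record:
master_row = audit_stamp({"customer_id": 42, "terms": "net30"},
                         process_name="customer-maintenance",
                         user_id="jsmith", ip_address="10.0.0.5")
```

Funneling every write through one stamping helper keeps the audit columns consistent across the application instead of each screen inventing its own.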

When you are programmatically making a global change to a file behind the scenes, create an easily identifiable annotation in a new, otherwise unused field.

If you are executing complicated business logic that determines activity with potential material impact on a customer, store the intermediate values along with timestamp and version information. Be sure to save this information both for affirmative actions and for those that, in the end, result in no activity (sometimes the questions are about why things didn't happen).
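
For instance, a rule that decides whether to place a credit hold might log its inputs, its intermediate calculation, its version, and its outcome on every evaluation, including the evaluations that decide to do nothing. The rule, its threshold, and the field names here are all hypothetical.

```python
from datetime import datetime, timezone

RULE_VERSION = "credit-check 2.3"   # hypothetical rule version string


def decide_credit_hold(balance, credit_limit, decision_log):
    """Decide whether to place a credit hold, logging how we got there."""
    utilization = balance / credit_limit
    action = "place_hold" if utilization > 0.9 else "no_action"
    decision_log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "rule_version": RULE_VERSION,
        "inputs": {"balance": balance, "credit_limit": credit_limit},
        "intermediate": {"utilization": round(utilization, 4)},
        "action": action,   # recorded even when nothing happens
    })
    return action


log = []
decide_credit_hold(950, 1000, log)   # over the threshold: hold placed
decide_credit_hold(100, 1000, log)   # under it: no action, still logged
```

When Support later asks "why wasn't this customer held?", the second log entry answers the question directly, with the exact numbers and rule version in force at the time.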

For file-level auditing it is often useful to have a logging table that tracks the batch processing of files, recording a timestamp and overall record counts. Files passed between systems should carry a trailer record containing both a record count and a version indicator from the program that created it.
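
A sketch of the trailer idea, with an assumed pipe-delimited layout and version string: the producer appends the trailer, and the receiver can compare the claimed count against what actually arrived.

```python
import io

PROGRAM_VERSION = "extract-1.4"   # hypothetical producing-program version


def write_with_trailer(records, out):
    """Write data rows, then a trailer the receiving system can verify."""
    count = 0
    for rec in records:
        out.write(rec + "\n")
        count += 1
    out.write(f"TRAILER|{count}|{PROGRAM_VERSION}\n")
    return count


def verify_trailer(lines):
    """Check that the trailer's record count matches the rows received."""
    tag, claimed, version = lines[-1].split("|")
    return tag == "TRAILER" and int(claimed) == len(lines) - 1


buf = io.StringIO()
write_with_trailer(["A|1", "B|2", "C|3"], buf)
lines = buf.getvalue().splitlines()
```

A truncated transfer fails the count check immediately, and the version field tells you which build of the extract program produced the file you are debugging.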

Audit trails assist in ferreting out performance problems and flaws in applications. They are also valuable research tools when a Support department needs to explain why a particular behavior occurred (or did not occur).