
Friday, June 16, 2017

Artistic Test Harnesses


Modern object-oriented development is largely a matter of cobbling small pieces of reusable functionality together. In the midst of development (spring training, when you are hitting, running, and throwing all day) you implement very specific business rules in small modules. Given certain input conditions and certain characteristics of the underlying data, the invoker expects each module to return the right properties or to make a specified, persisted change to the data.

Quite frequently these classes do the heavy lifting behind the scenes without any visible interface to the user. How then do you make sure that your modules work properly? And how do you verify that they work correctly under different conditions, when called by different classes, and when the underlying data changes? This job, my friend, calls for a test harness.

A good test harness is like an automated batting cage where you can walk in, press a button for sixty-mile-per-hour pitches, and have a machine throw consistent strikes.

Essentially, a test harness is an ugly, quickly thrown-together form with a handful of buttons and data-bound fields (or perhaps a data grid) that gives you full access to every possible way of invoking your methods and setting or getting your public properties.

I like to use a tabbed interface when I’m building a harness; frequently I develop several objects simultaneously as part of a larger library, and each tab tests the features of one object. The complete form thus covers a large chunk of the library. Click a different tab and a slower or faster pitch comes your way.

The best part about a harness, aside from the convenience it brings to unit testing, is that in the throes and craziness of implementation you can use it for quick-and-dirty fixes, invoking methods directly rather than bothering with a fully implemented interface. Yes, you can always take a shortcut and set debug breakpoints to carry out unit tests. But the extra flexibility and thoroughness a test harness affords, even though it takes an extra day to throw together, is always well worth the effort.
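The pattern is language-agnostic, so here is a minimal command-line sketch in Python. The `Inventory` class and the `run_harness` driver are hypothetical stand-ins: one plays the module under test, the other plays the harness that invokes its methods and records what happens, including the error paths.

```python
# Hypothetical module under test: tracks item quantities.
class Inventory:
    def __init__(self):
        self._items = {}

    def add(self, name, qty):
        if qty < 0:
            raise ValueError("quantity must be non-negative")
        self._items[name] = self._items.get(name, 0) + qty

    def count(self, name):
        return self._items.get(name, 0)


def run_harness(actions):
    """Apply (method, args) pairs to a fresh Inventory and log each result."""
    target = Inventory()
    results = []
    for method, args in actions:
        try:
            value = getattr(target, method)(*args)
            results.append(("ok", value))
        except Exception as exc:      # record the failure instead of dying
            results.append(("error", exc))
    return target, results


inv, log = run_harness([("add", ("widget", 5)),
                        ("add", ("widget", -1)),   # deliberately bad input
                        ("count", ("widget",))])
print(log)
```

A GUI harness does the same thing with buttons and data-bound fields; the essential idea is direct, repeatable access to every public method and property.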


Monday, August 15, 2016

Artful Independence


If you design things from the start with testing in mind, then your finished system will be both stronger and easier to maintain. One way to accomplish this is to provide a "testing" checkbox (or menu pulldown) in your user interface. Doing so causes a complete design cascade in how you think about development.

It provides much the same comfort level as slipping behind the wheel of a driver's training car: it feels like authentic driving, but with the added safety that if you forget to step on the brake you won't actually damage anything, because the instructor is watching out for you.

It also gives you the wonderful opportunity to test production interfaces against a development version of the backing database. This practice also promotes looser coupling between components. Of course, make it abundantly clear on the interface (say, by changing the global form background color) when the client is in test mode.

This design method does incur some additional logical overhead. Usually you will set a "Testing" Boolean and evaluate it in the places where you set, for example, the database connection strings. This means you also have to provide both the production and development paths and parameters in your config file.
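As a sketch of that overhead, assuming a config file with separate production and development sections (the section and key names here are hypothetical), the "Testing" Boolean simply selects which connection string the client uses:

```python
import configparser

# Hypothetical config holding BOTH sets of parameters, as the post advises.
CONFIG_TEXT = """
[production]
connection_string = server=prod-db;database=orders

[development]
connection_string = server=dev-db;database=orders_test
"""

def connection_string(testing: bool) -> str:
    """Return the connection string selected by the Testing flag."""
    cfg = configparser.ConfigParser()
    cfg.read_string(CONFIG_TEXT)
    section = "development" if testing else "production"
    return cfg[section]["connection_string"]

print(connection_string(testing=True))
```

The same flag would also drive the visual cue, such as swapping the form's background color whenever `testing` is true.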

The saving grace of this approach is that it largely spares you the embarrassment of the all-too-frequent mistake of accidentally copying the development parameters into the production installation. So declare your independence from installation woes: code for testing from the outset, and make the development configuration selectable from the interface.

Friday, February 12, 2016

More Artful Testing


In the software development world we have a whole cabinet full of testing tasks that can contribute to quality work. Here is a list of those most commonly used, in the approximate order in which they occur along the development cycle:

Impact analysis
Usability testing
Code review
Risk assessment
Stress testing
Test bed preparation
Test case creation
Test driver development
Functional testing
Regression testing
Boundary value analysis
Branch coverage
Error seeding
System testing
Parallel testing
Documentation review
Integration testing
Acceptance testing
Post-implementation validation

Therefore all of our software should be perfect and bug free, right? In the real world, however, our ability to spend time and money on software testing is constrained by the dynamics of staffing, skills, and delivery schedules. So, much as I described in this earlier post about appropriate balance, we must carefully choose which test tools to take from the cabinet and apply to each project. And that, my friend, is why software still has bugs.


Monday, January 12, 2015

The Art in Testing


Somewhere in the deep guts of a project you enjoy the "pleasure of unit testing." Typically the dead center of the schedule looks like this:

Approval of interface design
Various programming tasks
Unit tests
Software corrections from unit testing
System and stress testing

The nice thing about unit testing is that it is so buried in the guts of a project that managers don't pay much attention to it. Despite this, however, unit testing ultimately determines, unequivocally, the quality of your product.

Just because no one is looking doesn't mean you should slack off on the unit tests. If your software is modularized, which on a larger project is most probably the case, then you can unit test individual modules as they become available.
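A minimal sketch of per-module unit testing, using Python's built-in unittest module; the `apply_discount` business rule is a hypothetical stand-in for one small module that has just become available:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical business rule: discount must be 0-100 percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(80.0, 25), 60.0)

    def test_boundary_values(self):
        # Boundary value analysis: the edges of the valid range.
        self.assertEqual(apply_discount(50.0, 0), 50.0)
        self.assertEqual(apply_discount(50.0, 100), 0.0)

    def test_rejects_bad_input(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each module gets its own small suite like this, so the tests accumulate alongside the modules instead of piling up at the end.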

Earlier I discussed Artistic Test Harnesses, and how they not only allow for rapidly repeatable unit testing but also provide a failsafe for workarounds during implementation. Be sure to read that previous post to learn how to construct these useful utilities.

Recognize that Quality always involves work: there are no shortcuts to achieve it. It turns out that it is mostly the hidden, unmanaged, personal part of your work that primarily determines your long-term success.


Tuesday, October 9, 2012

Artful Stress


How much can you pump into that water balloon before it pops? Wouldn’t you like to know before you get into the water balloon fight? Stress testing software is both an art and a science. Some of the problems you are trying to ferret out are subtle and intermittent, and may depend on environmental loads outside of your direct control. To run a fully representative stress test, therefore, you may need to wait until your full production environment is up and running.

At the same time, though, if you delay this crucial piece of quality control until late in the development cycle, you risk being so locked into an architecture that you are prevented from making effective changes.

To avoid such lock-in I like to run simulated stress tests as early as possible. Once you have the physical database designs and the user interface for updating the data, simulate the expected load and see where locks and waits occur. If you rely on outside web services, then load them with several times your expected average use to see how they respond under peak loading.
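A simulated load can start as small as spinning up concurrent workers against a shared resource and measuring how long each one waits. In this Python sketch the lock and sleep are stand-ins for real database contention; the numbers are illustrative, not a benchmark:

```python
import threading, time, random

lock = threading.Lock()        # stands in for a contended database row/table
wait_times = []

def simulated_user():
    """One simulated user: time how long we wait to acquire the lock."""
    start = time.perf_counter()
    with lock:
        wait_times.append(time.perf_counter() - start)
        time.sleep(random.uniform(0.001, 0.005))   # pretend to do an update

threads = [threading.Thread(target=simulated_user) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"worst wait: {max(wait_times) * 1000:.1f} ms")
```

Scaling the thread count up (and the sleep times to match observed transaction costs) gives an early, if rough, picture of where the waits will pile up.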

Maybe consider extrapolating from some smaller balloons before you fill up your real soaker, eh? Complicate the process by running large concurrent batch jobs on the same server, or by running a virus scan, a backup, or a defragmentation utility.

Another effective way to avoid scalability issues is to restrict developers to an undersized server, limited in speed and memory. Be aware, however, that although this is a fair test of design efficiency, it is not really a fair test of deadlock avoidance or failover cascades. Those often have to be simulated directly with deliberately bugged code to make sure a future developer doesn't inadvertently pull the whole system down.
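Deliberately bugged code can be as simple as a fault-injection wrapper that makes a dependency fail some fraction of the time, so you can verify the caller's recovery path. Everything in this sketch (`fetch_record`, the retry policy, the seeded failure rate) is hypothetical:

```python
import random

rng = random.Random(42)   # seeded so the injected faults are repeatable

def inject_faults(func, failure_rate):
    """Wrap func so it raises ConnectionError a fraction of the time."""
    def wrapper(*args, **kwargs):
        if rng.random() < failure_rate:
            raise ConnectionError("injected fault")
        return func(*args, **kwargs)
    return wrapper

def fetch_record(key):
    return {"key": key}   # stand-in for a real database or service call

flaky_fetch = inject_faults(fetch_record, failure_rate=0.5)

def fetch_with_retry(key, attempts=5):
    """The recovery path under test: retry on injected failures."""
    for _ in range(attempts):
        try:
            return flaky_fetch(key)
        except ConnectionError:
            continue
    raise RuntimeError("all attempts failed")

print(fetch_with_retry("order-42"))
```

The same wrapper idea extends to injecting timeouts or stale data, exercising the failover logic long before production ever misbehaves on its own.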

Artfully stress testing your system early will let you sleep stress free once it is in production. And it will keep you from getting soaked in hot water later.


Friday, May 7, 2010

Artistic Patching


Most of the time when you're developing software, you get buried in writing code to flesh out a bunch of business requirements, you set up a test desktop environment, and then you slave away working through the bugs. Then you tie up a handful of loose ends, catch up a bit on your documentation, throw together some training notes, and finally comes the big install. Ach, it doesn't work in production. Nothing is more frustrating.

You go back and check your setup notes, make sure all your key tables have the right data, make sure all the network paths are mapped and exist, triple-check your configuration file settings, and still no go. What the heck could it possibly be? All the DLLs look correct, all the web services are up, and as far as you know all your cohorts installed their respective pieces.

When you have reached the end of your rope, it's time to think outside the box. The part you are overlooking is simply this: you need to apply all the service packs and Windows Updates on the production boxes. Yes, the SQL service packs as well.

At times it seems like a royal pain to hassle with the infrastructure software that forms a foundation you seldom actually think about. But the whole reason your development tools have the power they do is that they build upon this humongous hidden foundation. Do you want to avoid installation problems? Practice the art of keeping your core software fully patched.