Most new development work starts with ideas, directives, change orders, or functional specifications. From these come diagrams, wireframes, state diagrams, use cases, and UML. With all of this documentary scaffolding, one would think a person could just sit down and start coding. But for design work more involved than the standard form-based update to a database, one more step will be helpful: pseudocode.
In carpentry there's an old maxim: "measure twice, cut once." Pseudocode does that same pre-measurement for you when you are programming.
Consider for example this recent change order I received: "Send the file Monday through Friday night at the time specified in the parameter unless the Stop flag is set in the database. Allow an override flag in the database to send the file immediately without otherwise affecting the schedule."
It looks simple enough to describe, but once you start programming you will trip yourself up unless your pseudocode accounts for the last time the file was sent and the actual time of day. Don't be shy about writing pseudocode if you have any doubts about boundary conditions. And review the pseudocode with the user to help prevent misunderstandings.
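To make those boundary conditions concrete, here is a minimal sketch of that decision logic in Python. The flag and parameter names (stop_flag, override_flag, and so on) are my own inventions for illustration, not names from the actual change order:

```python
from datetime import datetime, time

def should_send(now, send_time, last_sent, stop_flag, override_flag):
    """Decide whether to send the file right now.

    now        -- the current datetime
    send_time  -- the scheduled time-of-day parameter
    last_sent  -- datetime of the last scheduled send, or None
    All names here are illustrative, not from the original spec.
    """
    if override_flag:
        # Send immediately; the caller should not record this as the
        # scheduled send, so the regular schedule is unaffected.
        return True
    if stop_flag:
        return False                  # Stop flag suppresses the schedule
    if now.weekday() > 4:             # 5 = Saturday, 6 = Sunday
        return False
    if now.time() < send_time:
        return False                  # scheduled time not reached yet
    if last_sent is not None and last_sent.date() == now.date():
        return False                  # already sent today
    return True
```

Writing it this way surfaces exactly the traps mentioned above: the last send time and the actual time of day both become explicit conditions you can walk through with the user.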
Monday, December 14, 2015
Tuesday, November 17, 2015
In the corporate world it's hard to get significant work assigned to you unless you can arrive at the right mix of accurate and acceptable estimates.
The estimation method that works best depends to a great extent on the size and scope of what you are developing. Not surprisingly, the magnitude of error in your estimate also grows with the size of the project. Small projects, such as one-shot utility programs, are best estimated by gut feeling from experience. Such tasks rarely last more than a couple of person-days.
Adding a bit of functionality to an existing system can be adequately estimated by breaking the work into component estimates: say, one day for database design, a day for the user interface, a day for testing, tweaking, and modifications -- more or less, depending on your development environment.
If you are fortunate enough to be developing a mid-sized complete system from scratch, then you can use a standard project management tool: fill out tasks and estimates, perform resource leveling, add slack, and then add an additional 15% for each resource assigned to the project.
Somewhat counterintuitively, estimation for exceedingly large projects works better if you don't go to the effort of summing the costs of individual component tasks. Very large projects are dominated by the costs of interpersonal and intersystem communication, plus the overhead of change control. For that reason a tool based on the well-researched interplay of these factors, such as COCOMO, gives the best results.
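As a sketch of the flavor of such models: basic COCOMO estimates effort from size alone. The organic-mode coefficients below (a = 2.4, b = 1.05, c = 2.5, d = 0.38) are the published textbook values; real use requires calibrating them to your own shop's history.

```python
def cocomo_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO, organic mode: person-months of effort from size in KLOC."""
    return a * kloc ** b

def cocomo_schedule(effort, c=2.5, d=0.38):
    """Basic COCOMO development time in calendar months, from the effort."""
    return c * effort ** d
```

A 100 KLOC organic project comes out to roughly 300 person-months spread over about 22 calendar months -- numbers dominated, as noted above, by communication rather than by any single component task.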
Estimation is an art that you develop mostly from experience. Remember to use the method that is appropriate for the overall size of your project.
Wednesday, October 14, 2015
Sometimes when you are dealing with the high politics of a system that might have major impact on an industry, you may run into a wall of confusing interests. How did the status quo come about? What are all the vested interests? If you change a practice, what else might be affected? When I find I have allowed my hips to become buried in the muck, first I doodle for about fifteen minutes, and then I sit myself down and draw a Reality diagram. This is a concise way to describe, on a single sheet of paper, all the forces impinging on a project. It comprises five quick sketches, each a slightly different flavor, describing the various elements and relationships surrounding the project's "reality."
The first diagram, Physical Space, shows a hypothetical activity that encapsulates a typical event ranging across a variety of actors: man A stabs man B, who seeks help from doctor C. Okay, relax: it was an accident. B works at factory D.
The second diagram shows Ethical Space, basically who has spiritual "claims" on others for their actions (or omissions).
The third diagram shows Contractual Space: the understandings, paper, and legal relationships between the parties.
The fourth diagram is more of a list: a high-level classification of the types of information each party keeps that are relevant to your system.
The last chart shows Financial space: simply how the money flows between the parties. So now you've got a piece of paper in front of you that succinctly summarizes the entire Reality surrounding your project. Does this solve anything? No. Does this tell you what to do next or make your life any easier? Probably not.
Yet I have found that this diagramming is an invaluable tool, because it shows you why things are the way that they are, the nature of their balance, and the interplay of the forces between them. It frees you from the muck because it shows you the difference between the hard and the soft limits. So unstick yourself: get real!
Wednesday, September 16, 2015
With marching orders and a vague, foggy vision of where you are headed, you are ready to create the Meta-Project. Not the project itself: the Meta-Project enwraps the actual project, creating an environment in which it can succeed. The Meta-Project is more of a thought process, something you keep to yourself, in your heart, to guide you to a successful completion. In an earlier post I described how to resolve the utility-cost-speed triad; this is the other half of the mental preparation required to launch a project.
To start you need to set the tone, the environment under which your project will proceed. Sometimes you will need to be methodical and professional, moving slowly with each step well-documented. You may need to get management reaffirmation and written approval all along the way. At other times you may need to create a blender, a whir of activity that juggles six balls at once. You may need to generate enough excitement that your coworkers get overwhelmed with enthusiasm and complete the project for you. I cannot give you guidance as to the appropriate tone for your project -- you need to use your intuition, foresight, and knowledge of your company's culture to determine what will work in your environment.
I usually find it helpful at this point to sketch out figures of my thoughts. One such drawing is a graph of "risk space": I label one axis Risk and the other Volatility, then draw circular regions indicating where each design option would land. Some projects are risky because they have a high probability of technical failure: this may be due to the increased complexity brought about by the interaction of multiple layers of software, or perhaps because the development language itself presents challenges of learning and implementation.
Some projects are risky for political reasons: they might for example shed light on why certain business units of a company are performing poorly, or the software may replace the functions of another system that already keeps several staff members gainfully employed. Some projects may be risky because the goals are ill-defined, or no single person has enough authority to assure completion, or the people in authority are themselves insecure and may not be around at the end of the project for support when you need them.
Volatility is intended as an estimation hedge. The risk for any one implementation may run from low to high: if all the factors are well known, then the volatility should be low. On some projects, however, you will get the gut feeling that you are in for a barrel of surprises, where most of the problems will surface during development or after implementation. In that case the volatility is high. "Type 3" projects tend to be volatile, because you can expect to incorporate new technologies and deal with new staff as the project develops. Some small projects can be surprisingly volatile, not from anything inherent in the design process itself, but because of business uncertainty. If the corporation is undergoing severe changes, then smaller projects tend to get swept under the rug and scrapped easily.
By preparing yourself mentally and thoroughly before a project gets under way, you will glide through the challenges it presents.
Friday, August 14, 2015
A typical Windows form has tons of small visual cues that you can leverage to provide the ultimate courtesy to your client. To start with, consider the lowly mouse-pointer icon. You can of course change it under different circumstances to signal various conditions. Always change the pointer to an hourglass when your process is running a thread that disables user input. Change it to a question mark when the user hovers over a label, and then provide help about the field when they click on that label.
Use the status area strip at the bottom of a form to show messages about the internal tasks you are performing. These should be detailed enough to provide helpful information should you need to debug a client's problem during operation.
A mouse usually has two buttons... take advantage of this by providing right-click context menus where appropriate.
When you expect data to be entered in a specific format, overlay a graphic example (mm/dd/yy).
Make the effort to disable buttons, checkboxes, menu items, and fields when they are logically unavailable. If a field on a form must be filled in before the submit button can be pressed, then that button should not be enabled until the required field has a value.
Always indicate which fields are required in a manner that is both clear and consistent, yet discreet.
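One way to keep that enable/disable logic clear -- and testable without the form itself -- is to separate it into a pure function wired to the form's change events. The field names below are hypothetical:

```python
def submit_enabled(fields, required=("customer_name", "due_date")):
    """True only when every required field holds a non-blank value.

    The field names are hypothetical. Call this from the form's change
    events and set the submit button's enabled state from the result.
    """
    return all(str(fields.get(name, "")).strip() for name in required)
```

The payoff is that the rule for "which fields are required" lives in exactly one place, so the button's state and your required-field indicators can never drift apart.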
Take advantage of the title bar: it can show both variable words and an icon, and these display when your application is minimized.
When a user hovers over a field, they may be confused about what to enter. You may pop up a short callout of information, or even a callout help-button that will display a sub form.
Finally, for any operation that requires extensive processing time, display a progress bar showing the estimated time to completion. Run the long task in a background thread so that the user interface can still be moved or minimized. And provide a button to cancel any long-running operation.
Correctly handling all the small visual cues requires quite a bit more effort during development, but doing so marks the difference between a hack and a professional developer.
Wednesday, July 22, 2015
In an earlier post I wrote about how to monitor performance, but how do you plan for great throughput from the start? In the modern world of distributed services and multicore computers you can improve performance considerably by careful use of threading. Before jumping in and firing off threads for everything, however, consider when they can actually be of most use.
First, firing off more threads than the client has cores doesn't really gain you any advantage for compute-bound work: no matter how many threads you start, the processor can only keep as many of them running at any given instant as it has cores. You won't gain much from multiple threads in processes heavily dependent on a SQL back end either: the database engine already multithreads its queries. But other situations can be good candidates for threading.
If you need to repeatedly call external services for example, or are waiting for data back from remote servers, then using multiple threads can be a lifesaver.
Earlier I wrote about what to stay alert for to keep your processes thread-safe; regardless, a general guideline is to tightly manage the number of threads that you allow to execute simultaneously.
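Both points -- threads for waiting on remote services, with a tight cap on how many run at once -- can be sketched with Python's thread pool. The remote service here is simulated with a sleep; the function and symbol names are invented for the example:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_quote(symbol):
    """Stand-in for a slow remote call; real code would do network I/O here."""
    time.sleep(0.1)                  # simulate remote latency
    return symbol, len(symbol)       # dummy payload

symbols = ["AAPL", "MSFT", "IBM", "ORCL", "SAP", "HPQ"]

# Cap the pool so we never flood the remote service with unlimited threads.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = dict(pool.map(fetch_quote, symbols))
```

Because each thread spends its time waiting rather than computing, six sequential calls collapse into roughly two rounds of three -- and the max_workers cap is the "tightly managed quantity" in one line.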
When you encounter an error in a thread's initiator, you need to remember to safely bail out all of the threaded children, something I also discussed in an earlier post. Threads can be tremendous performance enhancers if used carefully and artfully.
Saturday, June 13, 2015
In all my years of software development, I've never done anything the same way twice. Not because I didn't want to, but rather just because it was inappropriate. When you are starting a project you encounter not only a cascade of requirements and personalities, but also an onslaught of new technologies. You will therefore need to perform a balancing act incorporating a good bit of intuition.
One of the challenges of full-metal analysis is that three conflicting methodologies all need to be both performed and resolved. A sociological analysis considers the impact of the new design upon people and business processes. An object oriented analysis considers how to design abstract objects to adequately represent the business methods and data properties in a manner that will remain flexible. And a strategic analysis considers the appropriate strategic path of the company, path of the IT department, and ROI for the project.
Then beyond the analytical design: what languages, methodologies, and technologies are suitable to the task? What languages and technologies have industry momentum? And using your foresight, what language and methodology will you wish you had chosen four years from now? Earlier I discussed artful Vendor Selection in greater detail; you need to recognize that developing a strategy to evaluate hardware and software vendors is every bit as much a piece of system development as writing the code.
When an executive looks into your eyes to see what you will do for the company, you either telegraph your future or you don't. Many times the only way to successfully develop a software product is to link to yourself in the future and ask for guidance for what you should do in the present. Foresight is the most important trait of a Systems Analyst or Software Architect.
Friday, May 15, 2015
I suppose that no matter how much advice somebody gives you, you're not going to learn until the unfortunate happenstance hits you yourself. I'll share the story anyway... ignore it at your own peril. The advice is simple: save your source code in three places. Yes, three. Why?
A long time ago at a faraway company I had the pleasure of developing a state-of-the-art docketing system for a corporate legal department. I did all of my development off the C drive, and then after making any major change I would pull out my 5 1/4 inch disk labeled "Legal System Backup" and copy the source code over to it.
Well, the inevitable happened: I had a hard disk crash. No problem, right? At least I have a backup. But after swapping in a spanking-new hard drive -- holy O'Reilly: I couldn't read the floppy disk. Naturally, about a week later a clerk in Legal wanted a new feature added to the system. I could only shrug. I'm so sorry: we lost the source code.
Make three copies: save one on your disk, one on the network, and one in your off-site email box. Because there's no excuse for losing the source code.
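The rule is easy to automate. A minimal Python sketch, with placeholder destinations standing in for your disk, the network share, and wherever your off-site copy lives:

```python
import shutil
from pathlib import Path

def backup_source(source_dir, destinations):
    """Copy the source tree to every destination in the list.

    The destinations are placeholders: point them at your local disk,
    the network share, and your off-site location. Returns the copies made.
    """
    copies = []
    for dest in destinations:
        target = Path(dest) / Path(source_dir).name
        shutil.copytree(source_dir, target, dirs_exist_ok=True)
        copies.append(target)
    return copies
```

Schedule something like this to run nightly and the "inevitable" crash becomes a shrug of a different kind: restore and carry on.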
Most modern developers work in an environment with built-in source code control, such as TFS or CVS. Occasionally though I've known companies that view their SQL views and stored procs as something less than legitimate source code. Perhaps they feel a database backup is adequate coverage for the intellectual property.
Your source code (and SQL objects), however, provides an additional benefit to your employer besides being the cogs and gears that make everything work: its *history* is valuable for researching bugs. So make sure you not only keep three copies of everything that is currently running, but also keep all of the prior versions that ever ran.
Thursday, April 16, 2015
It's difficult to be a successful developer unless you keep prodigious notes. Notekeeping is an art all its own: it's really counterproductive to arrive at the point of code development and find yourself with a huge disorganized pile of Post-its holding the use cases.
While you are in the midst of designing software the useful information pours in across scenarios with a wide variety of formats. You may hold a meeting where everyone is talking. You may receive a PDF specification from a vendor. You may pick up the phone and chat with a colleague to answer some questions. You may shoot off an email that causes a cascade of responses. You may read something relevant on the internet at home. How can you possibly keep track of all this information in a sensible fashion?
Many of my developer friends like to keep spiral notebooks, but I’ve never been able to easily find what I’ve written previously. I prefer a mixed approach. I use a PIM (personal information manager) to gather and automatically categorize short sentence snippets of concise information. I keep folders organized on the network for documents. I also keep hardcopy in manila folders for documents that I think will have lasting importance. I keep a folder in Outlook for each gigantic-scale project. And I use Google Desktop to find my way amidst all of the detritus.
In meetings I take notes on a single loose leaf of paper, and afterward transfer them either into my PIM or into a more formal document to mail or to file on the network. When I'm not at work I also *always* carry a Kindle for any spur-of-the-moment revelations.
Notekeeping is complicated: experiment with different methods and software to find something that works efficiently and correctly for your own style.
Wednesday, March 11, 2015
In the many years that I've done software development I've noticed that systems written in-house tend toward two extremes. On one side you have useful software designed for a small number of people to do a very specific task, and on the other you have somewhat clumsy software designed for two-thirds of the company to support a wide variety of operations.
The smaller, task-specific software comes in tens or hundreds of flavors, yet each one only survives three or four years until it gets replaced by a new incarnation. The multipurpose clunky software has just one flavor but seems to live for fifteen years, usually well beyond its prime. How come there is no middle ground?
Well just like in animation, sociology creates an "uncanny valley" of in-house software that doesn't comfortably exist. And this is due very much to the nature of people and the work that they do. In most jobs people tend to leverage certain knowledge and skills to support the company with specific tasks. A capable software designer can create very nifty systems that can maintain incredible complexity with the understanding that a moderately competent user will be trained on its use.
Such systems can be local successes even though they don't translate well to other users. And they don't survive for long because they don't incorporate the dynamics of both system-level and human-level changes. People change jobs and don't communicate all of the knowledge. Small systems fall into disuse because they are people and knowledge specific. They are still good for what they are good at: improving local productivity.
Large systems live past their prime for similar sociological reasons. The rate of change in people-skills overwhelms the complexity of linking processes. Linked processes become unlinked, and the training materials don't keep up with the changes in the software.
Large systems still serve a useful purpose however, to the extent that they organize people to work together. Where does the personality of a corporation really exist? In between the modes of small and large software.
Wednesday, February 11, 2015
In an earlier post I discussed overloading a method to handle changing it from a computation based on one measurement to one based on two. Another way to handle this is with inheritance: you create a new object, say "modernCar," that inherits from myCar and then implements the overload in its method. In this way developers using the old routine use myCar.getCurrentMiles, and developers using the new routine use modernCar.getCurrentMiles.
Using this approach makes more sense than a simple overload when the new way of invoking the method clearly applies to only a selected subset of the base class.
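A sketch of the inheritance approach, using the class and method names from the post. The odometer arithmetic itself is my own invention for illustration, with dates simplified to days-ago numbers:

```python
class myCar:
    """Original class: one odometer reading plus an assumed daily rate."""
    MILES_PER_DAY = 30                       # assumed average usage

    def getCurrentMiles(self, miles, days_ago):
        # Old one-measurement routine: extrapolate at the assumed rate.
        return miles + self.MILES_PER_DAY * days_ago

class modernCar(myCar):
    """Inherits myCar and adds the two-measurement overload."""

    def getCurrentMiles(self, miles1, days_ago1, miles2=None, days_ago2=None):
        if miles2 is None:
            return super().getCurrentMiles(miles1, days_ago1)  # old callers still work
        # New routine: derive the actual daily rate from two readings,
        # where reading 1 is the more recent of the two.
        rate = (miles1 - miles2) / (days_ago2 - days_ago1)
        return miles1 + rate * days_ago1
```

Existing code keeps calling myCar untouched, while the new behavior lives only on the subset of cars where it applies.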
Yet another approach for handling change is to "propertize" the method call. Basically this means changing all of the method's arguments into class properties instead. To be clear and consistent in this approach, the method should place its result in a public property as well, so the sequence for using the new method becomes: set miles1, date1, miles2, and date2; invoke the method; then get currentMiles.
This approach has the distinct disadvantage that it causes a hard break with existing code. It also requires clearer technical documentation explaining its use. On the other hand it has two distinct advantages: future properties (both set and get) can be added later without impacting the code base, and it encourages future developers to actually review the target code before invoking the method.
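A sketch of a propertized version, so the calling sequence becomes exactly the set-invoke-get pattern described above. The odometer arithmetic is my own invention, and the dates are simplified to days-ago numbers:

```python
class propertizedCar:
    """'Propertized' method call: arguments become properties, and so does the result."""

    def __init__(self):
        self.miles1 = None        # more recent odometer reading
        self.date1 = None         # days ago of reading 1 (simplified)
        self.miles2 = None        # older reading
        self.date2 = None         # days ago of reading 2
        self.currentMiles = None  # result lands here

    def computeCurrentMiles(self):
        # New inputs can be added as properties later without breaking callers.
        rate = (self.miles1 - self.miles2) / (self.date2 - self.date1)
        self.currentMiles = self.miles1 + rate * self.date1

# Usage follows the set / invoke / get sequence:
car = propertizedCar()
car.miles1, car.date1 = 10300, 10
car.miles2, car.date2 = 10000, 20
car.computeCurrentMiles()
```

The hard break is visible right away: no old call site compiles against this shape, which forces exactly the code review mentioned above.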
So far then we've seen three approaches for managing change: overload, inheritance, and "propertizing". In a later post I'll discuss your fourth option, deprecation.
Monday, January 12, 2015
Somewhere in the deep guts of a project you enjoy the "pleasure of unit testing." Typically the dead center of the project plan looks like:
Approval of interface design
Various programming tasks
Software corrections from unit testing
System and stress testing
The nice thing about unit testing is that it gets buried so deep in the guts of a project that managers don't pay much attention to it. Despite this, however, unit testing ultimately determines, unequivocally, the quality of your product.
Just because no one is looking doesn't mean that you should slack off on the unit tests. If your software is modularized, which on a larger project is most probably the case, then you can unit test individual modules as they become available.
Earlier I discussed Artistic Test Harnesses, and how they not only allow for rapidly repeatable unit testing but also provide a failsafe for workarounds during implementation. Be sure to read that previous post to learn how to construct these useful utilities.
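I can't reproduce that harness here, but even a bare unit test pays for itself. A minimal sketch using Python's built-in unittest, with a hypothetical docket-number parser standing in as the module under test:

```python
import unittest

def parse_docket_number(raw):
    """Module under test (hypothetical): normalize a docket number like ' 15-0042 '."""
    year, seq = raw.strip().split("-")
    return int(year), int(seq)

class TestParseDocketNumber(unittest.TestCase):
    """Repeatable unit tests: rerun the whole class after every change."""

    def test_strips_whitespace(self):
        self.assertEqual(parse_docket_number(" 15-0042 "), (15, 42))

    def test_boundary_sequence(self):
        self.assertEqual(parse_docket_number("15-0001"), (15, 1))
```

Run the class with `python -m unittest` as each module becomes available; the point is that every fix from unit testing gets a test that keeps it fixed.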
Recognize that Quality always involves work: there are no shortcuts to achieve it. It turns out that it is mostly the hidden, unmanaged, personal part of your work that primarily determines your long-term success.