
A strategy for developing programs

Here is what I preach, use, and have found to be successful in small to medium-sized projects.

Create the framework executables at the very beginning. The major initial design effort is devoted to selecting the appropriate framework. Think of the software as skeleton and flesh with the skeleton being recursively extensible and the flesh padding out the skeleton.
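To make the skeleton-and-flesh idea concrete, here is a minimal sketch of what a day-one framework executable might look like in C. The phase names and stub bodies are hypothetical placeholders; the point is that the program builds and runs end to end immediately, and that adding flesh later means adding a module and a table entry rather than restructuring the skeleton.

    /* skeleton.c - a minimal sketch of a framework executable.
     * The phase names and stub bodies are hypothetical; the point is that
     * the program builds and runs end to end from day one, with most of
     * the "flesh" still a no-op.
     */
    #include <stdio.h>

    typedef int (*phase_fn)(void);    /* each phase returns 0 on success */

    /* Stubbed flesh: each of these is filled in later, in modular steps. */
    static int read_input(void)   { fprintf(stderr, "read_input: stub\n");   return 0; }
    static int process_data(void) { fprintf(stderr, "process_data: stub\n"); return 0; }
    static int write_output(void) { fprintf(stderr, "write_output: stub\n"); return 0; }

    /* The skeleton: a fixed driving loop over an extensible table of phases.
     * Adding functionality later means adding an entry here plus its module. */
    static const struct { const char *name; phase_fn fn; } phases[] = {
        { "read input",   read_input   },
        { "process data", process_data },
        { "write output", write_output },
    };

    int main(void)
    {
        size_t i;
        for (i = 0; i < sizeof phases / sizeof phases[0]; i++) {
            if (phases[i].fn() != 0) {
                fprintf(stderr, "phase failed: %s\n", phases[i].name);
                return 1;
            }
        }
        return 0;
    }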

At all times there is a reference set of working software with a corresponding set of validation test suites.

Change (i.e., the implementation of development work) is always done modularly, under both change control and version control. Revisions to validation testing are done concurrently as part of the change control process.

The essence of the approach is to have functional software at all points in the development process and to evolve it toward the desired functionality. Change control is essential because the objective is to treat development as a sequence of transformations of the software state, with each transformation being a well-defined alteration in the definition of “working software”.
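As an illustration of what “a corresponding set of validation test suites” can mean in practice, here is a minimal self-checking driver in C. The checks and the add function are hypothetical placeholders; in real use each change to the software is accompanied by a change to drivers like this one, so that “working software” always has an executable definition.

    /* check.c - an illustrative, minimal validation driver. */
    #include <stdio.h>

    static int failures = 0;

    static void check(int ok, const char *what)
    {
        printf("%s: %s\n", ok ? "pass" : "FAIL", what);
        if (!ok) failures++;
    }

    /* Hypothetical function under test; in practice the functions being
     * checked live in the reference software's own modules. */
    static int add(int a, int b) { return a + b; }

    int main(void)
    {
        check(add(2, 2) == 4,  "add handles small positives");
        check(add(-1, 1) == 0, "add handles mixed signs");
        return failures ? 1 : 0;
    }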

Here are some sample project sizes and types:
ABM radar simulation — about 40 people in two development groups separated by 12,000 miles, FORTRAN, CDC update.
Radar Data Analysis package, ~50,000 lines of PL/I, VM/CMS, two people.
Database and Version Control Repository, ~80,000 lines of C, two people.
Real-time seismic analysis for geomarine data, ~10,000 lines of assembler, one person.
Numerous one-person projects, 2-10K lines, sundry languages.
Tools — editors, compilers, code analysis tools, version control software. Nothing special beyond that.
Environments — industrial, scientific. OS’s, varied. Sophistication of tool set, whatever is available.
Machines — PCs, workstations, minis, and mainframes.

This is essentially a small-team/one-person approach. I allow as how it might not be appropriate for multi-million line monsters. My experience is that it is highly productive for small to medium-sized projects. The schema breaks down into phases:

1) High level system design — essentially a decision about the global architecture of the software being built.

2) Implementation of the high level design in an equivalent skeleton. This implementation is heavily stubbed with entire modular sections being no-ops.

3) Implementation of essential services to make the software “work”, i.e. to make the designed I/O work. This is a preliminary implementation which is designed to be thrown away, since key services do not yet exist.

4) Implementation of simple, throwaway equivalents of key bottom layer services. [Unless, of course, one can reuse components from other software.] (A sketch of what such a throwaway stand-in might look like appears after this list.)

5) Implementation of service subsystems in order of need.

6) At this point the skeleton and the service structure are in place, along with an initial implementation of part of the desired functionality. Flesh out the rest of the desired functionality. [The design allows for horizontal extension in modular stages.]

7) Iterative replacement of throwaway components.

8) Iterative optimization.
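Here is the sketch referred to in step 4: one way a throwaway equivalent of a bottom layer service might look in C. The store interface and both file names are hypothetical. The interface is fixed in a header; the first implementation behind it is a crude in-memory version written to be discarded, so the layers above can be built and validated now, and the real implementation can be swapped in later (step 7) without touching any caller.

    /* store.h - the fixed interface to a hypothetical bottom layer service. */
    #ifndef STORE_H
    #define STORE_H
    int store_put(const char *key, const char *value);
    const char *store_get(const char *key);         /* NULL if not present */
    #endif

    /* store_throwaway.c - a crude in-memory stand-in, written to be discarded. */
    #include <string.h>
    #include "store.h"

    #define MAX_ITEMS 64
    static char keys[MAX_ITEMS][32];
    static char values[MAX_ITEMS][128];
    static int  count;

    int store_put(const char *key, const char *value)
    {
        int i;
        for (i = 0; i < count; i++)
            if (strcmp(keys[i], key) == 0)
                break;                               /* existing key: overwrite */
        if (i == MAX_ITEMS)
            return -1;                               /* table full: good enough for now */
        strncpy(keys[i], key, sizeof keys[i] - 1);
        strncpy(values[i], value, sizeof values[i] - 1);
        if (i == count)
            count++;
        return 0;
    }

    const char *store_get(const char *key)
    {
        int i;
        for (i = 0; i < count; i++)
            if (strcmp(keys[i], key) == 0)
                return values[i];
        return NULL;
    }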

Important points not mentioned in the above schema: (1) At all points the code should be internally instrumented [tool-independent coding]; a sketch follows. (2) From the point of creation of the initial skeleton onwards, all changes are made modularly, and the testing process evolves with the software.
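As a sketch of what tool-independent internal instrumentation can look like, here is one possible form in C. The macro name and the TRACE_ON switch are hypothetical; the point is that the instrumentation lives in the source itself, so it behaves the same under any compiler, OS, or debugger, and can be compiled out entirely.

    /* trace.h - hypothetical tool-independent instrumentation. */
    #ifndef TRACE_H
    #define TRACE_H
    #include <stdio.h>

    #ifdef TRACE_ON
    #define TRACE(msg) \
        fprintf(stderr, "TRACE %s:%d %s\n", __FILE__, __LINE__, (msg))
    #else
    #define TRACE(msg) ((void)0)
    #endif

    #endif

    /* Usage in any module:
     *     TRACE("entering parse_record");
     * Build with -DTRACE_ON to get the trace; without it the calls
     * vanish at compile time.
     */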

An important caveat is that this process really doesn’t work unless you follow good S.E. practice, i.e. you *must* practice strict modular decomposition so that pieces can be added independently. I do not regard this as a fault.
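For concreteness, here is one small example of what strict modular decomposition can mean in C: a module whose callers see only an opaque handle and a few functions, so the piece can be added, rewritten, or replaced without touching anything else. The queue module shown is hypothetical.

    /* queue.h - a hypothetical module boundary; callers see only this. */
    #ifndef QUEUE_H
    #define QUEUE_H
    typedef struct queue queue;              /* opaque: defined only in queue.c */
    queue *queue_create(void);
    int    queue_push(queue *q, int item);   /* 0 on success, -1 if full  */
    int    queue_pop(queue *q, int *item);   /* 0 on success, -1 if empty */
    void   queue_destroy(queue *q);
    #endif

    /* queue.c - one possible implementation; it can be replaced wholesale
     * without touching any caller. */
    #include <stdlib.h>
    #include "queue.h"

    #define QUEUE_SIZE 128

    struct queue { int items[QUEUE_SIZE]; int head; int count; };

    queue *queue_create(void) { return calloc(1, sizeof(queue)); }

    int queue_push(queue *q, int item)
    {
        if (q->count == QUEUE_SIZE) return -1;
        q->items[(q->head + q->count) % QUEUE_SIZE] = item;
        q->count++;
        return 0;
    }

    int queue_pop(queue *q, int *item)
    {
        if (q->count == 0) return -1;
        *item = q->items[q->head];
        q->head = (q->head + 1) % QUEUE_SIZE;
        q->count--;
        return 0;
    }

    void queue_destroy(queue *q) { free(q); }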

