I’ve been a computer user on the job all my working life, but I actually earned my living as a professional computer programmer from the late 1970s to the early 1990s. I worked as a scientific/engineering programmer writing Fortran applications on IBM mainframes and HP and VAX minicomputers, mainly in support of image processing, remote sensing and computer graphics applications for the earth resources and defense industries. I know this does not make me an expert; a lot of water has flowed under the bridge since those days, but I do believe it gives me some insight into the field, and perhaps that may be of some value or interest to you. This is pure history, perhaps even archaeology. I hope it helps.
The big deal in the 1980s was “Structured Programming”. It was a philosophy of how to write computer programs that was supposed to make code easier to plan and write, easier to understand, easier to document and therefore easier to maintain. This new doctrine was pretty much being crammed down our throats in those days: there were seminars and must-read books, classes and lectures, and we were expected to incorporate its precepts into our own work.
I initially resisted this, thinking it was yet another management fad whose main effect was to insert one more layer of administrative bullshit between me and getting my job done. But after a while I came to see that structured coding was a valuable strategy, and it made sense, and I eventually started incorporating many of its ideas into my own programming. It did make my life easier, and it made for a better product. Now, I understand programming may be very different today, and I don’t know much about how it’s done now, so keep in mind the following remarks are mainly historical. Some of what I say may be already familiar to you, some of it may be new.
I had learned Fortran programming in college in the late 1960s, and although I wrote quite a bit of code, both for my academic work and to support my professors’ research, it can only be described (in the jargon of those days) as “spaghetti code”. I would simply start writing, creating new variables, assigning memory, constructing algorithms and accessing I/O devices as I went along. It was a stream-of-consciousness technique, and a major aspect of the work was to make the use of machine resources as efficient as possible. We all tried to make our programs as short as we could, and we very often used all sorts of clever tricks to save memory, speed things up, minimize I/O operations and disc accesses, etc. Sometimes the tricks were so clever nobody else could figure out what we were doing! Even looking through old code I had written only a few months earlier, sometimes I would ask myself, “Why the hell did I do that?” or “What the fuck does that variable DO?” Very little effort was made to do things in a way that would seem only reasonable today, such as giving meaningful names to memory locations so I, or someone else, could actually figure out what result was being stored or addressed there. This became increasingly important as the technology progressed: memory and processing power became steadily cheaper, but programmer time became steadily more expensive. It made a lot more sense to adapt or modify old code, or to access previously written utility routines from program libraries, than to rewrite everything from scratch. Re-using code became essential, whether by modifying old code slightly or by collecting frequently used routines in utility libraries that others could access. Increasingly, most of our marketable product was not actually a program to solve a problem, but a set of canned, thoroughly tested and maintained routines that our customers could call from THEIR programs. Rather than solve technical problems directly, my job slowly evolved into developing, debugging and enhancing, that is, maintaining, program libraries of pre-written statistical and mathematical routines that other programmers could call on to solve THEIR problems.
The task of managing these libraries and systematizing and organizing them required standards, documentation, and adherence to the idea that everything you did would probably have to be modified or improved by someone in the future, so it had to be easy to read, even by someone not familiar with the physics or math of the problem. It was the bureaucratization of technology.
This is where the Structured Code concept came in. Some of the techniques advanced by this philosophy were pretty obvious (by today’s standards, anyway). First, use lots of comment statements to help some future programmer follow what it was you were trying to do. (For example, “//*Comment- Here is where we calculate the intermediate angle*//”.) Second, use meaningful variable names (“Uncorrected_Angle_Value”, not “URAD”). Third, make the code neat: line up everything into blocks, like paragraphs, where related tasks are grouped together and separated from other groups by spaces. It makes the code easier to read and to organize logically. Use typographical tricks like margins and special symbols, indentation, spacing, capital and lower case, and so on to make the code easier to read. Be lavish with the “//*Comment” statement, to help break up your code into isolated, meaningful blocks. Programmer time is much more expensive than the few bytes of memory you’ll save by eliminating a non-executable line of keystrokes.
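To make those cosmetic habits concrete, here is a tiny sketch of the structured style, written in later free-form Fortran rather than the fixed-column FORTRAN we actually punched in, and with invented names and values. The old habit would have been a bare line like URAD = ATAN2(DY,DX)*57.29578 and nothing more:

    program angle_demo
       implicit none
       real, parameter :: degrees_per_radian = 57.29578
       real :: delta_x, delta_y, uncorrected_angle_value

       ! --- Set up a test displacement ---------------------------------
       delta_x = 3.0
       delta_y = 4.0

       ! --- Here is where we calculate the intermediate angle ----------
       ! atan2 handles all four quadrants; then convert radians to degrees.
       uncorrected_angle_value = atan2(delta_y, delta_x) * degrees_per_radian

       print *, 'Uncorrected angle (degrees):', uncorrected_angle_value
    end program angle_demo

Nothing here runs one cycle faster than the terse version; every extra keystroke is for the next human reader.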
But these are all cosmetic tricks. Structured programming involved more fundamental changes in how code was put together. Sometimes it was easier to brute-force a calculation by breaking it into discrete steps, rather than doing it all at once in a monster equation. If you’ve made a mistake, or if someone has to add a step in the future, it will be easier to spot. Don’t use every single programmer trick you can think of; remember, the point of code is to be legible and understandable by a human, not efficient and fast for the machine.
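For instance, here is a made-up example (not from any real job) showing the same arithmetic both ways; the “monster” one-liner is left in as a comment above its brute-forced replacement:

    program quad_demo
       implicit none
       real :: a, b, c
       real :: discriminant, root1, root2

       a = 1.0
       b = -3.0
       c = 2.0

       ! The clever one-liner version:
       ! root1 = (-b + sqrt(b*b - 4.0*a*c)) / (2.0*a)

       ! The structured version: discrete steps, so a mistake, or a
       ! future change, is easy to spot.
       discriminant = b*b - 4.0*a*c
       root1 = (-b + sqrt(discriminant)) / (2.0*a)
       root2 = (-b - sqrt(discriminant)) / (2.0*a)

       print *, 'Roots:', root1, root2
    end program quad_demo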
The code should be modular; that is, every section accomplishes one task. Every big task is broken down into smaller tasks. And so on, and so on…. If a subtask in one module is identical to a subtask in another module, don’t type in the same lines of code twice. Write it up as a separate subroutine and have both modules call it. This way you only have to write it once, and if necessary debug it or modify it once. There is no point in solving a linear equation in every routine in your program when each routine can call the same linear equation solver.
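A toy illustration of the idea, with invented routine names; two unrelated parts of the program share one solver instead of each typing in its own:

    program shared_demo
       implicit none
       real :: factor
       ! Both 'modules' below need to solve a*x = b; each calls the
       ! single shared routine instead of repeating the arithmetic.
       call calibrate_sensor(factor)
       call correct_course(factor)
    contains
       subroutine solve_linear(a, b, x)
          real, intent(in)  :: a, b
          real, intent(out) :: x
          x = b / a               ! solve a*x = b (assumes a is nonzero)
       end subroutine solve_linear

       subroutine calibrate_sensor(x)
          real, intent(out) :: x
          call solve_linear(2.0, 10.0, x)
          print *, 'Calibration factor:', x
       end subroutine calibrate_sensor

       subroutine correct_course(x)
          real, intent(out) :: x
          call solve_linear(4.0, 2.0, x)
          print *, 'Course correction:', x
       end subroutine correct_course
    end program shared_demo

If solve_linear ever turns out to have a bug, there is exactly one place to fix it.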
Every single routine (module, subroutine, section) in your code should have only one entrance and one exit. You can nest modules all you want, though, as long as you clearly mark where each sub-module or call to a subroutine is (perhaps with //*Comments or some other typographical delimiter or indentation). One task/one module, and one entrance/one exit, makes it easy to locate a problem, or to add additional capabilities if required down the line. Sprinkle your code at key points with tests to catch obvious errors, so the user can be notified if one occurs, such as “Error, attempted division by zero in Stat Routine” or “Input number of degrees exceeds 360 in Course Correction Routine”. This allows you to track down where errors occur and what they can be traced to. When developing your code, set flags and intermediate values that can be printed out to help you debug. When the code is finally running, don’t erase these checks; just “comment them out” so you can reactivate an error trap later and use it to find some unanticipated problem.
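Here is a sketch of the kind of trap and parked debug print I mean, with invented names, the error test at the door, and a single fall-through exit:

    program trap_demo
       implicit none
       real    :: corrected
       integer :: status
       call course_correction(400.0, corrected, status)
       if (status /= 0) print *, 'Caller sees the flag and shuts down cleanly'
    contains
       subroutine course_correction(degrees, corrected, status)
          real, intent(in)     :: degrees
          real, intent(out)    :: corrected
          integer, intent(out) :: status   ! 0 = ok, 1 = bad input

          status = 0
          corrected = 0.0

          ! --- Error trap: catch obviously bad input ------------------
          if (degrees < 0.0 .or. degrees > 360.0) then
             print *, 'Input degrees outside 0-360 in Course Correction Routine'
             status = 1
          else
             corrected = modulo(degrees + 180.0, 360.0)
             ! --- Debug print, commented out now that the code runs ---
             ! print *, 'DEBUG: in =', degrees, ' out =', corrected
          end if
          ! One entrance, one exit: every path falls through to here.
       end subroutine course_correction
    end program trap_demo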
When the software contains a list of similar capabilities or actions, assign or define them at the beginning of the module where they will be executed. Later on, if one needs to be added or deleted, you’ll know exactly where to go, and you’ll only have one edit to make. Always give yourself more space in a list than you need, in case you have to add something to it later. Define all your variables and variable lists (arrays) at the top of the subroutine so they can be easily located and edited.
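A small sketch of the habit, with made-up values; the list, its size, and its spare room all live in one block at the top:

    program list_demo
       implicit none
       ! --- All lists and sizes defined up front, in one place --------
       ! Oversize the table: only four codes are in use today, but there
       ! is room to grow without disturbing anything below.
       integer, parameter :: max_codes = 10
       integer, parameter :: num_codes = 4
       integer :: error_codes(max_codes)
       integer :: i

       ! One edit here adds or deletes a code; nothing else has to move.
       error_codes(1:num_codes) = (/ 100, 200, 301, 404 /)

       do i = 1, num_codes
          print *, 'Known error code:', error_codes(i)
       end do
    end program list_demo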
Even the simplest and most primitive computer languages (like Fortran) give you plenty of control statements to direct the flow of logic through the computation. Don’t feel obligated to use them all just to show how clever you are. For example, avoid the GO TO statement. The GO TO is often used to jump abruptly to another point in a process, which is convenient, but makes it hard to figure out just where you were when you jumped. If you have to interrupt a logical process, or jump to some other section of the code, don’t break out abruptly. Go to the end of that routine and set your flags before you jump out. Remember, each module should have one entrance and one exit. Otherwise, you can’t tell where the program failed, and you can’t tell where to add another option if future specs require it.
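Here is a sketch of the flag-and-fall-through style that replaces the abrupt GO TO, again with invented data:

    program no_goto_demo
       implicit none
       integer, parameter :: n = 6
       real    :: readings(n)
       integer :: i, bad_index
       logical :: found_bad

       readings = (/ 1.0, 2.0, -1.0, 4.0, 5.0, 6.0 /)

       ! Old habit: on a bad reading, GO TO a cleanup label somewhere
       ! far away.  Structured habit: set a flag, let the loop run to
       ! its end, and fall through to the routine's single exit.
       found_bad = .false.
       bad_index = 0
       do i = 1, n
          if (readings(i) < 0.0 .and. .not. found_bad) then
             found_bad = .true.
             bad_index = i
          end if
       end do

       ! The one exit: every path funnels through here, flags set.
       if (found_bad) then
          print *, 'Bad reading at position', bad_index
       else
          print *, 'All readings valid'
       end if
    end program no_goto_demo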
There are only a handful of logic statements that should be used in structured code; a short sketch of each follows the list below.
The CASE statement – a variant of the GO TO statement where you jump to some other spot in the process depending on what some key flag or value is. Jumping around haphazardly in the code makes it hard to follow, but putting all your GO TOs in one place with a CASE statement makes it easy to figure out what is going on.
The DO statement – used to repeat a process over and over until some condition is satisfied.
DO statements can be subdivided into two subtypes: DO…WHILE and DO…UNTIL (repeat this operation WHILE this condition exists, or UNTIL this condition occurs).
The IF statement – used to check a value and to do, or not do, something depending on what that value is.
IF statements can be subdivided into two subtypes: IF…THEN (if X is true, do Y) and IF…THEN ELSE (if X is true, do Y; otherwise, do Z).
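In the Fortran I wrote, some of these had to be faked with IF tests and carefully corralled GO TOs, or supplied by a preprocessor; later Fortran offers them directly. Here is a sketch of all four in that later syntax, with invented values. Note that Fortran never acquired a literal DO…UNTIL, so a bottom-tested loop stands in for it:

    program control_demo
       implicit none
       integer :: command, counter
       real    :: value

       ! CASE: all the jumps collected in one visible place.
       command = 2
       select case (command)
       case (1)
          print *, 'Load'
       case (2)
          print *, 'Process'
       case default
          print *, 'Unknown command'
       end select

       ! DO ... WHILE: repeat while the condition holds.
       value = 100.0
       do while (value > 1.0)
          value = value / 2.0
       end do

       ! DO ... UNTIL: test at the bottom, so the body runs at least once.
       counter = 0
       do
          counter = counter + 1
          if (counter >= 5) exit     ! i.e., UNTIL counter reaches 5
       end do

       ! IF ... THEN ELSE
       if (value < 1.0) then
          print *, 'Halving finished; loop counter reached', counter
       else
          print *, 'Halving never finished'
       end if
    end program control_demo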
The central activity structured coding encourages is modularization and nesting. Every separate action should be a separate module. That activity should be finished before the next is commenced. If the action is complex and involved, it should be broken down into smaller sub-modules. My practice was to try to keep things simple by breaking down all my software into no more than five levels of subroutines. Remember, each subroutine may consist of many modules, each consisting of many submodules, and it is easy to get lost if the branches go too many levels deep. But a subroutine involves an actual, formal call from the calling routine to the called routine; it isn’t just a convenient paragraph break, it’s a different piece of software. Subroutines were often called from the system subroutine library, so where to make the formal distinction between module and subroutine was often an administrative decision, not a technical one. And of course, subroutines can call other subroutines, etc., etc. So this was the architecture I tried for (it wasn’t always possible!). There is no technical reason why I had to follow these guidelines; it was just easier for me to keep straight in my own head, and I felt other programmers in the future would quickly see what I was doing and get a better understanding of my code if they ever had to mess with it.
My main program was the Driver. Its task was to define the major variables and arrays and then call a series of subroutines that broke the problem down into its major parts, the major logical divisions of the task. The Driver did nothing but call routines, and most of the routines it called were only one level down. The Driver spot-checked their results to see if they were reasonable, and shut things down if they weren’t, declaring an error condition that would hopefully be useful to the user and the debugger.
One level down from the Driver were the main program components, where each major subtask was processed by its own routine. These main subroutines would call others one further level down. I tried to stop there: the Driver calls 1st-level routines, which call 2nd-level routines, and sometimes, in really complex programs, I might add a 3rd level below that. I tried, whenever possible, for each level of routines to be called only from the level above it, and to make calls only to the level below. In addition, I usually maintained an additional level, where I lumped all the routines that might be called from any point or level in the hierarchy. This fifth level I called “the utilities”, and some of them were included from our shared system library. These were the routines that performed generic tasks, like doing sorts, solving systems of linear equations, preparing a histogram, doing statistics, and so on. Many of these were written by others, had been in use for a long time and were thoroughly tested. They were essentially bulletproof. If anything went wrong, it wasn’t there; it was in my code.
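Reduced to a cartoon with invented names and trivial bodies, the architecture looked something like this: the Driver on top, first-level routines below it, a second-level routine below those, and a “utility” at the bottom that anyone may call:

    program driver
       implicit none
       real    :: raw(100), clean(100), results(10)
       integer :: status

       ! The Driver does nothing but call first-level routines and
       ! spot-check each status flag before going on.
       call read_input(raw, status)
       if (status == 0) call reduce_data(raw, clean, status)
       if (status == 0) call report_results(clean, results, status)
       if (status /= 0) print *, 'Run aborted, status =', status
    contains
       subroutine read_input(raw, status)              ! 1st level
          real, intent(out)    :: raw(:)
          integer, intent(out) :: status
          raw = 1.0                                    ! stand-in for real I/O
          status = 0
       end subroutine read_input

       subroutine reduce_data(raw, clean, status)      ! 1st level
          real, intent(in)     :: raw(:)
          real, intent(out)    :: clean(:)
          integer, intent(out) :: status
          call remove_outliers(raw, clean)             ! one level down
          status = 0
       end subroutine reduce_data

       subroutine remove_outliers(raw, clean)          ! 2nd level
          real, intent(in)  :: raw(:)
          real, intent(out) :: clean(:)
          clean = raw                                  ! placeholder reduction
          call sort_values(clean)                      ! shared utility
       end subroutine remove_outliers

       subroutine report_results(clean, results, status)  ! 1st level
          real, intent(in)     :: clean(:)
          real, intent(out)    :: results(:)
          integer, intent(out) :: status
          results = sum(clean) / size(clean)           ! crude summary
          status = 0
       end subroutine report_results

       subroutine sort_values(x)                       ! utility level
          real, intent(inout) :: x(:)
          ! in real life a long-tested library sort lived here
       end subroutine sort_values
    end program driver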
All of these divisions and subdivisions, as well as where to draw the line between an informal module (a logical part of a routine) and a formal subroutine (invoked with an actual call), are purely arbitrary. The computer doesn’t care, but the principles of structured code dictate that the code should be understandable by human beings if it is going to be able to grow and evolve in the future. That is the major justification for this type of philosophy and the resulting architecture.
That’s all I can remember, and I hope it all makes sense, and that you’ll find it useful. How many of these practices are now obsolete, and how many are still commonly implemented, I do not know. I’ve been away from coding for a long time now. I would be very curious to find out. I do know there was a lot more to Program Structure than I’ve outlined here. The manuals were very thick. And very boring.
Once I made the leap from batch processing on mainframes to interactive programs on minis, none of these principles changed. This gives me some confidence that these guidelines have some logical validity and are not just an artifact of the technology in use at the time.