RL and Podrock's discussion on how to unpack Digital Terrain Data brought to mind how it used to be done, back in the olden days of the first LANDSAT satellites and dedicated Image Processing/Remote Sensing systems, in the late 1970s and 1980s. In that remote era, the imagery data was delivered on 9-track tapes and we were expected to write our own software to unpack it.
Logically, the data was organized as lines (stacked up from top to bottom of the image in the Y-direction) and samples (left to right in X), starting at the upper left corner. Each line was a physical data record on the tape, and each sample was a word in that record. So for example, the (200, 100) pixel was the 200th word in the 100th record of the file. Each data record was physically separate, so you didn't actually have to know the record length. Each READ or WRITE Fortran I/O instruction worked one record at a time. The idea was to put your I/O statements in a DO-loop and drop the data into an array in memory so you could process it faster.
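The line-and-sample layout can be sketched in modern terms. This is a rough Python equivalent of that Fortran DO-loop, not how we actually did it; the sample count, the 16-bit big-endian word format, and the data values are all assumptions for illustration.

```python
import struct

# Hypothetical parameters -- in practice you learned these from the
# printed documentation (or by trial and error).
SAMPLES_PER_LINE = 4
WORD_FORMAT = ">h"  # big-endian 16-bit integer, one word per sample

def unpack_image(raw, samples_per_line):
    """Unpack a run of fixed-length records into a list of lines.

    Each "record" is one image line; each word in the record is one
    sample.  This mirrors the DO-loop of READ statements, minus the
    physical record gaps a 9-track tape gave you for free.
    """
    word_size = struct.calcsize(WORD_FORMAT)
    record_size = samples_per_line * word_size
    lines = []
    for offset in range(0, len(raw), record_size):
        record = raw[offset:offset + record_size]
        line = [struct.unpack_from(WORD_FORMAT, record, i * word_size)[0]
                for i in range(samples_per_line)]
        lines.append(line)
    return lines

# Two 4-sample lines of made-up elevation data.
raw = struct.pack(">8h", 10, 11, 12, 13, 20, 21, 22, 23)
image = unpack_image(raw, SAMPLES_PER_LINE)
# image[line][sample] now addresses pixels the same way (sample, line) did.
```

The indexing convention matches the text: the pixel at sample 200, line 100 would be `image[99][199]` with zero-based indices.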
The word length had to be figured out by the programmer who wrote the program that read the tape, unless it was written down in the printed documentation that arrived with the reel. Sometimes, if you were lucky, the first line of the image contained character bytes that could be interpreted to tell the data format as well as other metadata. Word length could be bit, byte (8 bits), integer (16 bits), long integer (32 bits), real (32-bit floating point, with sign, exponent, and mantissa fields), or double precision (64 bits). Since in most images, large stretches of adjacent terrain pixels vary little in brightness, if you could dump a few lines in hex you could generally make a good guess as to what the format was.
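The hex-dump trick is easy to reproduce. Here is a minimal sketch of that kind of dump; the formatting details and the sample data are my own, not anything from the period software.

```python
def hex_dump(raw, width=16):
    """Format raw bytes the way you'd eyeball a tape dump to guess the
    word format: runs of 00 high bytes suggest 16-bit integers holding
    small values, and plausible ASCII in the first record suggests a
    character header carrying metadata."""
    lines = []
    for offset in range(0, len(raw), width):
        chunk = raw[offset:offset + width]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{offset:06}  {hex_part:<{width * 3}} {text}")
    return lines

# Four 16-bit samples followed by a fragment of ASCII header text.
dump = hex_dump(bytes([0, 10, 0, 11, 0, 12, 0, 13]) + b"HDR1", width=12)
for line in dump:
    print(line)
```

The every-other-byte zeros in the left column are the giveaway for 16-bit integer data with small values, exactly the pattern slowly varying terrain produces.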
Digital terrain data was generally distributed as long integer or real data, with each pixel value representing a ground elevation in feet or meters. You needed the longer words to handle the elevations realistically found in nature. Most satellite imagery was byte (0-255) data since it simply represented a gain level on an infrared or visual photometer. SAR data was usually real or double precision.
If the data was multispectral (multiple colors), there were a variety of ways it could be formatted. Most common was band-by-band, where each file contained one color, and the data was registered geographically by pixel location. In Thematic Mapper, this meant 7 files per scene: R, G, B, and four infrared bands. There was usually an additional band of panchromatic (B&W) data from a television camera mounted on the bird, but it never seemed to carry much useful information and we rarely used it.
Other formats included line-by-line (where each successive record held the next band for the same line), or pixel-by-pixel (where each successive word in a record came from a different sensor).
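The two interleaved layouts above amount to simple regrouping problems. A minimal sketch of unscrambling each, with made-up two-band and three-band data standing in for real records:

```python
def deinterleave_by_line(records, n_bands):
    """Line-by-line interleave: record 1 is band 1 of line 1, record 2
    is band 2 of line 1, and so on.  Regroup into one list of lines per
    band (band-by-band order)."""
    bands = [[] for _ in range(n_bands)]
    for i, record in enumerate(records):
        bands[i % n_bands].append(record)
    return bands

def deinterleave_by_pixel(record, n_bands):
    """Pixel-by-pixel interleave: successive words in one record come
    from different sensors.  Split one line into per-band sample lists."""
    return [record[b::n_bands] for b in range(n_bands)]

# Two lines of a two-band image, interleaved line-by-line.
records = ["L1B1", "L1B2", "L2B1", "L2B2"]
by_band = deinterleave_by_line(records, 2)

# One line of a three-band image, interleaved pixel-by-pixel.
samples = [1, 2, 3, 4, 5, 6]
per_band = deinterleave_by_pixel(samples, 3)
```

These layouts later got standard names (band-sequential, band-interleaved-by-line, band-interleaved-by-pixel), but the unscrambling logic was the same either way.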
The procedure was to mount the tape, then run your I/O program, pointing it at the tape drive and sending the data it read to some file, unscrambling it depending on the format. The disadvantage over the modern system was you had to know how to program. The advantage was you didn't have to hassle with a multitude of input formats and proprietary software and getting different programs talking to each other. And writing your own I/O routine only meant learning a programming language, an editor to write it with, an operating system for file management, and a few simple OS commands to compile, link, and execute it and to access and read the input and create and write to the output files.
To me, the DTEDs always looked like microscope slides of brain tissue when displayed on the monitor. The drainage patterns on the ground resembled the folds of the cerebral cortex.