computer a tremendous spectrum of storage
capacity versus access time variability. All of this is effected within the
computer through stored sets of instructions called programs.
In the previous
chapter, we suggested that threats to computer systems could be mitigated by
hardware. If the architecture and construction of computer systems are well
understood, then trust can be derived from that level of causality. However,
this presupposes that software running on the hardware can be understood in a
similar fashion. Experience has shown us that establishing this understanding
requires significant procedural integrity during the software programming and
installation processes. In the following sections, we will present a very
cursory overview of some of the salient developments along the paths of
operating system evolution. Our purpose is certainly not to offer a history of
these developments; our survey is much too terse and spotty for that. Rather,
we simply want to give some contextual flavor to software development in
general, and operating system development specifically, to support our
consideration of computers and computer networks as extensions of social
ecosystems.
The earliest
commercial computers, such as the IBM 650 or IBM 1620, were very much single-user tools,
on the order of a table saw or a lathe. Developing and running software on
these machines generally required complete control over the system on the part
of the programmer, who served also as the computer operator. The languages used
to define some series of processing steps tended to be very close to the
sensorimotor environment of the computer itself. On the IBM 1620, usually the first indication that
a program had a problem was when the large, red Check Stop light came on. It
was hard to miss, positioned as it was on the main control panel of the
machine. It indicated in the strongest terms that either the computer was not
able to do what it had been instructed to do, or it did not know where to go
for its next instruction.
Since
evolutionary processes have a tendency to build through enhancements to
existing mechanisms, rather than replacing them whole cloth with a better
approach, it might be useful to walk through the early steps of making a stored
program run on the earliest computers. The point is that our most advanced
systems today generally perform many of the same operations; these instructions
have simply been ground into the structure of the newer systems, and we see only
the more profound results of many of these primitive baby steps. One might
think of this as the computer equivalent of the biogenetic law: ontogeny
recapitulates phylogeny. We mentioned in Chapter 4 that this law is actually
not fully true for biological systems, nor is it universal for computer
systems; but there is enough validity in the concept to warrant the
comparison. In biological systems, the law applies to the embryonic
development of an individual of a species, while in the case of computers the
observation applies to the powering up of a modern computer system. So, let us
consider some of these baby steps.
Input of information
to the IBM 1620 was typically through punched
cards. One punched the desired programming steps into a series of these
cards, creating a card deck. As cattle graze in herds and whales swim in
pods, so cards live in decks. The language used to convey these programming
steps was generally an assembly language. This form of language is
barely one step removed from the pure bit patterns that defined the command
structures in the most basic form of a computer's binary representation: its
machine language. The term assembly language is not terribly colorful. Not that
other terms in the computer world are particularly exciting or illuminating from
an aesthetic viewpoint, but assembly language just sits there. It seems
somewhat