comparison in our final chapter.
Software is written in a language that can be understood by the computer. The
ever-changing landscape of the computer world reflects the ongoing attempt to
construct languages that bridge the gap between the sensorimotor environment of
the computer and the sensorimotor environment of the programmers who create the
software. A good programmer can think like a computer; a good computer language
sounds like a person talking to another person. Software expresses the
computer and, as such, is the mechanism that translates the concepts of this
book into practice.
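As a minimal sketch of that gap (Python is used here purely for its
readability; any modern high-level language would serve), compare how close the
source text can come to plain speech with what the machine actually executes:

    guest_list = ["Ada", "Grace", "Alan"]

    # The high-level statement reads almost like one person
    # instructing another:
    for name in sorted(guest_list):
        print("Welcome,", name)

    # What the processor ultimately runs is far less conversational.
    # CPython's standard dis module reveals the bytecode behind a
    # tiny function:
    import dis
    dis.dis(lambda names: sorted(names))

The distance between those two views is precisely the distance the language
designer is trying to close.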
At the
high level at which we're looking at computers, we need to add the concept of
an application framework. Such a framework specifies how the software
needed for a particular task is installed in the memory of the computer. In a
very real sense, this is how computers acquire new skills, or, conversely, lose
them when the application is removed. Applications are programmed to build on
the innate capabilities of the computer, introducing new ways of processing
input and output information. Here we are using the word programmed in exactly
the same way that we earlier referred to the establishment of acceptable
stimulus-action responses within the human brain. We view the programming, or
training, as coming from an external source and, through some type of positive
action, being imprinted upon the controlling, cognitive elements within the
computer.
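To make the analogy concrete, here is a small sketch in Python. The third-party
requests package is a real, widely used example, and example.com a standard
test address, though any installable library would illustrate the same point:
the bare interpreter arrives with certain innate capabilities, and installing
an application package imprints a new skill upon it.

    # Innate capability: the interpreter can already do arithmetic.
    print(2 ** 10)                      # 1024

    # Acquired capability: once the third-party 'requests' package has
    # been installed (for example, with 'pip install requests'), the
    # same machine has learned to speak HTTP.
    import requests
    reply = requests.get("https://example.com")
    print(reply.status_code)            # 200 if the request succeeds

    # Uninstall the package and the skill is lost again, just as a
    # removed application takes its capabilities with it.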
We noted in the
second chapter that the early decades of computer development and deployment
saw reciprocal effects in the size and number of computers. Early computers
were big, bulky and relatively few in number. Over time, the computers became
smaller and subsequently much greater in number. The connection between the two
effects illustrates the variability of the ecosystems in which computers
operate. With the larger systems, their source of food, that is, the energy
required to run them, and the sheer amount of it needed, placed a distinct
limitation on where such machines could be located. They required considerable
logistical support for their operation as well: air conditioning systems to
keep the machines from melting themselves and, perhaps more important, from
cooking the people required to operate them and keep them in a proper state of
repair.
A rather typical
computer circa 1965 would require a room of a few hundred to a few thousand
square feet in area and a staff of perhaps ten to twenty people to keep it
running around the clock. Such machines were expensive to own and to operate.
The net result of this large logistical infrastructure certainly limited the
mobility of these computers (once put in place, they were simply not moved
without exceptional cause) and consequently limited the types of problems to
which they could be applied. With the advent of the transistor and subsequently
the invention of the integrated circuit, the size of computers began to
decrease, and with each diminution came a new realm of use for the machines.
The evolutionary
process of natural selection applied itself quite well to the emerging, smaller
computer systems. Each design iteration produced systems that could be housed
in more varied locations, required fewer and less specialized operators, and
were consequently applicable to a much broader range of problems. The noises of
the dinosaurs as they became extinct were noteworthy in their lack of
comprehension: "Why would anyone want a computer in their home?" Ken Olsen,
founder and chairman of Digital Equipment Corporation, reportedly asked, years
before his company foundered with the advent of the very personal computer he
decried.
In 1959, future
Nobel laureate Richard Feynman gave a classic presentation entitled "There's
Plenty of Room at the Bottom," in which he suggested the possibilities
inherent in making all of our machines smaller. In contemplating the usefulness
of miniaturized computers, he acknowledged that no one then understood how to
accomplish the miniaturization in a cost-effective manner, but