of Concepts, which
one of the authors (Bertrand) published with Yi Mao at the Third Conference
on Experience and Truth in 2006. While the paper is less accessible than the
presentation herein (imagine that!), it follows the canons of publication in
the field of logic (using Richard Montague’s Formal Philosophy) for the phenomena that we just described.
Thanks to
cognitive languages, computers now have the means to venture into areas previously
reserved for humans. We are getting ready to come back to the processes of
religion and science that we discussed earlier. However, we will first look at
the beginning of it all, the provisioning of the capabilities that we now
recognize in both humans and computers.
With the
emergence of secure core components in the late 1970s and early 1980s, a common
software paradigm was followed for the development of secure systems. Within
this paradigm, the secure core was envisioned as a component of a larger
system, a component that could be used to establish the presence of a person at
a point of interaction with some degree of assurance as to the integrity of
this presence. In other words, the secure core was a mechanism to enhance the
level of trust in the ensuing interaction.
The secure core could
be made part of a token, for example a card that was carried on the person. The
software found on the token was designed in specific relation to the larger
system. The information stored on the token and the operations performed on the
token were part of the context of the larger system. The token was typically given
a physical interface through which it could then be accessed at a specific
interaction point. The details of this physical interface might well vary from
system to system. This approach tended to minimize, if not completely eliminate,
the possibility of using a specific token in multiple systems. The earliest
incarnations of such a mechanism were obviously similar in character to a
door key, given that this was the model on which the token was based. One can
have a system of arbitrary size and complexity locked away in a room behind a
door. If the key to the door is presented, then the door can be opened and the
full system exposed. Without the key, the system remains inaccessible.
In the course of
deploying a number of systems in this manner, many similar problems were
identified as characteristic of the use of tokens, independent of the larger-scale
system in which they were used. Rather obviously, one of the first
recognized areas of commonality was the need for consistent physical interfaces
between system interaction points and tokens. This led to the development of
standards for the interfaces used to connect tokens to the general-purpose
systems that provided the services enabled by the tokens. Many purely software-level
commonality issues were identified as well. For example, some sort of
authentication linkage between the token and the token bearer was necessary
in order to prevent a lost token from being
easily used by a different person. A major goal of the use of a token was
enhanced security of interactions, so such a linkage was important no matter
the details of the specific system in question. The mechanism that evolved to
meet this need was that of bearer
verification. When the token was connected to some interface device at an
interaction point, the token bearer was asked to enter a personal
identification number. This number was then conveyed to the token and compared
with a stored value that was, in fact, a secret number that had been placed
there by the token bearer when the token was first issued. The number was
ostensibly known only by the token bearer and the software on the token, so it
could be used to authenticate the identity of the token bearer to the token. While the general theme of
this operation might be similar from one system to the next, the details of the
operation were quite system-specific.
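To make the mechanism a little more concrete, the sketch below models bearer verification in Python under stated assumptions: the class name, the three-attempt retry limit, and the constant-time comparison are illustrative choices of ours, not details drawn from any particular token or system described here.

```python
# Illustrative sketch only: the names (Token, verify_bearer, MAX_ATTEMPTS)
# and the retry policy are assumptions, not any specific card's actual API.
import hmac


class Token:
    """A minimal model of a token performing bearer verification."""

    MAX_ATTEMPTS = 3  # assumed limit on failed attempts before the token blocks itself

    def __init__(self, stored_pin: str):
        # The secret reference value placed on the token when it was first issued.
        self._stored_pin = stored_pin
        self._remaining_attempts = self.MAX_ATTEMPTS

    def verify_bearer(self, entered_pin: str) -> bool:
        """Compare the number entered at the interaction point with the stored secret."""
        if self._remaining_attempts == 0:
            return False  # token refuses further attempts once blocked
        # Constant-time comparison avoids leaking information through timing.
        if hmac.compare_digest(entered_pin.encode(), self._stored_pin.encode()):
            self._remaining_attempts = self.MAX_ATTEMPTS
            return True
        self._remaining_attempts -= 1
        return False


# Example interaction: the interface device forwards the bearer's entry to the token.
token = Token(stored_pin="4821")
print(token.verify_bearer("1234"))  # False: bearer not authenticated
print(token.verify_bearer("4821"))  # True: bearer authenticated to the token
```

The essential point the sketch illustrates is that the comparison happens on the token itself, so the secret never needs to leave it; how the entered number reaches the token, and what happens after too many failures, were exactly the kinds of details that varied from system to system.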