Governance of
computer-based interactions derives from policy specification mechanisms that
are grounded in trust authorities and in mechanisms that can be projected
across the network. The detailed architecture of individual computers and the
details of their network interconnections are a function of system
administration. The administrators, entities whose role is to enact
administration functions, derive their authority of control from their
identities as established within the trust infrastructure of the network.
The software
through which computer-based social groupings are effected is generally divided
into three component domains: application software, operating system software,
and middleware. The concepts of application software are based in the language
of the problems being addressed at any particular time. Operating system
software, on the other hand, is concerned with the sensori-motor environment of
the specific computer system in question. Middleware generally comprises some,
or all, of the protocols that connect these two domains.
In the computer
world, there is a given with regard to any level of security: a trust
infrastructure must be based on a secure computer platform. To draw a parallel
to the human mind, a secure platform is one on which it is not possible to
establish an altered state of consciousness. This is simply a way of saying
that no outside source can be allowed to impinge on the sensori-motor
environment of the secure computer. There can be no eavesdroppers to detect and
perhaps modify sensitive or secret information, and there can be no
substitution or modification of the stored program that controls the
sensori-motor environment of the computer. At present, the only way to
effect any level of security within an unsecured or dubious platform is to
add yet another platform that is secure and hence trusted.
Given a trusted
platform, it is possible to establish a trusted communication channel with
another trusted platform across connections that are not, themselves,
intrinsically trusted. This can be accomplished through the use of
cryptographic processes on the trusted platforms. In the same way, two generals
communicating during a battle send each other secret messages that cannot be
understood by the enemy even if they are intercepted. This allows for the
creation of a single trust environment within physically disjoint systems where
the space separating them is suspect. Thus, it is possible to separate the
client element of an application from the server element and still enable them
to share a common trust environment, as long as each resides on a trusted
platform.
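As a minimal sketch of this idea, assume only that a secret has been provisioned into both trusted platforms out of band; the key and message contents below are invented for illustration. A keyed hash lets each platform detect whether the untrusted channel between them has altered a message in transit:

```python
import hashlib
import hmac

# Hypothetical shared secret, assumed to have been provisioned into both
# trusted platforms out of band (e.g., at manufacture time).
SHARED_SECRET = b"provisioned-platform-secret"

def protect(message: bytes) -> bytes:
    """Append an HMAC tag so the receiver can detect any in-transit change."""
    tag = hmac.new(SHARED_SECRET, message, hashlib.sha256).digest()
    return message + tag

def verify(packet: bytes) -> bytes:
    """Return the message if its tag is intact; raise if it was tampered with."""
    message, tag = packet[:-32], packet[-32:]
    expected = hmac.new(SHARED_SECRET, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message was modified in transit")
    return message

packet = protect(b"attack at dawn")
assert verify(packet) == b"attack at dawn"

# An enemy who alters the message cannot forge a matching tag without the key.
tampered = b"attack at dusk" + packet[-32:]
try:
    verify(tampered)
except ValueError:
    pass  # the forgery is detected
```

This sketch provides only integrity and authenticity; a real trusted channel would also encrypt the messages and guard against replay, but the principle is the same: the cryptographic secret lives only on the trusted platforms, so the suspect space between them can neither read nor silently modify the traffic.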
From a trusted
platform, it is also possible to measure selected characteristics of another
platform in order to imbue that platform with some level of trust. Further, it
is possible, again from a trusted platform, to detect changes of certain types
and levels in another platform in order to evaluate whether an ostensibly
trusted platform has been compromised.
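Such measurement can be sketched, under the assumption that the measuring platform holds known-good ("golden") digests of the other platform's components; the component names and image contents here are purely illustrative:

```python
import hashlib

# Hypothetical "golden" measurements of a remote platform's components,
# recorded when that platform was known to be in a good state.
EXPECTED = {
    "bootloader": hashlib.sha256(b"bootloader v1.0 image").hexdigest(),
    "kernel": hashlib.sha256(b"kernel v5.4 image").hexdigest(),
}

def measure(component_images: dict) -> dict:
    """Hash each component image, as the measuring platform would."""
    return {name: hashlib.sha256(image).hexdigest()
            for name, image in component_images.items()}

def appraise(reported: dict) -> bool:
    """Trust the platform only if every measurement matches its golden value."""
    return measure(reported) == EXPECTED

good = {"bootloader": b"bootloader v1.0 image",
        "kernel": b"kernel v5.4 image"}
bad = dict(good, kernel=b"kernel v5.4 image with implant")

assert appraise(good) is True   # measurements match: some trust is warranted
assert appraise(bad) is False   # a changed component is detected
```

The same comparison, repeated over time, is what allows a trusted platform to detect that an ostensibly trusted peer has changed and may have been compromised.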
Trusted
platforms derive their trust from intrinsic characteristics that make them
difficult or impossible to manipulate or modify. The degree to which they are
impervious to such manipulation is termed their level of tamper resistance.
Today, essentially no computer platforms are inherently tamper-proof;
there are mechanisms to subvert virtually all known architectures, albeit at
various costs. So, given that no platform is immune from manipulation, a
necessary feature of trusted platforms is that of being tamper-evident. That
is, if a platform is manipulated, then it should exhibit one or more telltale
signs that it has been manipulated.
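One common way to make manipulation evident, loosely resembling the extend operation of a hardware measurement register (the event names below are hypothetical), is to fold each boot-time event into a running hash. Any altered, inserted, or reordered event changes the final value, leaving a telltale sign when it is compared with a known-good record:

```python
import hashlib

def extend(register: bytes, event: bytes) -> bytes:
    """Fold a new event into the register; both content and order matter."""
    return hashlib.sha256(register + hashlib.sha256(event).digest()).digest()

def replay(events) -> bytes:
    """Compute the register value that a given event sequence produces."""
    register = bytes(32)  # the register starts at all zeros
    for event in events:
        register = extend(register, event)
    return register

# Golden value recorded from a known-good boot sequence.
golden = replay([b"firmware", b"bootloader", b"kernel"])

# The same sequence reproduces the golden value; any manipulation
# (a changed component, or a reordered sequence) does not.
assert replay([b"firmware", b"bootloader", b"kernel"]) == golden
assert replay([b"firmware", b"rootkit-bootloader", b"kernel"]) != golden
assert replay([b"bootloader", b"firmware", b"kernel"]) != golden
```

Because the register can only be extended, never rewritten, a manipulated platform cannot retroactively make its record match the golden value; the discrepancy itself is the tamper evidence.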
Finally, another
given relative to computer systems is that complexity in computer software
presents an enhanced threat environment. The problem is that, today, computer
programming is still a cognitive process of the human mind. At the highest
levels, it requires attention and action