asks of God, “When the people ask who has sent you, what shall I tell
them?” God then responds, “I Am that I Am. Tell them that ‘I Am’ has sent
you.” Now, within a computer security infrastructure, this is a classic
example of a self-signed assertion. I am who I say I am; you may accept and
trust that or not.
In computer
science, the equivalent of a self-signed assertion is called a self-signed certificate.
A digital certificate is a credential through which a trusted third party can
attest to the trustworthiness of two parties that want to enter into a
transaction with each other. If both parties individually trust that third party, then they
should be able to trust each other. This is the digital equivalent of the
Letter of Credence that ambassadors present to a head of state to establish
representation. The trusted third party signs one certificate that says, “This
is Tom,” and then a second certificate that says, “This is Sally.” When the two
parties meet, they can exchange these certificates and, again, if each trusts
the signer of the certificates, then Tom trusts that this is Sally and Sally
trusts that this is Tom. The question then becomes, how does the
trusted third party introduce itself? It does so by signing its own certificate
that says, essentially, “I am” and then providing copies of this certificate to
both Tom and Sally. If the signature on each credential (certificate) given to
Tom and Sally verifies against the key carried in the self-signed certificate,
then each of them can determine that all of the certificates came from a party
they both trust.
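To make the chain of signatures concrete, here is a minimal sketch in Python using the third-party cryptography package. It models certificates simply as signed byte strings rather than real X.509 structures, and every name in it (ca_key, tom_credential, and so on) is illustrative rather than drawn from any particular system.

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

# The trusted third party generates its key pair.
ca_key = ed25519.Ed25519PrivateKey.generate()
ca_public = ca_key.public_key()

# The "self-signed certificate": the third party signs its own public key,
# asserting, in effect, "I am."
ca_public_bytes = ca_public.public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
ca_self_signature = ca_key.sign(ca_public_bytes)

# The third party also signs a credential for each of Tom and Sally.
tom_credential = b"This is Tom"
sally_credential = b"This is Sally"
tom_signature = ca_key.sign(tom_credential)
sally_signature = ca_key.sign(sally_credential)

# When Tom and Sally meet, each checks the other's credential against the
# public key they both received; verify() raises InvalidSignature if a
# credential was not signed by the party they trust.
ca_public.verify(ca_self_signature, ca_public_bytes)  # the self-signed assertion
ca_public.verify(sally_signature, sally_credential)   # Tom checks Sally
ca_public.verify(tom_signature, tom_credential)       # Sally checks Tom
print("All certificates trace back to the same trusted signer.")
```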
Finally, we must
consider again the establishment as well as conveyance of trust that can be
accomplished through various degrees of force, ranging from fear for one’s
physical well-being to cognitive intimidation induced by threats and coercion.
Trust established in any of these manners propagates directly into the
subordinate policy infrastructures in the form of implied or direct
consequences of interactions. In such situations, force or fear may be used to
shape the form and content of interactions in advance, or to judge the
acceptability of the form and content of interactions after the fact. In either
case, the fear of the consequences for non-adherence to the rules becomes a
guiding principle of the policy infrastructure.
Before
addressing the policy infrastructure, we need to emphasize that readers
familiar with the literature on trust may find our approach much broader than
they are used to. For example, we suggest that interested readers refer to
Julian Rotter’s seminal paper “A new scale for the measurement of
interpersonal trust” for a psychologist’s approach to the subject
encompassing many elements that will be found in this book. In particular,
Rotter studies the relationship of religion to trust in a quantitative manner;
as far as we know, this is the earliest and most formal attempt to address the
subject in an experimental setting. Obviously, we assign trust a much greater
importance, placing the trust infrastructure at the center of the development
of both individuals and groups, humans and computers. We do not consider trust
an epiphenomenon of human behavior, but rather a central mechanism of human
survival. This should not come as a surprise, as the very lack of a global
approach to the concept of trust in both human and computer systems in the
current literature has been a major driver in our writing this book.
The policy
infrastructure is that portion of the system that establishes and implements
policy at the system’s transaction points. Policy encompasses the following
concepts: (a) specification of the rules governing interactions in general,
and transactions specifically, (b) processes for negotiating, or otherwise
arriving at, the rules for a transaction, (c) processes for the application of
rules during transactions, and (d) enforcement of, or consequences of, the
rules applied during a transaction.
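As a rough illustration of how these four concepts might fit together in software, the following Python sketch models (a) rules, (b) a negotiation step, (c) rule application during a transaction, and (d) enforcement of consequences. It is a hypothetical toy, not a description of any real policy engine; every name in it is invented for this example.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Rule:
    name: str
    check: Callable[[Dict], bool]   # (a) a rule governing a transaction
    consequence: str                # (d) what follows if the rule is violated

def negotiate(proposed: List[Rule], accepted: set) -> List[Rule]:
    # (b) the parties keep only the rules both sides have agreed to
    return [r for r in proposed if r.name in accepted]

def apply_rules(rules: List[Rule], txn: Dict) -> List[Tuple[Rule, bool]]:
    # (c) evaluate each agreed rule against the transaction
    return [(r, r.check(txn)) for r in rules]

def enforce(results: List[Tuple[Rule, bool]]) -> List[str]:
    # (d) collect the consequences of every rule that failed
    return [r.consequence for r, passed in results if not passed]

proposed = [
    Rule("amount_limit", lambda t: t["amount"] <= 100, "transaction rejected"),
    Rule("known_sender", lambda t: t.get("sender") is not None, "sender flagged"),
]
agreed = negotiate(proposed, {"amount_limit", "known_sender"})
outcome = enforce(apply_rules(agreed, {"sender": "Tom", "amount": 250}))
print(outcome)  # ['transaction rejected']
```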