sometimes being perverse, one day we are sure to meet a penguin. From the
movies, we know they can dance, but can they fly? Well, we now have to change
our ontology to say that not all birds fly. So, we have to retract an existing
assumption and replace it with a new one. Learning, as we can see, is not only
about adding knowledge. It's also about changing or sometimes even invalidating
previous knowledge. Now we are faced with a new situation: that of realizing
that our knowledge is subject to doubt. This leads us once again to trust.
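The retraction step above, where a specific exception overrides a general rule, can be sketched in code. This is only an illustrative fragment, not any particular knowledge-representation system; the class and method names are invented for the example.

```python
# A minimal sketch of defeasible knowledge: general defaults ("birds fly")
# that can be overridden by more specific exceptions ("penguins do not").
# All names here are illustrative, not from any real reasoning library.

class KnowledgeBase:
    def __init__(self):
        self.defaults = {}     # category -> default property, e.g. "bird" -> "flies"
        self.exceptions = {}   # individual kind -> overriding property

    def assert_default(self, category, prop):
        self.defaults[category] = prop

    def assert_exception(self, kind, prop):
        # Learning the penguin case: a specific fact overrides the default,
        # rather than simply being added alongside it.
        self.exceptions[kind] = prop

    def query(self, kind, category):
        # Exceptions take precedence over the general default.
        if kind in self.exceptions:
            return self.exceptions[kind]
        return self.defaults.get(category)

kb = KnowledgeBase()
kb.assert_default("bird", "flies")
print(kb.query("robin", "bird"))     # the default still applies to robins
kb.assert_exception("penguin", "does not fly")
print(kb.query("penguin", "bird"))   # the retracted assumption no longer holds
```

The point of the sketch is that the later assertion does not contradict the knowledge base; it revises it, which is exactly the behavior the penguin forces on us.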
Trust and Content
As we’ve seen,
any content needs to be associated with a level of trust. Whether it is human
or computer knowledge, there must be a mechanism to say whether that
information is understood such that certain actions can be undertaken with some
expectation of the outcome of the action. Alternatively, we need to recognize
that the information is not fully understood and that other actions are
needed, with some alternate expectation of an outcome. Trust is the measure of
how well we understand the information such that we can expect a specific
outcome from an action with some degree of certainty. Of course, trust as we
have defined it is a gradient that ranges from no trust at all to complete
trust. Here, we need to make a point: complete trust is a religious concept.
At this point, we can come back to our famed
sentence “This sentence is false.” We were observing that humans can process
this sentence without difficulty whereas it is seemingly a big problem for a
computer. Now, imagine that just as humans have a mechanism to evaluate every
piece of knowledge and assign to it a level of trust, before processing,
computers would look at every piece of data they have in the same way. When
presented with “This sentence is false.” the computer would first evaluate its
chance of processing it. If it accepts that sentence blindly, it will go into
an unending spin. However, if it approaches it with caution, that is, with less
than complete trust, it may readily recognize that it should be careful and
stop processing at once; in case of further uncertainty, it may decide to limit
the time it allocates to processing it. That's a small illustration of the
concept of trust, and we'll go into it in more depth in the next chapter.
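The time-limiting idea can be made concrete with a small sketch. The budget mechanism and the function names below are assumptions invented for illustration; the only point is that a bounded effort lets the machine surrender gracefully where blind processing would spin forever.

```python
# A sketch of trust-bounded processing: the lower our trust in a piece of
# input, the fewer evaluation steps we are willing to spend on it. The
# liar sentence has no stable truth value, so the evaluation below never
# converges; a step budget lets the machine give up instead of looping.

def evaluate_liar(max_steps):
    """Search for a stable truth value of 'This sentence is false.'"""
    value = True
    for step in range(max_steps):
        new_value = not value          # the sentence asserts its own falsity
        if new_value == value:         # a fixed point would mean convergence
            return value, step
        value = new_value
    return None, max_steps             # budget exhausted: no stable value

def process(trust, base_budget=1000):
    # trust is a number in [0, 1]; it scales the effort we commit.
    budget = max(1, int(trust * base_budget))
    result, steps_used = evaluate_liar(budget)
    if result is None:
        return f"undecidable within {steps_used} steps"
    return f"value {result} after {steps_used} steps"

print(process(trust=0.01))   # low trust: small budget, quick surrender
```

With complete trust the machine would grant an unlimited budget and never return, which is precisely why, on this view, complete trust is something a computer cannot afford.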
The emergence of
personal electronic devices centered on secure cores in the commercial
marketplace can be illustrated through a number of distinct case studies. In
each case, the reason for success or failure can typically be traced to two
characteristics of the situation. One characteristic is common across all cases
and one characteristic is similar, but just a bit different in each case. The
common characteristic is the fact that deployment incurs an infrastructure
problem that is generally best solved by a large-scale system deployment
through which the personal electronic device infrastructure can be added at the
ground floor in the development of the full system. The similar, but slightly
different, characteristic is, of course, money.
It is worth
noting that these success stories all have the common theme that they
illustrate personal electronic devices very much as an emergent species of
computer; specifically, taking their secure core as a starting point. None of
the stories center on the more significant features of complex social
ecosystems that we think are the future of personal electronic devices, or
whatever their descendant species might be called. The cases are of
interest, however, because they at least give us a view of the concepts, albeit
operating at the technical edges of social orders.
The first large-scale
deployment of a phone-card-based system was undertaken by France Telecom
during the early 1980s. The overarching system of concern was the deployment
of pay telephones throughout France. The point of concern was the prospect
of fraud in the handling of large amounts of currency in the form of coins. The
fraud could take a variety of forms: fraud in the