This seemingly innocuous observation turns out to be the source of a reversal of trust. For example, if I
manage to make my personal electronic device interact seemingly successfully
with trusted institutions while an observer takes notes, I may convince the
observer that my personal electronic device can be trusted. I might then be
able to engage the observer in a fraudulent transaction. Well, “That’s way too complicated for a computer, isn’t it?” Actually, it is not; today we have a spectacular example of such a reversal of trust in the Google search engine. Google ranks pages so that its users can trust that the most important pages will be presented first, meaning pages that bear less risk of being disingenuous. For example, when asking about a bank, it would be very disconcerting if the first page presented by Google were a fake bank page, such as those we often see referenced in spam e-mail, designed to fraudulently capture usernames and passwords for later use on the real bank site. Since Google constantly tunes its algorithms to avoid such a catastrophe, we place trust in it and typically consider its top page ranks to be synonymous with quality. Those of us who download software similarly trust the top pages not to offer suspect software. We do that because we assume that the top download pages are used by multitudes, and therefore any fraud would stand a good chance of being quickly and reliably detected.
Consequently, Google is a source of trust. However, once this is known, the potential for a reversal of trust exists. If a hacker could somehow reverse-engineer the algorithm Google uses to rank pages, they could perhaps push their own pages to the top of Google’s ranking, albeit perhaps only for a short time. Users would then go to these pages based on their trust in Google, but would then be at the mercy of whatever scheme the page has in store for them, such as harvesting their banking information or serving virus-laden downloads. Obviously, search engine operators such as Google put great effort into thwarting such misuse. Still, it does make one wonder: what about search engines that rank their query results based on who has paid them the most money? So we see that not only can trust be abused, it can also be used for abuse. As the French say, “the dress makes the monk.”
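To make the mechanics concrete, here is a minimal sketch of link-based ranking. Google’s production algorithm is proprietary and far more elaborate, but the published PageRank idea from which it grew fits in a few lines; the page names below (real-bank, fake-bank, the farm pages) are purely illustrative. The second run shows how a farm of pages created solely to link to a fraudulent page can inflate that page’s score, which is exactly the reversal of trust just described.

```python
# Sketch of PageRank-style link ranking on a toy web. This illustrates
# only the published PageRank idea, not Google's actual algorithm.

DAMPING = 0.85     # probability of following a link rather than jumping
ITERATIONS = 50    # power iteration converges quickly on small graphs

def pagerank(links):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(ITERATIONS):
        new_rank = {p: (1.0 - DAMPING) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                  # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += DAMPING * rank[page] / n
            else:
                share = DAMPING * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# An honest toy web: independent pages link to the real bank site.
web = {
    "news": ["real-bank"],
    "blog": ["real-bank"],
    "forum": ["real-bank", "news"],
    "real-bank": [],
    "fake-bank": [],
}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))   # real-bank ranks highest

# A farm of pages created solely to link to the fake page inflates its
# score above the genuinely popular one -- the reversal of trust.
web.update({f"farm{i}": ["fake-bank"] for i in range(20)})
ranks = pagerank(web)
print(max(ranks, key=ranks.get))   # now fake-bank ranks highest
```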
How does a computer protect against trust reversal? At a rather mechanical level, the typical way is to make sure that institutions authenticate themselves in a trusted fashion when a personal electronic device interacts with them. As we have noted, there is only one safe way to do that: institutional systems must have a trusted core that enters into an exchange of information with the trusted core of the personal electronic device, making sure that the two parties are mutually trustworthy before engaging in any transaction. It hardly needs emphasizing that we are a far cry from finding this situation prevalent, and therefore trust, the extension of trust, deception, and the reversal of trust are here for the long term.
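What might that exchange of information look like? Below is a minimal sketch of mutual challenge-response authentication, assuming a symmetric key that some trusted provisioning step has already placed in both cores; real trusted cores would more likely hold certified asymmetric key pairs and follow a standardized protocol, and the TrustedCore class and its methods are illustrative, not an actual API.

```python
# Minimal mutual challenge-response sketch between two "trusted cores",
# assuming a pre-shared symmetric key. Illustrative only.

import hmac, hashlib, os

class TrustedCore:
    def __init__(self, name, shared_key):
        self.name = name
        self.key = shared_key

    def challenge(self):
        """Issue a fresh random challenge (a nonce)."""
        self.nonce = os.urandom(16)
        return self.nonce

    def respond(self, nonce):
        """Prove possession of the key by keying a MAC over the challenge."""
        return hmac.new(self.key, nonce, hashlib.sha256).digest()

    def verify(self, response):
        """Check the peer's response against our own computation."""
        expected = hmac.new(self.key, self.nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

key = os.urandom(32)                    # provisioned into both cores
device = TrustedCore("personal device", key)
institution = TrustedCore("bank", key)

# Each side challenges the other; only a holder of the key can answer.
ok_device = institution.verify(device.respond(institution.challenge()))
ok_bank = device.verify(institution.respond(device.challenge()))
print("mutually trustworthy:", ok_device and ok_bank)
```

The essential property is that each side proves possession of the shared secret without ever transmitting it, and because each challenge is a fresh random nonce, an eavesdropper cannot replay an earlier successful response.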
By examining in
detail the physical properties of secure cores, we’ve seen that physical entities embody trust inasmuch as their production and functioning are trusted.
Further, we saw that when they are indeed trusted, that trust extends to
several functions of the entities. Following the methodology that we’ve adopted
for this book, we should at this point ask ourselves whether such trust
properties are only attached to physical embodiments, or whether they give us
potential insights into human concepts of trust. Remember, we started with
personal electronic devices and we have seen clearly that the role of the
trusted core is first to serve as a means to guarantee that the personal
electronic device does faithfully represent its owner. The personal electronic
device is indeed the representative of the owner in the digital network. Actually,
the personal electronic device has a body with a thinking part attached to a
sensori-motor system, together with