seminal point, trust must then be recursively established through auditing
(visiting and inspecting) the facilities for fabrication, manufacturing,
provisioning and issuance of the secure core to the owner of the personal
electronic device. It typically also entails auditing of the software that is
loaded into the secure core. For the more advanced transcendent personal
device, this process will need some modification if the device is truly to be
an agent of the bearer.
While the owner of the transcendent personal
device will be fully in control, it will be impossible for every bearer of such
a device to go through the audit procedures that large-scale secure core
consumers follow. It will therefore be necessary to fall back on certification
standards: Common Criteria for the overall security level of the device, FIPS
140-2 for cryptographic operations, FIPS 201 for provisioning, and ISO/IEC
24727 for operational standardization.
Additional standards will be required to
cover the loading of software into both the body and the trusted core agent
of the transcendent personal device. This is likely to be a more
heavyweight system than is currently found in personal electronic devices, so
it will require facilities for performing software updates while at the same
time assuring that the transcendent personal device is truly functioning as a
fiduciary agent on behalf of the bearer. Standards will be required to assure
that the software systems, in addition to functioning correctly from a
technical standpoint, also function correctly from a policy standpoint.
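The requirement that an update be installed only when its provenance can be
verified might be sketched as follows. This is a minimal illustration, not a
scheme mandated by any of the standards above; the key name, the use of
HMAC-SHA-256, and the function names are all assumptions made for the example.

```python
import hashlib
import hmac

# Hypothetical device key provisioned into the trusted core at issuance.
PROVISIONED_KEY = b"key-installed-during-issuance"


def authentic_update(image: bytes, tag: bytes) -> bool:
    """Accept a software image only if its authentication tag verifies
    under the key provisioned into the secure core."""
    expected = hmac.new(PROVISIONED_KEY, image, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, tag)


# An issuer holding the key produces the tag alongside the update image.
update = b"new trusted-core software image"
tag = hmac.new(PROVISIONED_KEY, update, hashlib.sha256).digest()

assert authentic_update(update, tag)              # genuine image accepted
assert not authentic_update(update + b"x", tag)   # tampered image rejected
```

In practice a public-key signature scheme would be preferred, so that the
verifying device need not hold a secret capable of forging updates; the shared
key above merely keeps the sketch self-contained.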
It is an interesting aspect of computer
systems that the movement to what we now know as the personal computer was
actually a move toward greater susceptibility to a variety of threats that were
far more difficult to exploit with earlier variants of computer systems.
Specifically, early architectures for mainframe computers circa 1968 made use
of multi-level memory management
systems and security-oriented software architectures.
Multi-level memory management means that
access to memory by the processor occurs at multiple hierarchically layered
levels. Thus, a program executing at the superior level is always able to
wrest control of the computer system from programs running at subordinate
levels of memory. Moreover, with such a system, it is possible for the superior
level of memory to require its own involvement in certain operations performed
by the inferior level of memory. This enables the superior memory to perform
continuous watchdog checks on software running out of the inferior memory
segment. The computer system is brought into an operational status by a process
termed booting or bootstrapping, and the first program brought
into operation is termed the operating system kernel. For subordinate
programs to be brought into operation, they are subject to constraints defined
by, and enforced by, the kernel. When the first personal computer systems were
deployed, as a means to simplify both the hardware and software architectures
of the systems to make them less expensive, multi-layer memory management was
discarded. The result was that such computers were then open to a wide range of
attacks. Resorting to an anthropomorphic approach, such threats were given the
names of mechanisms closely associated with human physiology or social
structures: viruses, Trojan horses, worms (internal parasites), and the like.
The whole set of threat software is sometimes termed malware.
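The hierarchical protection discussed above can be sketched in a few lines:
the superior level mediates operations requested by subordinate programs and
can always refuse or reclaim control. The class and operation names here are
illustrative assumptions, not taken from any real system.

```python
class Supervisor:
    """Superior memory level: defines and enforces the constraints under
    which subordinate programs are permitted to operate."""

    def __init__(self, allowed):
        self.allowed = set(allowed)

    def request(self, program: str, operation: str) -> str:
        # A subordinate program cannot perform a sensitive operation
        # directly; the superior level must be involved, and may refuse.
        if operation not in self.allowed:
            return f"{program}: '{operation}' blocked by supervisor"
        return f"{program}: '{operation}' permitted"


# The kernel, loaded first at boot, occupies the superior level.
kernel = Supervisor(allowed={"read", "write"})

print(kernel.request("app", "read"))          # permitted
print(kernel.request("app", "patch-kernel"))  # blocked by supervisor
```

The point of the sketch is the asymmetry: the subordinate program has no path
to the protected operation except through the supervisor's check, which is the
property the early multi-level architectures enforced in hardware and the
first personal computers gave up.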
Miniaturization is advancing to the point
where order-of-magnitude increases in the memory of secure cores are becoming
possible. Now, in order to enhance the trust infrastructure, the form of memory
must reflect more secure architecture possibilities; for example, multi-layer
memory can allow for highly privileged secure operating system components. This
offers the prospect that in dealing with ever more complex policy environments,
the trusted core agents will not be subject to the classes of attack that have
plagued less protected personal computers.