humans? Could we make progress here
at all? That’s when the idea of defining ontologies came along.
For the weapons
specialist, the planning specialist and the decision maker to work together,
they needed to agree on common definitions of the data they would exchange, and
how those data are articulated. For example, if the decision maker asks the
planning specialist about the available firepower, the answer must be expressed
in unequivocal terms. That answer will then lead the decision maker to ask the
weapons specialist to perform actions, in terms that must be equally unambiguous. If
the decision maker says, “Fire on position 2!” after hearing about it from the
planner, there must be some common agreement on what “position 2” means. The
build-up of such a terminology, in all its intricacy, is what an ontology is.
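To make this a little more concrete, here is a small sketch in Python of what
such an agreement could look like. The terms and values are invented purely for
illustration, not drawn from any real military ontology; the point is simply
that every term the agents exchange is defined once, explicitly, so that each
of them resolves it to exactly the same meaning.

    # A minimal, illustrative sketch: the three agents share one explicit
    # vocabulary, so a term like "position 2" denotes exactly the same thing
    # for each of them. All names and values here are invented.
    ontology = {
        "position_2": {"kind": "map_position", "grid_reference": (51.04, 3.72)},
        "available_firepower": {"kind": "quantity", "unit": "rounds_per_minute"},
        "fire_mission": {"kind": "action", "requires": ["target_position", "weapon"]},
    }

    def ask(term):
        """Any agent resolving a term gets the same definition back."""
        return ontology[term]

    # The decision maker's "Fire on position 2!" is unambiguous because
    # "position_2" resolves to the same grid reference for every agent.
    print(ask("position_2"))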
So, we see that an ontology is a means for computers to share knowledge in a
consistent manner. One wonders, of course, where that barbaric term, ontology,
comes from. Philosophers since antiquity have themselves been puzzled by how
humans share information so efficiently, even if at times it appears desperately
inefficient. They identified the need for ontologies to describe human
knowledge, and so we see that the human concept of ontology and the computer
concept of ontology are in fact the same thing, except that for the computer to
understand humans, the ontology has to be explicit down to the smallest details,
since the computer lacks the shared sensorimotor experience of humans. The
computer is not capable of filling in the gaps in the description. The computer
needs the full description logic.
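To give a feel for what "explicit down to the smallest details" means in
practice, here is a small sketch in Python; everything in it is invented for
illustration. A human who hears "Fire on position 2!" silently fills in
conditions such as "the weapon must be in range" and "position 2 must be
hostile, not friendly"; for the computer, every one of those conditions has to
be written down as part of the description.

    # A minimal sketch of explicitness; a real ontology would be far richer.
    from dataclasses import dataclass

    @dataclass
    class Position:
        name: str
        hostile: bool
        distance_km: float

    @dataclass
    class Weapon:
        name: str
        max_range_km: float
        loaded: bool

    def can_fire(weapon, target):
        # Each clause is a detail a human would never bother to state out loud.
        return weapon.loaded and target.hostile and target.distance_km <= weapon.max_range_km

    print(can_fire(Weapon("howitzer", 30.0, True), Position("position 2", True, 12.5)))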
Well, now we
have a hopefully well-formed theory of content. We know something about how to
specify knowledge, we know how to represent it in a computer, and we know how
to share it. We should now come back to what people, or agents, do with the
content. Clearly, humans do not limit themselves to description logic. They do
utter statements like “This sentence is false.” without falling into
convulsions, their brains infinitely trying to figure out whether the sentence
is indeed false, or true, or false, ad infinitum. Humans are content simply to
say “This sentence is false.” and examine the peculiarities it involves, just
as they might examine a snail, or a story, or anything else. It is just a
sentence, and we can think about it. So, what about computers then? Can they
limit themselves to a subset of logic, while hoping to do human-like tasks?
Clearly they cannot. Computers need to access the full power of logic. If
humans do not go into infinite loops when provided with uncertain data, then
we need to understand why and give that capability to computers, if we indeed
feel that it’s worth making computers smarter.
First, let’s add
to our content apparatus the rules of full logic that are needed for the
computer agent to act with full effect. We will simply say that this is a
field still in development. There are several competing kinds of logic, and it
would be too complex, and not really needed for this book, to go into the
details. We would just like to expand on one topic, that of learning. The
question is: “Now that we have content, how do we improve it? How do we learn?”
For example, let’s say that I know that John, Mary and Virginia are siblings.
Let’s say we learn that there is a new child in the family: Joe. It’s easy enough
to add Joe to our description. Because none of the rules have changed, the
computer knows that Joe is the brother of John, Mary and Virginia. Everything
is fine. The computer can learn.
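As a rough sketch of this idea in Python (the data and the rule are simplified
inventions, not a real reasoner, and I have added a "male" fact that the text
leaves implicit, since "brother" needs it): the family members are facts, the
definition of brother is a rule, and learning about Joe amounts to adding a
fact and letting the unchanged rule do the rest.

    children = {"John", "Mary", "Virginia"}   # the known children of the family
    male = {"John"}                           # an extra fact that "brother" needs

    def brothers_of(person):
        """Rule: a brother of X is another child of the same family who is male."""
        return {c for c in children if c != person and c in male}

    # Learning: a new child, Joe, arrives. We add the facts; the rule is unchanged.
    children.add("Joe")
    male.add("Joe")

    # The computer now derives that Joe is the brother of John, Mary and Virginia.
    for sibling in ("John", "Mary", "Virginia"):
        print("Joe is a brother of", sibling, ":", "Joe" in brothers_of(sibling))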
But wait a minute: what happens if my initial knowledge was wrong? What if my definition of brother was wrong? Of course,
this is not likely to happen, but let’s consider a more subtle example. Let’s
say that we are building an ontology of ornithology. Sure enough, we’d have
in the ornithology ontology the fact that birds can fly. We would be happy
working with that in classifying the birds of our village. However, nature