7 LOGICAL AGENTS
In which we design agents that can form representations of the world, use a pro-
cess of inference to derive new representations about the world, and use these new
representations to deduce what to do.
This chapter introduces knowledge-based agents. The concepts that we discuss—the repre-
sentation of knowledge and the reasoning processes that bring knowledge to life—are central
to the entire field of artificial intelligence.
Humans, it seems, know things and do reasoning. Knowledge and reasoning are also
important for artificial agents because they enable successful behaviors that would be very
hard to achieve otherwise. We have seen that knowledge of action outcomes enables problem-
solving agents to perform well in complex environments. A reflex agent could only find its
way from Arad to Bucharest by dumb luck. The knowledge of problem-solving agents is,
however, very specific and inflexible. A chess program can calculate the legal moves of its
king, but does not know in any useful sense that no piece can be on two different squares
at the same time. Knowledge-based agents can benefit from knowledge expressed in very
general forms, combining and recombining information to suit myriad purposes. Often, this
process can be quite far removed from the needs of the moment—as when a mathematician
proves a theorem or an astronomer calculates the earth’s life expectancy.
Knowledge and reasoning also play a crucial role in dealing with partially observable
environments. A knowledge-based agent can combine general knowledge with current per-
cepts to infer hidden aspects of the current state prior to selecting actions. For example, a
physician diagnoses a patient—that is, infers a disease state that is not directly observable—
prior to choosing a treatment. Some of the knowledge that the physician uses is in the form of
rules learned from textbooks and teachers, and some is in the form of patterns of association
that the physician may not be able to consciously describe. If it is inside the physician’s head,
it counts as knowledge.
Understanding natural language also requires inferring hidden state, namely, the inten-
tion of the speaker. When we hear, “John saw the diamond through the window and coveted
it,” we know “it” refers to the diamond and not the window—we reason, perhaps uncon-
sciously, with our knowledge of relative value. Similarly, when we hear, “John threw the
brick through the window and broke it,” we know “it” refers to the window. Reasoning allows
us to cope with the virtually infinite variety of utterances using a finite store of commonsense
knowledge. Problem-solving agents have difficulty with this kind of ambiguity because their
representation of contingency problems is inherently exponential.
Our final reason for studying knowledge-based agents is their flexibility. They are able
to accept new tasks in the form of explicitly described goals, they can achieve competence
quickly by being told or learning new knowledge about the environment, and they can adapt
to changes in the environment by updating the relevant knowledge.
We begin in Section 7.1 with the overall agent design. Section 7.2 introduces a simple
new environment, the wumpus world, and illustrates the operation of a knowledge-based
agent without going into any technical detail. Then, in Section 7.3, we explain the general
principles of logic. Logic will be the primary vehicle for representing knowledge throughout
Part III of the book. The knowledge of logical agents is always definite—each proposition is
either true or false in the world, although the agent may be agnostic about some propositions.
Logic has the pedagogical advantage of being a simple example of a representation for
knowledge-based agents, but logic has some severe limitations. Clearly, a large portion of the
reasoning carried out by humans and other agents in partially observable environments de-
pends on handling knowledge that is uncertain. Logic cannot represent this uncertainty well,
so in Part V we cover probability, which can. In Part VI and Part VII we cover many repre-
sentations, including some based on continuous mathematics such as mixtures of Gaussians,
neural networks, and other representations.
Section 7.4 of this chapter defines a simple logic called propositional logic. While
much less expressive than first-order logic (Chapter 8), propositional logic serves to illustrate
all the basic concepts of logic. There is also a well-developed technology for reasoning in
propositional logic, which we describe in Sections 7.5 and 7.6. Finally, Section 7.7 combines
the concept of logical agents with the technology of propositional logic to build some simple
agents for the wumpus world. Certain shortcomings in propositional logic are identified,
motivating the development of more powerful logics in subsequent chapters.
7.1 KNOWLEDGE-BASED AGENTS
The central component of a knowledge-based agent is its knowledge base, or KB. Informally,
a knowledge base is a set of sentences. (Here “sentence” is used as a technical term. It is
related but is not identical to the sentences of English and other natural languages.) Each sen-
tence is expressed in a language called a knowledge representation language and represents
some assertion about the world.
There must be a way to add new sentences to the knowledge base and a way to query
what is known. The standard names for these tasks are TELL and ASK, respectively. Both
tasks may involve inference—that is, deriving new sentences from old. In logical agents,
which are the main subject of study in this chapter, inference must obey the fundamental
requirement that when one ASKs a question of the knowledge base, the answer should follow
from what has been told (or rather, TELLed) to the knowledge base previously. Later in the
chapter, we will be more precise about the crucial word “follow.” For now, take it to mean
that the inference process should not just make things up as it goes along.

function KB-AGENT(percept) returns an action
   static: KB, a knowledge base
           t, a counter, initially 0, indicating time

   TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
   action ← ASK(KB, MAKE-ACTION-QUERY(t))
   TELL(KB, MAKE-ACTION-SENTENCE(action, t))
   t ← t + 1
   return action

Figure 7.1   A generic knowledge-based agent.
Figure 7.1 shows the outline of a knowledge-based agent program. Like all our agents,
it takes a percept as input and returns an action. The agent maintains a knowledge base, KB,
which may initially contain some background knowledge. Each time the agent program is
called, it does two things. First, it TELLs the knowledge base what it perceives. Second,
it ASKs the knowledge base what action it should perform. In the process of answering
this query, extensive reasoning may be done about the current state of the world, about the
outcomes of possible action sequences, and so on. Once the action is chosen, the agent
records its choice with TELL and executes the action. The second TELL is necessary to let
the knowledge base know that the hypothetical action has actually been executed.
The details of the representation language are hidden inside two functions that imple-
ment the interface between the sensors and actuators and the core representation and reason-
ing system. MAKE-PERCEPT-SENTENCE takes a percept and a time and returns a sentence
asserting that the agent perceived the percept at the given time. MAKE-ACTION-QUERY
takes a time as input and returns a sentence that asks what action should be performed at
that time. The details of the inference mechanisms are hidden inside TELL and ASK. Later
sections will reveal these details.
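
The interface in Figure 7.1 maps almost line for line onto ordinary code. What follows is a minimal, hypothetical Python sketch of the same loop; the KnowledgeBase class, the tuple-based sentence format, and the helper names are placeholders standing in for whatever representation language and inference procedure TELL and ASK ultimately use, not a fixed API.

class KnowledgeBase:
    """A placeholder knowledge base: a set of sentences plus a stub
    inference procedure (real TELL/ASK are the subject of later sections)."""

    def __init__(self, background_knowledge=()):
        self.sentences = set(background_knowledge)

    def tell(self, sentence):
        # Add a new sentence to the knowledge base.
        self.sentences.add(sentence)

    def ask(self, query):
        # A real implementation derives the answer by inference, returning
        # only conclusions that follow from what was previously TELLed.
        raise NotImplementedError("inference is described in Sections 7.5-7.6")

def make_percept_sentence(percept, t):
    # Sentence asserting that the agent perceived `percept` at time `t`.
    return ("Percept", percept, t)

def make_action_query(t):
    # Sentence asking which action should be performed at time `t`.
    return ("Action?", t)

def make_action_sentence(action, t):
    # Sentence recording that `action` was actually executed at time `t`.
    return ("Action", action, t)

class KBAgent:
    def __init__(self, kb):
        self.kb = kb
        self.t = 0  # time counter, initially 0

    def __call__(self, percept):
        self.kb.tell(make_percept_sentence(percept, self.t))
        action = self.kb.ask(make_action_query(self.t))
        self.kb.tell(make_action_sentence(action, self.t))
        self.t += 1
        return action

A concrete agent would override ask with a real inference procedure; the loop itself never changes.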
The agent in Figure 7.1 appears quite similar to the agents with internal state described
in Chapter 2. Because of the definitions of TELL and ASK, however, the knowledge-based
agent is not an arbitrary program for calculating actions. It is amenable to a description at the
knowledge level, where we need specify only what the agent knows and what its goals are,
in order to fix its behavior. For example, an automated taxi might have the goal of delivering
a passenger to Marin County and might know that it is in San Francisco and that the Golden
Gate Bridge is the only link between the two locations. Then we can expect it to cross the
Golden Gate Bridge because it knows that that will achieve its goal. Notice that this analysis
is independent of how the taxi works at the implementation level. It doesn’t matter whether
its geographical knowledge is implemented as linked lists or pixel maps, or whether it reasons
by manipulating strings of symbols stored in registers or by propagating noisy signals in a
network of neurons.
As we mentioned in the introduction to the chapter, one can build a knowledge-based
agent simply by TELLing it what it needs to know. The agent’s initial program, before
it starts to receive percepts, is built by adding one by one the sentences that represent the
designer’s knowledge of the environment. Designing the representation language to make it
easy to express this knowledge in the form of sentences simplifies the construction problem
enormously. This is called the declarative approach to system building. In contrast, the
procedural approach encodes desired behaviors directly as program code; minimizing the
role of explicit representation and reasoning can result in a much more efficient system. We
will see agents of both kinds in Section 7.7. In the 1970s and 1980s, advocates of the two
approaches engaged in heated debates. We now understand that a successful agent must
combine both declarative and procedural elements in its design.
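
To make the contrast concrete, here is a small, hypothetical Python sketch; the condition–action rule format and all names are illustrative only. The procedural agent compiles one behavior directly into code, while the declarative agent is TELLed the same knowledge as an explicit sentence and recovers the behavior by a (here deliberately trivial) inference step.

# Procedural: the desired behavior is encoded directly as program code.
def procedural_agent(percept):
    if percept == "Glitter":
        return "Grab"
    return "Forward"

# Declarative: the same knowledge is an explicit, inspectable sentence,
# and the agent derives its action by querying the knowledge base.
KB = [("Glitter", "Grab")]  # an illustrative condition-action sentence

def declarative_agent(percept):
    for condition, action in KB:  # a trivial stand-in for inference
        if percept == condition:
            return action
    return "Forward"

The procedural version executes faster; the declarative version can be extended, combined with other knowledge, and inspected, which is exactly the flexibility argued for above.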
In addition to TELLing it what it needs to know, we can provide a knowledge-based
agent with mechanisms that allow it to learn for itself. These mechanisms, which are dis-
cussed in Chapter 18, create general knowledge about the environment out of a series of
percepts. This knowledge can be incorporated into the agent’s knowledge base and used for
decision making. In this way, the agent can be fully autonomous.
All these capabilities—representation, reasoning, and learning—rest on the centuries-
long development of the theory and technology of logic. Before explaining that theory and
technology, however, we will create a simple world with which to illustrate them.
7.2 THE WUMPUS WORLD
The wumpus world is a cave consisting of rooms connected by passageways. Lurking some-
where in the cave is the wumpus, a beast that eats anyone who enters its room. The wumpus
can be shot by an agent, but the agent has only one arrow. Some rooms contain bottomless
pits that will trap anyone who wanders into these rooms (except for the wumpus, which is
too big to fall in). The only mitigating feature of living in this environment is the possibility
of finding a heap of gold. Although the wumpus world is rather tame by modern computer
game standards, it makes an excellent testbed environment for intelligent agents. Michael
Genesereth was the first to suggest this.
A sample wumpus world is shown in Figure 7.2. The precise definition of the task
environment is given, as suggested in Chapter 2, by the PEAS description:
♦ Performance measure: +1000 for picking up the gold, –1000 for falling into a pit or
being eaten by the wumpus, –1 for each action taken and –10 for using up the arrow.
♦ Environment: A 4×4 grid of rooms. The agent always starts in the square labeled
[1,1], facing to the right. The locations of the gold and the wumpus are chosen ran-
domly, with a uniform distribution, from the squares other than the start square. In
addition, each square other than the start can be a pit, with probability 0.2.
♦ Actuators: The agent can move forward, turn left by 90°, or turn right by 90°. The
agent dies a miserable death if it enters a square containing a pit or a live wumpus. (It
is safe, albeit smelly, to enter a square with a dead wumpus.) Moving forward has no
effect if there is a wall in front of the agent. The action Grab can be used to pick up an
object that is in the same square as the agent. The action Shoot can be used to fire an
arrow in a straight line in the direction the agent is facing. The arrow continues until it
either hits (and hence kills) the wumpus or hits a wall. The agent has only one arrow,
so only the first Shoot action has any effect.
♦ Sensors: The agent has five sensors, each of which gives a single bit of information:
– In the square containing the wumpus and in the directly (not diagonally) adjacent
squares the agent will perceive a stench.
– In the squares directly adjacent to a pit, the agent will perceive a breeze.
– In the square where the gold is, the agent will perceive a glitter.
– When an agent walks into a wall, it will perceive a bump.
– When the wumpus is killed, it emits a woeful scream that can be perceived any-
where in the cave.
The percepts will be given to the agent in the form of a list of five symbols; for example,
if there is a stench and a breeze, but no glitter, bump, or scream, the agent will receive
the percept [Stench, Breeze, None, None, None].
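
As a concrete reading of this PEAS description, here is a minimal, hypothetical Python sketch of how such an environment could be generated and how its percepts could be computed. The coordinate scheme, names, and data structures are illustrative choices; only the numbers (the 4×4 grid, the pit probability of 0.2, the start at [1,1]) come from the description above.

import random

SIZE = 4
START = (1, 1)  # (column, row); [1,1] is the bottom-left square

def adjacent(square):
    # Directly (not diagonally) adjacent squares inside the grid.
    x, y = square
    neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(nx, ny) for nx, ny in neighbors
            if 1 <= nx <= SIZE and 1 <= ny <= SIZE]

def random_world(pit_prob=0.2):
    squares = [(x, y) for x in range(1, SIZE + 1) for y in range(1, SIZE + 1)]
    non_start = [s for s in squares if s != START]
    return {
        "wumpus": random.choice(non_start),  # uniform over non-start squares
        "gold": random.choice(non_start),
        "pits": {s for s in non_start if random.random() < pit_prob},
    }

def percept(world, square, bumped=False, wumpus_died=False):
    # The five-element percept received in a given square.
    stench = square == world["wumpus"] or world["wumpus"] in adjacent(square)
    breeze = any(p in adjacent(square) for p in world["pits"])
    glitter = square == world["gold"]
    return ["Stench" if stench else None,
            "Breeze" if breeze else None,
            "Glitter" if glitter else None,
            "Bump" if bumped else None,
            "Scream" if wumpus_died else None]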
Exercise 7.1 asks you to define the wumpus environment along the various dimensions given
in Chapter 2. The principal difficulty for the agent is its initial ignorance of the configuration
of the environment; overcoming this ignorance seems to require logical reasoning. In most
instances of the wumpus world, it is possible for the agent to retrieve the gold safely. Occa-
sionally, the agent must choose between going home empty-handed and risking death to find
the gold. About 21% of the environments are utterly unfair, because the gold is in a pit or
surrounded by pits.
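
The 21% figure is easy to check empirically. Assuming the random_world and adjacent helpers from the sketch above, a rough Monte Carlo estimate of the fraction of unfair worlds (gold in a pit, or every square adjacent to the gold a pit) might look like this:

def estimate_unfair(trials=100_000):
    # Fraction of worlds in which the gold is in a pit or surrounded by pits.
    unfair = 0
    for _ in range(trials):
        w = random_world()
        g = w["gold"]
        if g in w["pits"] or all(n in w["pits"] for n in adjacent(g)):
            unfair += 1
    return unfair / trials  # comes out near 0.21

(Squares next to the start can never be surrounded, since [1,1] itself is never a pit.)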
Let us watch a knowledge-based wumpus agent exploring the environment shown in
Figure 7.2. The agent’s initial knowledge base contains the rules of the environment, as listed
above.
Figure 7.2   A typical wumpus world. The agent is in the bottom left corner. (The original figure shows the 4×4 grid with the START square at [1,1], the gold, three squares marked PIT, a breeze in each square adjacent to a pit, and a stench in each square adjacent to the wumpus.)