2 - Knowledge Representation and Inference

Introduction

Epistemology – the philosophical study of knowledge (Aristotle, Plato, Kant, Locke):
• A priori knowledge – universally true and cannot be denied.
• A posteriori knowledge – truth or falsity verifiable by the senses.
• What is knowledge? Try to define it.
– The study of knowledge is called epistemology.
• Knowledge can be further classified as
– Procedural knowledge
• Knowing how to do something (e.g. knowing how to start a car).
– Declarative knowledge
• Knowing that something is true or false.
– Tacit knowledge
– e.g. I know how to put the key in the ignition, but do I know how my
hand/arm/muscles/tendons/nerves are all coordinated?
Introduction
• AI suggests that intelligent behaviour can be
achieved through the manipulation of symbol
structures.
– Symbols: Fire-Engine, Red
– Symbol structure: Red(Fire-Engine)
• You could use this knowledge to infer new facts.
– Galway has a new fire-engine.
– It must be red!
• We represent facts (on the computer) using data
structures and we write program code to reason
with them.
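A minimal sketch of that idea (an illustration added here, not from the original notes): the fact Red(Fire-Engine) is held in an ordinary Python data structure and a few lines of code draw the inference about Galway's new fire-engine. The names facts, instances and infer_colour are invented for this example.

    # Facts stored as (property, class) pairs; instances stored as (individual, class) pairs.
    facts = {("red", "fire-engine")}                      # Red(Fire-Engine)
    instances = {("galways-new-engine", "fire-engine")}   # Galway has a new fire-engine

    def infer_colour(thing):
        # If 'thing' is an instance of a class known to be red, conclude it is red too.
        for individual, cls in instances:
            if individual == thing and ("red", cls) in facts:
                return thing + " must be red!"
        return "no conclusion about " + thing

    print(infer_colour("galways-new-engine"))   # -> galways-new-engine must be red!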
Knowledge Representation Languages
• High-level knowledge representation
languages have been developed to make
knowledge representation easier.
• We’ll look at a few of these:
– Semantic nets
– Frames
– Predicate logic
– Rules
First though…
Requirements for a knowledge representation language
• It must allow you to represent adequately complex
facts in a clear and precise yet natural way, and
in a way that easily allows you to deduce new
facts from your existing knowledge.
– adequately complex facts
• Known as - Representational adequacy
– A level of detail that will produce the results we
require and no more
» if a simple representation language will be
adequate then use a simple language!
– dislikes (everybody, sprouts)
Requirements for Knowledge
Representation Languages
– clear and precise
• Known as - Well defined syntax and semantics
– no room for confusion and no ambiguities.
– Syntax defines the allowable structures which describe how to
make sentences
» “Dislikes – everybody – sprouts” is not allowed
– Semantics defines what it means
» “dislikes (everybody, sprouts)” means everybody dislikes
sprouts, not that sprouts dislike everybody.
– Natural
• The language must not be overly complex or difficult to
interpret.
– deduce new facts
• Known as - Inferential adequacy
– The language must support inference.
The main contenders
• Broadly speaking there are three main approaches to
knowledge representation in AI.
– SEMANTIC NETS/FRAMES
• Easy to use and relatively natural.
– LOGIC ∀X (bird(X) ⇒ flies(X))
• Arguably the most important, with a well-defined syntax and
semantics. Allows inference, but it can be difficult to translate the
real world into logic. Has difficulties with time, uncertainty and
beliefs.
– RULES (IF fire THEN shout help)
• Condition-action rules or production rules (specifying what to do
under certain conditions) within a rule-based system.
Semantic Nets/Frames
Semantic Networks
A simple class hierarchy
• Semantic nets allow you to represent classes (or categories) of
objects and relationships between objects, and to draw
simple inferences based on this knowledge.
• The net is made up of nodes and arcs/links connecting the
nodes. The nodes express physical objects, concepts or situations
(e.g. the concept/class “animal”); the links express relations
between concepts. Without relationships knowledge is simply a
collection of unrelated facts – with them other knowledge can be
inferred (Nellie has a head!).
• The example net: reptile –subclass→ animal; mammal –subclass→ animal;
elephant –subclass→ mammal; mammal –has-part→ head;
elephant –colour→ grey; elephant –size→ large;
clyde –instance→ elephant; nellie –instance→ elephant;
nellie –likes→ apples.
• Conclude: Clyde and Nellie both have heads.
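A net like this maps directly onto a small data structure. The sketch below is an added illustration (not part of the original notes): each link is stored as a (node, relation, node) triple and the lookup follows instance/subclass links, which is exactly the inference that Clyde and Nellie both have heads.

    # Minimal sketch of the elephant semantic net: each link is (node, relation, node).
    links = [
        ("reptile", "subclass", "animal"), ("mammal", "subclass", "animal"),
        ("elephant", "subclass", "mammal"), ("mammal", "has-part", "head"),
        ("elephant", "colour", "grey"), ("elephant", "size", "large"),
        ("clyde", "instance", "elephant"), ("nellie", "instance", "elephant"),
        ("nellie", "likes", "apples"),
    ]

    def value(node, relation):
        # Look up a relation on a node, inheriting through instance/subclass links.
        for n, r, target in links:
            if n == node and r == relation:
                return target
        for n, r, target in links:                 # otherwise climb the hierarchy
            if n == node and r in ("instance", "subclass"):
                return value(target, relation)
        return None

    print(value("clyde", "has-part"))   # -> head
    print(value("nellie", "has-part"))  # -> head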
Semantic Networks
• Originally used to represent the meaning of English words.
• The links represent the relationships.
– The most important relations are:
• subclass – known as an A-KIND-OF (AKO) relationship
• instance – known as an IS-A relationship
– Other relations are also allowed
• to represent properties of objects and categories of objects.
• Semantic networks normally allow efficient inheritance-based
inferences using special-purpose algorithms.
• Semantic nets in maths are labelled, directed graphs.
• Sometimes known as associative nets (i.e. nodes are
associated or related to other nodes).
Semantic Networks
• Have a go! – Class problems
– Represent each of the following useful pieces
of knowledge as a semantic net.
• (a) “Floyd is a small hippo who lives in Dublin zoo.
Like all hippos he eats grass and likes swimming”
• (b) “The aorta is a particular kind of artery which
has a diameter of 2.5cm. An artery is a kind of blood
vessel. An artery always has a muscular wall, and
generally has a diameter of 0.4cm. A vein is a kind
of blood vessel, but has a fibrous wall. Blood
vessels all have tubular form and contain blood.”
(a) FLOYD –IS-A→ HIPPO; HIPPO –eats→ grass; HIPPO –likes→ swimming;
FLOYD –size→ small; FLOYD –lives→ Dublin zoo.

(b) Blood Vessel –form→ tubular; Blood Vessel –contains→ blood;
Artery –AKO→ Blood Vessel; Vein –AKO→ Blood Vessel;
Artery –has-a→ muscular wall; Artery –diameter→ 0.4cm;
Vein –has-a→ fibrous wall;
Aorta –IS-A→ Artery; Aorta –diameter→ 2.5cm.
Semantic Networks
• To try and properly define the semantics of a semantic network (what
it means), set theory is often employed.
• Semantic networks allow us to represent knowledge about objects and
relationships between objects in an intuitive way.
– However the sort of inferences that can be supported is fairly restrictive
(just inheritance of properties).
– Also the lack of any standards for link names is problematic.
– A semantic net cannot define knowledge as well as logic can.
– There is no way to encode heuristic knowledge (rules of thumb) into
semantic nets.
• It remains a good choice as a knowledge representation language for
some AI problems – particularly for showing binary relationships.
Frames
• Frames are a variant of semantic networks
– a popular way to represent facts in an expert system.
• The difference is basically in the level of detail of a node.
– In semantic nets the node has a name.
• Properties of a node are shown using other nodes and some
sort of relationship linking them.
– In a frame the node itself has a structure.
• This means that the node can contain values or indeed other
frames.
• Semantic nets (generally) represent knowledge about a broad area.
• Frames represent (related) knowledge about a narrow subject.
• A frame would be a good choice for describing something such as a
car, a computer, etc.
Frames
• A frame is basically a group of slots (sometimes called properties) and
slot values (or fillers) that define a stereotypical object.
• Three example frames (property and slot terminology is
interchangeable; a slot value may also be a frame):

Mammal:
  subclass: Animal
  has_part: head

Elephant:
  subclass: Mammal
  colour: grey
  size: large

Nellie:
  instance: Elephant
  likes: apples

• Infer using inheritance:
Nellie is large and grey, likes apples and has a head.
Frames
• Some of the terminology from frames has
been adopted by object-oriented
technology.
• It is straightforward to translate between
semantic network and frame-based
representations:
– Class and instance nodes ---- Objects
– Links ---- Slots
– Node at end of link ---- Slot value
Semantic Net and Frame
The same knowledge expressed both ways. As a semantic net (as before):
reptile –subclass→ animal; mammal –subclass→ animal; elephant –subclass→ mammal;
mammal –has-part→ head; elephant –colour→ grey; elephant –size→ large;
clyde –instance→ elephant; nellie –instance→ elephant; nellie –likes→ apples.
As the equivalent frames:

Mammal:
  subclass: Animal
  has_part: head

Elephant:
  subclass: Mammal
  colour: grey
  size: large

Nellie:
  instance: Elephant
  likes: apples
Defaults and Multiple Inheritance
• Objects inherit the default property
value unless they have an individual
property value that conflicts with the
inherited one. (* marks a default slot
value, which can be overridden.)

Mammal:
  subclass: Animal
  has_part: head
  *furry: yes

Elephant:
  subclass: Mammal
  colour: grey
  size: large
  *furry: no

Nellie:
  instance: Elephant
  likes: apples

• Multiple inheritance is difficult and
often results in conflicts.
– Nellie could be a circus-animal
who has been specially
“engineered” to be white. Nellie
then is a subclass of Elephant and
Circus-animal.
– Colour? The value for Elephant
or for Circus-animal?
– The system must provide for these conflicts – such that
it will return the right value for colour (white) and the
right value for size (large).
Practical Frames
• The slots and slot values can themselves be frames.
• The slot values could also be procedures, executed when
the value for the slot is required.
– [The system is then described as having a procedural rather than
declarative semantics.]
• Implementation of a simple frame system could be carried
out with the following algorithm [note the recursion]:
– What is the value of object “O’s” slot P?
• value(O, P)
– If slot name P exists for object O then
» return the slot value of P
– Else if object O has a slot called “subclass” or “instance” then
» let “Super” be the value of this slot;
find value(Super, P) and return this value.
– Else fail.
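That recursive lookup translates almost line for line into code. The following sketch is an added illustration rather than a definitive implementation: each frame is a plain Python dictionary of slots, and value() climbs the instance/subclass links exactly as the algorithm above describes.

    # Frames as dictionaries of slot -> slot value; "subclass"/"instance" link to a parent frame.
    frames = {
        "Mammal":   {"subclass": "Animal", "has_part": "head"},
        "Elephant": {"subclass": "Mammal", "colour": "grey", "size": "large"},
        "Nellie":   {"instance": "Elephant", "likes": "apples"},
    }

    def value(obj, slot):
        # Return obj's value for slot, inheriting via 'instance'/'subclass'; None means failure.
        frame = frames.get(obj, {})
        if slot in frame:                       # slot P exists for object O
            return frame[slot]
        for link in ("instance", "subclass"):   # otherwise climb to Super and recurse
            if link in frame:
                return value(frame[link], slot)
        return None                             # else fail

    print(value("Nellie", "likes"))     # -> apples   (own slot)
    print(value("Nellie", "has_part"))  # -> head     (inherited from Mammal via Elephant)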
Frames and Semantic Nets Summary
• Frames and semantic networks provide a
clear and simple way of representing
properties of objects and categories of
objects. A basic inference is available
through inheritance.
• They do not cope well with
– Negation (not A)
– Disjunction (A or B)
– Quantification (for all A and some B)
Frames
• Go on, try it.
– Represent the following as frames:
• (a) “Hippos live in Africa. Hippos are generally
quite large. Floyd is a small hippo who lives in
Dublin zoo. Like all hippos he eats grass and likes
swimming”
• (b) “The aorta is a particular kind of artery which
has a diameter of 2.5cm. An artery is a kind of blood
vessel. An artery always has a muscular wall, and
generally has a diameter of 0.4cm. A vein is a kind
of blood vessel, but has a fibrous wall. Blood
vessels all have tubular form and contain blood.”
Frames
(a)
Hippo:
  *lives: Africa
  *size: large
  likes: swimming
  eats: grass

Floyd:
  is-a: Hippo
  lives: Dublin Zoo
  size: small

(b)
Blood-vessel:
  form: tubular
  contains: blood

Artery:
  ako: Blood-vessel
  wall: muscular
  *diameter: 0.4cm

Vein:
  ako: Blood-vessel
  wall: fibrous

Aorta:
  is-a: Artery
  diameter: 2.5cm
Logic
Logic
• The most important knowledge representation language is
arguably (predicate) logic.
– It allows us to represent fairly complex facts about the world,
– and to derive new facts in a way that guarantees that if the initial
facts were true then so are the conclusions.
• The terms “reasoning”, “inference” and “deduction” are
generally used to cover any process by which conclusions
are reached.
• Logic is a formal system which may be described in terms
of its syntax, its semantics and its proof theory.
• First we will look at the simpler propositional logic
(sometimes called Boolean logic) and then the more useful
predicate logic.
Propositional Logic
• Syntax
– Symbols (P, Q, R, S, T ….) are used to represent facts
about the world.
• “Paul is tall” represented by “P”
• “Paul likes rugby” represented by “R”
[Note – the symbol TallPaul could also have been used]
– These simple facts are called atomic propositions.
– You can build more complex sentences by combining
atomic propositions with logical connectives:
∧ (and), ∨ (or), ¬ (not), ⇒ (implication), ⇔ (equivalence)
• Paul is tall AND Paul likes rugby (P ∧ R)
• Paul is tall OR Paul likes rugby (P ∨ R)
• Paul doesn’t like rugby (¬R)
• If Paul is tall then Paul likes rugby (P ⇒ R)
• If Paul is tall then Paul likes rugby and vice versa (P ⇔ R)
Propositional Logic - Syntax
(P ∨ Q) ⇒ R
– (P ∨ Q) is the premise or antecedent; R is the conclusion or consequent.
• Implications (⇒) are sometimes called conditionals, rules or if-then
statements.
• Equivalence (⇔) is often called the bi-conditional.
• A sentence can be formed by combining simpler sentences with one of
the five logical connectives.
• P ∨ Q
• (P ∧ ¬Q)
• The logical constants True and False are sentences by themselves.
• A propositional symbol such as P or Q is also a sentence.
• Wrapping parentheses around a sentence yields a sentence, e.g. (P ∧ Q).
Propositional Logic - Semantics
• The semantics of propositional logic allows us to state precisely what
sentences like “P ∧ (P ∨ Q)” mean.
• It is defined in terms of what is true in the world. If we know whether P and Q
are true then the semantics of the language will tell us whether sentences like
“P ∧ (P ∨ Q)” are true.
• We can determine the truth or falsity of sentences like these using truth
tables.

X      Y      X ∨ Y
True   True   True
True   False  True
False  True   True
False  False  False

If we have the assertion “raining ∨ sunny” and the truth value of the assertion
is true, working backwards from the truth value of the sentence towards its
constituent parts gives us the meaning of the sentence. Given the precise
semantics of “∨” we see that it can be raining, sunny, or both when the
sentence is true – without this precise semantics we might not have
interpreted it correctly.
Propositional Logic - Semantics
P      Q      ¬P     P ∧ Q   P ∨ Q   P ⇒ Q   P ⇔ Q
False  False  True   False   False   True    True
False  True   True   False   True    True    False
True   False  False  False   True    False   False
True   True   False  True    True    True    True

The truth tables for And, Or and Not are in close accord
with our intuition about the English words.
In some ways the implication connective (⇒) is
the most important and the most confusing. Confusing
because it does not fit our intuitive understanding of “P
implies Q” (or “If P Then Q”).
There does not have to be any relation or causation between P and Q. The following sentence is a true sentence of
propositional logic – although it is a strange “English” sentence.
(5 is an odd number) ⇒ (Tokyo is the capital of Japan)
Also, an implication is true whenever its antecedent (P) is false!
(5 is an even number) ⇒ (Spiddal is bigger than Rome)
is true regardless of whether Spiddal is indeed bigger than Rome. Think of P ⇒ Q as meaning “If P is true then I
am claiming that Q is true. Otherwise I make no claim”.
Propositional Logic – Validity and
Inference
• Truth tables can not only be used to define the connectives but also to
test for valid sentences.
• We make a row for each of the possible combinations of truth values
for the proposition symbols in the sentence. For each row we calculate
the truth value of the entire sentence. If the sentence is true in every
row then the sentence is valid. Consider ((P ∨ H) ∧ ¬H) ⇒ P

P   H   P ∨ H   (P ∨ H) ∧ ¬H   ((P ∨ H) ∧ ¬H) ⇒ P
F   F   F       F              T
F   T   T       F              T
T   F   T       T              T
T   T   T       F              T
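This mechanical check is easy to automate. The sketch below is an added illustration (not from the original notes): it enumerates every assignment to P and H and confirms that ((P ∨ H) ∧ ¬H) ⇒ P is true in every row, coding the implication a ⇒ b as (not a) or b.

    from itertools import product

    def implies(a, b):
        # Material implication: a => b is false only when a is true and b is false.
        return (not a) or b

    def is_valid(sentence, symbols):
        # A sentence is valid if it is true for every assignment of truth values.
        return all(sentence(*values) for values in product([True, False], repeat=symbols))

    # ((P or H) and not H) => P
    sentence = lambda p, h: implies((p or h) and (not h), p)
    print(is_valid(sentence, symbols=2))   # -> True, so the sentence is valid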
Propositional Logic – Proof Theory
• The process by which the soundness of an inference (such as
((P ∨ H) ∧ ¬H) ⇒ P) is established through truth tables can be extended to
entire classes of inference. There are certain patterns of inference that occur
over and over again – the soundness of such a pattern can be shown once and
for all and captured in an inference rule which is universally valid.
• Once a rule is built it can be used to make inferences without going
through the tedious process of building truth tables.
• There are many rules of inference; here are two.

Modus Ponens: given A and A ⇒ B, infer B.
– If A ⇒ B is true and A is true then B is necessarily true.
– If it’s raining then Paul puts on his coat. It’s raining, so Paul has his
coat on.

Resolution (somewhat like Prolog): given A ∨ B and ¬B ∨ C, infer A ∨ C.
– Since B cannot be both true and false, A or C must be true.
– Given “sunny ∨ raining” and “¬raining ∨ carryumbrella” we can
conclude “sunny ∨ carryumbrella”.
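To make the resolution step concrete, here is a small added sketch (an illustration, not from the notes): each clause is a set of literals with negation written as a leading '~', and two clauses are resolved on a complementary pair of literals.

    def negate(literal):
        # '~raining' <-> 'raining'
        return literal[1:] if literal.startswith("~") else "~" + literal

    def resolve(clause1, clause2):
        # Return the resolvents of two clauses (sets of literals), one per complementary pair.
        resolvents = []
        for lit in clause1:
            if negate(lit) in clause2:
                resolvents.append((clause1 - {lit}) | (clause2 - {negate(lit)}))
        return resolvents

    # sunny v raining  and  ~raining v carryumbrella  resolve to  sunny v carryumbrella
    print(resolve({"sunny", "raining"}, {"~raining", "carryumbrella"}))
    # -> [{'sunny', 'carryumbrella'}]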
Predicate Logic
• Syntax
– One problem with propositional logic is that it is not
possible to write general statements.
• Paul likes everything
– Predicate calculus allows general statements.
– Sentences in predicate calculus are built up from atomic
sentences. This allows facts to be expressed as a predicate
name and some arguments.
• likes(paul, rugby)
– So rather than having a single proposition “P” to store
the fact that “paul likes rugby” we have a predicate
logic sentence “likes(paul, rugby)” where we can ‘get
at’ the individual entities ‘paul’ and ‘rugby’
Predicate logic - Syntax
• Atomic sentences:
likes(paul, rugby)
likes(paul, X)
friends(father(paul), jim)
• The arguments in an atomic sentence may be any term, such as;
• Constant symbols [lowercase, such as “paul”]
• Variable symbols [uppercase, such as “X”]
• Function expressions [such as father(paul), which are composed of a functor
followed by arguments, which themselves can be any term]
• Sentences can be constructed by combining atomic sentences with
logical connectives.
• likes(paul, rugby) ∧ likes(paul, soccer)
• likes(paul, movies) ∧ ¬likes(paul, students)
• Sentences can also be formed using the quantifiers ∃ (existential
quantifier - there exists) and ∀ (universal quantifier - for all). [Note: all
variables should be quantified]
• ∃X (likes(paul, X) ∧ contact_sport(X)) i.e. there exists a contact sport Paul likes
• ∃X ∃Y (person(X) ∧ likes(X, Y)) i.e. someone likes something
Predicate Logic - Semantics
• The semantics of predicate logic is defined in terms of the truth values of
sentences (as in propositional logic). As in propositional logic we can use the
truth table to find the truth value of sentences involving logical connectives
from the truth value of each part of the sentence.
• However we also need to deal with predicates, arguments and quantifiers. For
simplicity assume that you can assign a truth value to a sentence like
likes(paul, rugby).
• The meaning of ∀ can be defined in terms of whether some sentence is true for
all objects in the domain of interest. ∀X S means that for all X in the domain, S
is true.
• ∀X likes(X, movies) could be verified if we could go to the domain under
consideration (such as this class) and verify likes(joe, movies), likes(jim,
movies) etc.
• The meaning of ∃ can likewise be defined in terms of whether some sentence is
true for at least one of the objects in the domain of interest.
• So if likes(jim, movies) was the only sentence we could show to be true then;
– ∀X likes(X, movies) would be false
– ∃X likes(X, movies) would be true.
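A quick way to see this semantics in action (an added illustration with a made-up three-person domain) is to check the two quantifiers over a finite domain:

    # A tiny finite domain and the facts that hold in it (only jim likes movies).
    domain = ["joe", "jim", "mary"]
    likes_movies = {"jim"}

    forall = all(x in likes_movies for x in domain)   # "for all X, likes(X, movies)"
    exists = any(x in likes_movies for x in domain)   # "there exists X, likes(X, movies)"
    print(forall, exists)   # -> False True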
Predicate Logic
• Inference rules in predicate logic are similar
to those in propositional logic.
• Modus Ponens and Resolution still apply
but in a slightly modified format to deal
with variables and quantifiers.
• All this said, the average AI programmer
doesn’t have to know predicate logic
semantics or proof procedures.
– However they should know how to represent
things in predicate logic and what predicate
logic expressions mean.
Predicate Logic
• Examples of English sentences converted to predicate logic
– John loves Mary
• Use the verb as the predicate and the nouns as arguments
– loves(john, mary)
– Mary is tall
• Make a one-argument predicate
– tall(mary)
– If the car is moving then wear the seat belt
• Generally translate If X then Y into X ⇒ Y
– moving(car) ⇒ wear(seat_belt)
– If Paul is hungry then he eats fruit
– hungry(paul) ⇒ eats(paul, fruit)
– All students study
– ∀X (student(X) ⇒ study(X))
– There is something small and slimy on the floor
– ∃X (small(X) ∧ slimy(X) ∧ onfloor(X))
Logic and Prolog
• The most widely used logic programming language is Prolog. It is based on
predicate logic.
• Each statement in a Prolog program corresponds to a sentence in predicate logic.
The notations are different, however, and Prolog only allows Horn clauses
[only one fact allowed on the right-hand side of an implication sign]. This allows
a simpler and more efficient goal-directed proof procedure to be executed by
Prolog.
• The Prolog system views your (Prolog) computer program as a set of statements
in logic. You then ask the system to prove a statement given the statements in
your program.
Predicate Logic                                   Prolog program
father(jim, fred)                                 father(jim, fred).
father(joe, jim)                                  father(joe, jim).
∀XYZ ((father(X,Z) ∧ father(Z,Y)) ⇒               grandfather(X,Y) :-
      grandfather(X,Y))                               father(X,Z), father(Z,Y).

If Prolog was asked “grandfather(joe, fred)” it would respond “true”.
Logic
• Logic is not only used as a knowledge
representation language but also:
– To communicate AI theories within the AI
community
– As a basis for AI programming languages
– To give the meaning of natural language
sentences in a natural language understanding
system
– To define the semantics of other, simpler,
representation languages (such as frames).
Logic Summary
• Predicate logic is a powerful way to represent and reason
with knowledge.
• Negation, disjunction and quantification are easily
represented using logic (which is not true for frames)
• The available proof procedures and inference rules mean that
a very wide range of inferences is possible (contrast this
with frames, where only simple inheritance-based inference
is possible).
• Logic has problems dealing with uncertainty, defaults
and beliefs. Where new logics have been developed to deal
with these issues they can become overly complex.
Logic - Questions
• Give it a go.
– Convert the following to the language of
predicate logic.
• Every apple is either green or yellow
• No apple is blue
• If an apple is green then it is tasty
• Every man likes a tasty apple
• Some people like garlic
• Fido is a doberman and a good dog.
• All basketball players are tall
Logic - Questions
Convert the following to the language of predicate logic.
• Every apple is either green or yellow
∀X (apple(X) ⇒ green(X) ∨ yellow(X))
• No apple is blue
∀X (apple(X) ⇒ ¬blue(X))
• If an apple is green then it is tasty
∀X (apple(X) ∧ green(X) ⇒ tasty(X))
• Every man likes a tasty apple
∀X ∃Y (man(X) ∧ tastyApple(Y) ⇒ likes(X, Y))
• Some people like garlic
∃X (person(X) ∧ likes(X, garlic))
• Fido is a doberman and a good dog.
doberman(fido) ∧ gooddog(fido)
• All basketball players are tall
∀X (basketballPlayer(X) ⇒ tall(X))
Rule Based Systems
Rule-Based Systems
• Instead of representing knowledge in a relatively declarative, static
way (as a set of things that are true), rule-based systems represent
knowledge as a set of rules for different situations that
– tell you what to do
– or what you can conclude.
• A rule-based system has three parts:
– a control scheme (the interpreter, which controls the application of the rules)
– condition-action rules, e.g.
IF alarm_beeps THEN ADD smoky
IF hot AND smoky THEN ADD fire
– a database of facts (normally representing things that are true), e.g.
alarm_beeps
hot
Note: the IF-THEN rules of a rule-based system are independent chunks of knowledge, to be invoked, as
required, by the interpreter; in C or Java IF-THEN statements are treated as a sequence of instructions. The rules
are like implications in logic, e.g. alarm_beeps ⇒ smoky.
Rule-Based Systems
• The interpreter can use one of the following methodologies (reasoning
mechanisms) to connect a problem with a solution. This is called inference.
– Forward chaining (CLIPS and Jess)
• Start with some initial facts and the interpreter keeps using rules to
draw new conclusions (or take certain actions) given those facts.
• Useful when you know the facts but don’t know what conclusions can be
drawn (or what conclusions are likely).
– “The ground is wet. What might be happening?”…
– Backward chaining (Prolog)
• You start with some hypothesis (or goal) you are trying to prove and
keep looking for rules that allow you to conclude that hypothesis –
possibly setting new subgoals to prove as you go along.
• Focus on a hypothesis.
– “Should I wear a raincoat?”
• Chaining gets its name from chaining inferences (modus ponens) together to
form a chain of inference.
– elephant(X) ⇒ mammal(X)
– mammal(X) ⇒ animal(X)
Forward and Backward chaining
[Diagram: forward chaining works BOTTOM-UP from the given initial facts
(A, B, C, D, E, F, G), through rules and intermediate facts, up to the
conclusions (which could be new facts, output, etc.). Backward chaining works
TOP-DOWN from an initial hypothesis, through rules and intermediate
hypotheses (sub-goals, some joined by AND), down to the evidence
(A, B, C, D, E); the original hypothesis may be proved by the evidence.]
Forward Chaining and Backward
Chaining comparison
Forward Chaining                               | Backward Chaining
Planning, monitoring and control               | Diagnosis
Present to future                              | Present to past
Antecedent to consequent                       | Consequent to antecedent
Data-driven, bottom-up reasoning               | Goal-driven, top-down reasoning
Works forward to find what solutions           | Works backwards to find facts that
follow from the facts                          | support the hypothesis
Good for breadth-first search                  | Good for depth-first search
Antecedents determine where the search goes    | Consequents determine the search
Explanations are not easily facilitated        | Explanation is facilitated
Forward chaining rule-based systems
• Facts are held in working memory, which is continually updated as rules
are invoked.
• Rules represent possible actions to take when specified facts occur in the
working memory.
• The interpreter controls the application of the rules.
[Diagram: Working memory (stores the current facts – positive literals, no
variables) ↔ Interpreter (controls the application of the rules) ↔
Condition-action rules (IF ... THEN ADD ..., IF ... THEN DELETE ...,
IF ... THEN PRINT ...), whose actions add new facts, delete facts or print
a message.]
Forward chaining rule-based systems
• The interpreter’s cycle of activity is
sometimes known as the recognise-act
cycle.
– Fire a rule:
• Find which rules’ conditions (IF parts) hold given the
current state of working memory (match phase)
• Select one rule (conflict resolution phase)
• Perform the action in the conclusion of the rule (THEN part),
which may modify working memory (act phase)
– A new working memory results
– Repeat (until no rule fires or a Halt is encountered)
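A bare-bones version of that recognise-act loop might look like the following added sketch, which uses the alarm example from the next slides; the rule format and the "first applicable rule wins" conflict resolution are simplifying assumptions, not part of the original notes.

    # Each rule is (name, set of condition facts, fact to ADD when it fires).
    rules = [
        ("R1", {"hot", "smoky"}, "fire"),
        ("R2", {"alarm_beeps"}, "smoky"),
        ("R3", {"fire"}, "switch_on_sprinklers"),
    ]
    working_memory = {"alarm_beeps", "hot"}

    while True:
        # Match phase: rules whose conditions hold and whose conclusion is not already known.
        applicable = [r for r in rules if r[1] <= working_memory and r[2] not in working_memory]
        if not applicable:
            break                               # no rule fires -> stop
        name, _, new_fact = applicable[0]       # conflict resolution: naively take the first rule
        working_memory.add(new_fact)            # act phase: ADD the new fact
        print(name, "fired, added", new_fact)

    print(working_memory)  # -> includes smoky, fire, switch_on_sprinklers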
Forward chaining rule-based systems – Example
Condition-action rules:
R1: IF hot AND smoky THEN ADD fire
R2: IF alarm_beeps THEN ADD smoky
R3: IF fire THEN ADD switch_on_sprinklers

[Diagram: the interpreter repeatedly matches these rules against working memory.]
Working memory initially holds F1: alarm_beeps, F2: hot.
R2 fires and adds F3: smoky.
Working memory now holds F1: alarm_beeps, F2: hot, F3: smoky.
The conditions of R1 (hot AND smoky) now hold, so R1 fires and adds F4: fire.
Working memory now holds F1: alarm_beeps, F2: hot, F3: smoky, F4: fire.
R3 fires and the sprinklers are switched on.

The example illustrates that the order in which rules fire depends on what is in working memory,
and not on the sequence of the rules.
A problem arises when more than one rule can fire based on what is in working memory:
F1: alarm        R1: IF alarm THEN fight_fire
F2: evacuate     R2: IF alarm THEN evacuate
It is very important which rule is chosen to fire when there is a choice!
Forward chaining
• Conflict resolution strategy [depth first (the CLIPS default) vs breadth first]
– A forward chaining system will have some conflict resolution strategy to decide which rule to
fire. Common strategies are:
• Prefer to fire rules that involve facts that have been recently added to working memory.
• Prefer to fire rules that have more specific conditions (e.g. IF a AND b AND c THEN d
instead of IF a THEN z).
• Allow the user to prioritise rules.
• Fire all applicable rules at once.
• Reason maintenance
– Some sophisticated techniques have been developed to update working memory
to remove facts whose justification for being there has become invalidated. (The
facts that caused the rule to fire to add this fact no longer hold => no justification
for this fact to be in the working memory.)
• Pattern matching
– If we have a more complex fact, temperature(kitchen, hot), and we want to match it
against a general rule, IF temperature(R, hot) THEN alarm, then we employ
pattern matching (matching the fact temperature(kitchen, hot) against the pattern
temperature(R, hot)), which greatly increases the flexibility of our
rules.
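As a rough illustration of that kind of matching (an added sketch, not CLIPS syntax), a pattern containing a variable such as R can be matched against a fact by collecting variable bindings:

    def match(pattern, fact):
        # Match e.g. ('temperature', 'R', 'hot') against ('temperature', 'kitchen', 'hot').
        # Uppercase pattern elements are treated as variables; returns bindings or None.
        if len(pattern) != len(fact):
            return None
        bindings = {}
        for p, f in zip(pattern, fact):
            if p[0].isupper():          # a variable: bind it (or check an existing binding)
                if bindings.get(p, f) != f:
                    return None
                bindings[p] = f
            elif p != f:                # a constant: must match exactly
                return None
        return bindings

    print(match(("temperature", "R", "hot"), ("temperature", "kitchen", "hot")))
    # -> {'R': 'kitchen'}, so IF temperature(R, hot) THEN alarm applies with R = kitchen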
Forward chaining – Expert systems
• CLIPS
– One expert system tool based on forward chaining is
CLIPS (C Language Integrated Production System).
– It allows a wide variety (7) of conflict resolution
strategies to be selected, and has limited facilities for reason
maintenance.
• Jess
– An expert system shell (essentially implementing CLIPS
in Java). Jess supports the development of rule-based
expert systems. Jess v6.0 is compatible with Java 1.2
and in particular the JDK 1.3.
Backwards chaining
• Suppose you have a hypothesis to test; a
forward chaining system may not be the most
efficient way of determining the truth of that
hypothesis.
• The forward chaining system would keep on
chaining until no more rules applied or until your
hypothesis was added to the working memory.
• Ultimately a lot of unnecessary conclusions
would be added to the working memory.
Backwards chaining
– R1: IF smoky AND hot THEN ADD fire
– R2: IF alarm_beeps THEN ADD smoky
– R3: IF alarm_beeps THEN ADD ear_plugs
– R4: IF fire THEN ADD switch_on_sprinklers
– R5: IF smoky THEN ADD poor_visibility
– F1: alarm_beeps
– F2: hot
• In a forward chaining system the following facts would be added:
• smoky, fire, ear_plugs, switch_on_sprinklers, poor_visibility
– This would be overkill if we were simply interested in knowing whether we should
switch on the sprinklers.
• Backward chaining focuses on the hypothesis in question.
– You start with a goal you want to prove
• switch_on_sprinklers
– The system checks if the goal matches any of the given facts – if it does then the
goal succeeds. If not, the system looks at the rules’ actions (conclusions) and
sees if any of those match the goal.
– If a conclusion matches the goal the system then tries to prove the
preconditions of the rule (in effect new sub-goals have been defined).
– Backward chaining systems keep track of the goals they are trying to prove (they
do not need to update a working memory).
Backward chaining - Example
(Rules and facts as before: R1: IF smoky AND hot THEN ADD fire;
R2: IF alarm_beeps THEN ADD smoky; R3: IF alarm_beeps THEN ADD ear_plugs;
R4: IF fire THEN ADD switch_on_sprinklers; R5: IF smoky THEN ADD poor_visibility;
F1: alarm_beeps; F2: hot.)
• The initial goal is
– G1: switch_on_sprinklers
• Not in the initial facts
• Matches the conclusion of R4 -> new goal G2
– G2: fire
• Not in the facts
• Matches the conclusion of R1 -> new goals G3, G4
– G3: smoky, G4: hot
• G3 matches the conclusion of R2 -> new goal G5
– G5: alarm_beeps
• G5 matches fact F1
• G4 matches fact F2
• -> G3 satisfied
• -> G2 satisfied
• -> G1 satisfied !!!
[Goal tree: switch_on_sprinklers (G1) –R4→ fire (G2) –R1→ smoky (G3) AND hot (G4);
smoky –R2→ alarm_beeps (G5); alarm_beeps matches F1, hot matches F2.]
Backward chaining - Implementation
• A backward chaining system may be implemented using a stack to record the
goals that are still to be satisfied. You repeatedly pop a goal off the stack and
try to prove it. If it is in the initial set of facts it is proved. If it matches a
rule’s conclusion then the rule’s preconditions are pushed onto the stack as new
goals.
• The overall goal succeeds if all the sub-goals are removed from the stack.
• If more than one rule’s conclusion could be matched against the goal then
search techniques must be employed.
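A minimal sketch of that stack-based loop (an added illustration that reuses the alarm rules above and ignores the search needed when several rules match the same goal):

    # Rules as (conclusion, list of preconditions); facts as a set.
    rules = [
        ("fire", ["smoky", "hot"]),
        ("smoky", ["alarm_beeps"]),
        ("ear_plugs", ["alarm_beeps"]),
        ("switch_on_sprinklers", ["fire"]),
        ("poor_visibility", ["smoky"]),
    ]
    facts = {"alarm_beeps", "hot"}

    def prove(hypothesis):
        # Pop goals off a stack; a goal is proved by a fact or by proving a rule's preconditions.
        stack = [hypothesis]
        while stack:
            goal = stack.pop()
            if goal in facts:
                continue                         # goal satisfied directly by a fact
            for conclusion, preconditions in rules:
                if conclusion == goal:
                    stack.extend(preconditions)   # push new sub-goals
                    break
            else:
                return False                      # no fact and no rule concludes this goal
        return True

    print(prove("switch_on_sprinklers"))   # -> True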
[Stack diagram: sub-goals G5, G4, G3, G2 stacked while trying to prove G1.]
Note: Prolog uses backward chaining (with
depth-first search) to try and prove
things. This makes it a good choice
for prototyping simple expert
systems.
Summary
• Knowledge representation languages provide high-level representation
formalisms to represent the knowledge required for AI problem
solving.
• A good language should be natural, clear and precise, allow you to
represent what is required, and support the sound inference of new
facts.
• Frames and semantic nets represent knowledge as an organised
collection of objects with attributes, arranged in a hierarchy. If an
object is a subclass of another it may inherit its attributes. They are
limited in what can be represented and inferred, but provide a natural
and efficient representation scheme.
• A logic, and in particular predicate logic, may be used as a precise and
formal language able to represent a fairly wide range of things. A
logic may also be used to describe the semantics of other formalisms.
• Rule-based systems allow knowledge to be represented as a set of more
or less independent IF-THEN or condition-action rules, stating what
action to take given different conditions. Reasoning can be controlled
using a forward or backward chaining interpreter.