Semantic Networks


Touretzky Reading: None


This lecture is stolen from Kurt Eiselt. Shamelessly, I might add.

We've been looking at tree-like data structures a lot lately, 
and by now you're probably wondering what is the big 
fascination with these things.  The reason we get all tingly 
about trees, or any sort of hierarchical data structure, is 
that they're great ways to organize knowledge.  Traditional 
linear structures, like files with lots of records, or even 
to some extent simple linear linked lists like those we've 
been using in this class, make it difficult to represent the 
wide variety of relationships which exist between entities in 
the world.  And many times when we're using computers, we're 
really trying to build computational models of some aspect of 
the real world.

Networks in the dictionary

We see hierarchical organizations in the real world all the 
time.  They may not be "pure" hierarchies, but they're 
hierarchical in spirit at least.  It might be easier to think 
of these things as "networks" instead of hierarchies.  Take 
for example the common dictionary.  At first glance, it looks 
like a very linear organization of the words in our language.  
But what a dictionary really specifies is a very complex and 
somewhat hierarchical map of the relationships between the 
words in our language.  Here are some sample definitions:

dog:        any of a large and varied group of domesticated
            animals related to the fox, wolf, and jackal

chihuahua:  any of an ancient Mexican breed of very small dog
            with large, pointed ears

bird:       any of a class of warm-blooded, two-legged, 
            egg-laying vertebrates with feathers and wings

penguin:    any of an order of flightless birds found in the
            Southern Hemisphere, having webbed feet and 
            paddle-like flippers for swimming and diving

ostrich:    a large, swift-running bird of Africa and the 
            Near East; the largest and most powerful of 
            living birds; it has a long neck, long legs, two
            toes on each foot, and small useless wings

canary:     a small yellow songbird of the finch family, 
            native to the Canary Islands

Notice that these definitions all relate the thing being 
defined to some larger class of things, and then go on to 
try to distinguish that thing from other similar things.  
Note also that as the things being described stray further 
and further from what we might think of as being norms or 
stereotypes, the definitions get longer and more detailed.  
For example, compare the canary (a stereotypical bird) to an 
ostrich (an extremely non-stereotypical bird).

When we take the time to look at the dictionary in this way, 
we uncover what is essentially a bunch of pointers from one 
word to others.  Because I'm trying to prove a point here, 
I've focused on the animal kingdom, knowing that zoologists 
spend a lot of time building these "taxonomies", or 
hierarchies of what is related to what.  But it works for 
things other than animals:

chair:      a piece of furniture for one person to sit on,
            having a back and, usually, four legs.

And so on.  You can look up more if you like.  Try "couch", 
"sofa", "table", "ottoman", and whatever.  No matter what 
noun you look up, you'll find the same sort of pattern: 
relate this word to a larger class of things, then describe 
some features to distinguish this thing from similar sorts of 
things.  (Of course, I've purposely avoided dealing with 
another big class of words here--a class you know as "verbs".  
We'll save that topic for CS 4344, the course on natural 
language understanding.  If you're looking for a real LISP 
programming challenge, make sure you join me in that class in 
the future.  And don't even think about getting any sleep 
that quarter.)

The dictionary writers don't always help as much as we might 
like, however:

rock:       a large mass of stone

stone:      the hard, solid, nonmetallic mineral matter that
            rock is composed of

Except for the extra bit of information that stone is mineral 
matter, all we know here is that rock is made of stone, and 
stone is what makes up rock.  Sigh.

In any case, we can use our high-level data abstraction, the 
directed graph (from those notes that still aren't posted 
because there are too many figures), to make these 
relationships a bit more visual.  For example, from the bird 
definitions, we can construct the following abstraction:

                   vertebrate
                        ^
                        | is-a
                        |
                        |          has-part
                        |        /------------- wings
                        |       /  reproduction
                        |      /--------------- egg-laying
                        |     /    body-temp
                        |    /----------------- warm-blooded
                      bird--<      no. of legs
                      ^ ^ ^  \----------------- 2
                     /  |  \  \    covering
               is-a /   |   \  \--------------- feathers
                   /    |    \  \  movement
       color      /     |     \  \------------- flight
yellow ------canary     |      \
       size  /          | is-a  \ is-a
 small -----/           |        \       movement
                        |        ostrich---------- run
            movement    |              \  size
      swim ----------penguin            \--------- big

OK, so I fudged this a little.  I had to infer that the fact 
that birds have wings means that they move around by flying; 
the dictionary writers didn't tell us that.  And I had to 
infer that the fact that birds were egg-laying told us 
something about their reproductive processes, and so on.  But 
you get the idea, no?

Networks in your head

These sorts of knowledge hierarchies show up elsewhere.  
Independent of this dictionary organization, psychologists in 
the 1960s theorized that humans organize at least some of 
what they know in a similar hierarchical fashion.  For 
example, they said, a person's knowledge of things in the 
world might be organized along these lines:

                                all things
                                 /      \
                                /        \
                               /          \
                              /            \
                physical objects          abstract objects
                  /           \                /     \
                 /             \              /       \
                /               \           time   thought
               /                 \
       inanimate                 animate
        objects                  objects
         /   \                   /     \
        /     \                 /       \
       /       \               /         \
  inorganics   plants    mammals         birds
     /   \               /     \         /    \
    /     \             /       \       /      \
  rock   car          dog      human  canary  ostrich

This hierarchy is by no means complete, nor is it exactly 
what the psychologists, Collins and Quillian, had proposed, 
but it's sufficient for our purposes.

If we think of all the upward links as being relationships of 
the form, "the thing below is a subtype of the thing above," 
then we have something called a "type hierarchy".  And in 
fact, we could put the label "S" on all those upward links, 
to indicate that the thing below is a "S"ubtype of the thing 
above.  Folks in the world of artificial intelligence don't 
use the "S" or "subtype" terminology very much; instead, AI 
folks use the label "is-a" in hierarchies like this, as in "a 
canary is-a bird".  So these hierarchies can also be called 
"is-a hierarchies".  Note that we went ahead and used that 
"is-a" label in the first diagram above.

In any case, Collins and Quillian backed up their theory with 
experiments based on this premise:  If people really store 
knowledge in hierarchical form, then if they're asked the 
right questions, we should note significant differences in 
the time it takes those people to respond correctly.  For 
example, the time to answer "yes" to "Is a canary a bird?" 
should be less than the time to answer "yes" to "Is a canary 
an animate object?"

The experiments did in fact generate the right numbers, and 
for a while everyone thought the question of how human memory 
is organized had been answered.  But other experimenters had 
difficulty replicating these results, so there was some 
controversy about just how Collins and Quillian obtained 
their results.  Nevertheless, hierarchical models of human 
memory are still very popular, although they are considerably 
different in their organization from the one we've just 
looked at.

I can see now that I left all the directional arrows off the 
upward links on the second diagram.  Sorry, but if you think 
I'm gonna go back and tweak all those, forget it.  Draw your 
own arrows--they all point up.  And that's an interesting 
abstraction in itself, which in turn generates the question, 
"Why do all the arrows point up?"  The reason, at least from 
either a psychological or AI point of view, is that humans 
typically are better at answering questions like "Is a dog a 
mammal?" than questions like "Name all the mammals you know."  
In other words, people are better at recognition than recall 
or retrieval.  The upward arrows in our diagrams suggest that 
it would be easier to start at the "dog" node and traverse 
the link up to the "mammal" node to answer "Is a dog a 
mammal?", than it would be to start at the "mammal" node and 
try to traverse all the downward links in an effort to 
enumerate all the different types of mammals.

Inheritance

We can get more utility out of our hierarchies if we add 
important and distinguishing properties (or features or 
attributes, all of which are indicated by the links that tend 
to go horizontally rather than vertically), like we did in 
the dictionary example:

                   vertebrate
                        ^
                        | is-a
                        |
                        |          has-part
                        |        /------------- wings
                        |       /  reproduction
                        |      /--------------- egg-laying
                        |     /    body-temp
                        |    /----------------- warm-blooded
                      bird--<      no. of legs
                      ^ ^ ^  \----------------- 2
                     /  |  \  \    covering
               is-a /   |   \  \--------------- feathers
                   /    |    \  \  movement
       color      /     |     \  \------------- flight
yellow ------canary     |      \
       size  /          | is-a  \ is-a
 small -----/           |        \       movement
                        |        ostrich---------- run
            movement    |              \  size
      swim ----------penguin            \--------- big

If we then allow what is called "inheritance" of these 
features or attributes, we get a big win.  Inheritance means 
that one type inherits or takes on the properties of its 
supertypes, assuming that there's no information to the 
contrary.  So, for example, we know that a canary's primary 
mode of movement is by flight, even though we don't see that 
explicitly represented as a property of canaries, because we 
can see that a bird (the supertype of canary) moves by 
flight.  The canary subtype inherits the property of flight 
from the bird supertype.  If we didn't allow inheritance in 
networks like this, we'd have to attach the property of 
movement by flight to every appropriate node in the network.  
Not only that, but we'd have to repeat every specific 
property everywhere that we wanted it in the network, and 
that would cost us a humongous amount of storage space.  So 
inheritance buys us economy of representation, although any 
program that takes advantage of inheritance is going to have 
to do some extra work to search around the network and find 
out which properties are supposed to be inherited.

We can also make exceptions, and say that a penguin moves 
primarily by swimming, even though it's a bird.  We add that 
property explicitly at the "penguin" node, and it overrides 
the default property of movement by flight at the "bird" 
node.  So, in an "inheritance hierarchy" such as this, 
properties are only passed from supertype to subtype when 
there's no explicit information to the contrary stored with 
the subtype.
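
To make that concrete, here's a minimal sketch of what an 
inheritance-aware lookup might look like, assuming each node 
in the network is stored as a list of the form (name (link 
value) (link value) ...)--which is exactly the shape we'll 
build in LISP a little further down.  The names get-node and 
get-property are just made up for this example:

(defun get-node (name network)
  "Return the list of (link value) pairs stored at node NAME."
  (rest (assoc name network)))

(defun get-property (name property network)
  "Look up PROPERTY at node NAME.  If it isn't stored there
explicitly, follow the is-a link and ask the supertype.
A value stored locally always wins, which is what gives us
exceptions like the penguin's swimming."
  (let* ((node   (get-node name network))
         (local  (assoc property node))
         (parent (assoc 'is-a node)))
    (cond (local  (second local))             ; found right at this node
          (parent (get-property (second parent) property network))
          (t      nil))))                     ; no supertypes left to ask

Walk through it with the penguin: the local (movement swim) 
pair is found first, so the bird default of flight never gets 
a chance.  That's all an exception is.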

Have you seen anything resembling inheritance hierarchies 
before?  Well, if you've taken CS 2390, or you know anything 
about object-oriented programming, it should be obvious to 
you that all we've done here is define a set of data objects.  
Yes, it's true, long before there was Smalltalk, C++, or even 
CLOS (Common LISP Object System), cognitive psychologists of 
the 60s, and their counterparts in the land of artificial 
intelligence, were laying out the foundations of what would 
eventually become object-oriented programming.  

How to represent your networks in LISP

Hey, it's simple.  Use an association list.  You remember a-
lists, right?

(defparameter *database*
  '((canary  (is-a bird)
             (color yellow)
             (size small))
    (penguin (is-a bird)
             (movement swim))
    (bird    (is-a vertebrate)
             (has-part wings)
             (reproduction egg-laying)
                :
                :   )))

You'd use the "assoc" function with a key of "canary" to 
extract all the information about the "canary" type.  Take 
the "rest" of that result to give you a list of just the 
links paired with what's at the end of those links.  Then use 
the "assoc" function with a key of "is-a" to find out which 
node is the supertype or parent of "canary".  Any pair that 
doesn't start with "is-a" is just a property explicitly 
represented at that node, which could be inherited by any 
subtype below that node.  Nothing to it.
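
Just to see the machinery in action, here's roughly what a 
session with this little database might look like (using the 
made-up get-property function from the inheritance section, 
and assuming *database* is loaded):

;; Everything we know about canaries, straight from the a-list:
(assoc 'canary *database*)
;; => (CANARY (IS-A BIRD) (COLOR YELLOW) (SIZE SMALL))

;; Strip off the node name, leaving just the link/value pairs:
(rest (assoc 'canary *database*))
;; => ((IS-A BIRD) (COLOR YELLOW) (SIZE SMALL))

;; Which node is canary's supertype?
(assoc 'is-a (rest (assoc 'canary *database*)))
;; => (IS-A BIRD)

;; And with the inheritance-aware lookup sketched earlier:
(get-property 'canary 'color *database*)     ; => YELLOW (stored at the node)
(get-property 'canary 'has-part *database*)  ; => WINGS  (inherited from bird)
(get-property 'penguin 'movement *database*) ; => SWIM   (local exception wins)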

Networks and relational databases

Everything we've shown you so far has been purely tree-like 
in form, but as we've said, that's not always going to be 
the case.  In fact, it's much more likely that the 
organization of these structures will be much more 
convoluted.  Consider some of the relationships which may 
exist in a small company that makes cough drops:


          Smith     options
         Brothers ---------- pay plans
            |\                /     \
            | \              /       \
            |  \      salaried     hourly
            |   \      /           /
       dept.|    ---- /           /
            |        \           /
            |  pay  / ----      /pay
            |  plan/      \    /plan
            |     /   dept.\  /
            |    /          \/
       engineering      shipping
          /   \           /   \
         /     \         /     \
        /       \       /       \
     Arnie    Brian   Chuck    David
     Smith    Smith   Smith    Smith

Ugh.  Anyway, welcome to the exciting world of relational 
databases.  Just like object-oriented programming, relational 
database work is something that evolved from artificial 
intelligence ideas about how to organize knowledge (although 
you'll never get a relational database person to admit this), 
which in turn evolved from ideas in cognitive psychology.
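
If you wanted to fool around with a tangled network like 
this in LISP, one possible (completely made-up) way to write 
it down is as a flat list of (relation thing-1 thing-2) 
triples, instead of the nested a-list entries we used 
before.  Something like:

;; A sketch of the cough drop company as a flat list of triples.
;; The relation names (dept, pay-plan, works-in, options) are just
;; made-up labels; the diagram above doesn't name the employee links.
;; Cycles and shared nodes are no problem here, because every link
;; is simply another entry in the list.

(defparameter *company*
  '((dept     engineering  smith-brothers)
    (dept     shipping     smith-brothers)
    (options  pay-plans    smith-brothers)
    (pay-plan engineering  salaried)
    (pay-plan shipping     hourly)
    (works-in arnie-smith  engineering)
    (works-in brian-smith  engineering)
    (works-in chuck-smith  shipping)
    (works-in david-smith  shipping)))

;; Asking "who works in engineering?" is then just a filter over
;; the list:
(defun related (relation target network)
  "Collect everything linked to TARGET by RELATION."
  (let ((results '()))
    (dolist (entry network (reverse results))
      (when (and (eq (first entry) relation)
                 (eq (third entry) target))
        (push (second entry) results)))))

;; (related 'works-in 'engineering *company*)
;; => (ARNIE-SMITH BRIAN-SMITH)

Each triple is basically a row in a relational table, which 
is one way to see the family resemblance between these 
networks and relational databases.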

Networks on TV (not ABC, CBS, or NBC...or even Fox)

Network abstractions have even been used by popular 
publications to explain what's going on between the 
characters in television shows.  For example, at the height 
of the popularity of the show "Twin Peaks" a couple of years 
ago, both People and Newsweek published very detailed network 
representations of the relationships between the many 
inhabitants of the town of Twin Peaks.  These diagrams are 
*really* complex, so I won't bother to reproduce them in 
ASCII here (whew!), but you could tell just by looking at 
them that these networks are far from tree-like: there's no 
obvious hierarchy, and there are most definitely some cycles.  
(Oh, by the way, here's some more terminology: you'll also 
find structures like these called "semantic networks" or 
"relational networks", but you don't need to worry much about 
that until you take CS 3361.)  But the 
the fundamental ideas about organizing knowledge in terms of 
things and relationships between things are still there, as 
are the fundamental ideas about how to traverse these 
structures, which is the topic of the next set of lecture 
notes.

But in summary, let's revisit the original question, "Why are 
we getting so excited about these trees and/or networks?"  As 
we've seen, the answer is that we can model so many diverse 
things with them.  In just this brief time, we've seen how we 
can model the organization of dictionaries, human memory 
(maybe), a small company, and a fictional town, all using the 
same basic nodes-and-links representation scheme.  
Furthermore, in so doing, we've shown that this common thread 
runs through cognitive psychology, artificial intelligence, 
object-oriented programming, and relational databases, just 
to name a few areas of academic endeavor.  See, there really 
is some method to the madness.  Trust me.



Copyright 1994 by Kurt Eiselt.  All rights reserved.

