This paper provides a general introduction to cognitive psychology. It gives a brief history of cognitive psychology, its problems, and its reemergence due in part to the study of Artificial Intelligence. Artificial Intelligence is often referred to simply as AI and will be referred to as such in this paper. This paper then attempts to cover some of the main areas that have been the focus of study in cognitive psychology. Each of these is a field of study in itself, and so only a brief description of each is given.


The ancient Greeks had an interest in human cognition. The philosophers Plato and Aristotle speculated on thought and consciousness. Plato was a rationalist. He believed learning was actually recollection. Plato originated the ideas of Idealism, Rationalism, and Dualism. Aristotle was an empiricist. He believed perception and observation played an essential role in the acquisition of knowledge. The empiricist and rationalist ideas of knowledge acquisition (and the debate between them) are still with us to this day. Later, the empiricists Hobbes, Locke, Hume, and Mill developed the idea that since certain properties are repeatedly experienced together in our impressions, this repeated association creates a disposition in the mind to associate the corresponding ideas. This associational account of knowledge has had significant influence in computer science, notably in semantic networks and MOPs.

The science of cognitive psychology did not develop until the nineteenth century. Some reasons for this are the egocentric, religious, and confused ideas we had about ourselves before this time (like it's better now!). Few scientists before the nineteenth century believed the workings of the mind could be subject to scientific analysis. This is understandable, since cognitive science is unique among the sciences in that it is the mind studying itself.

In 1879 Wilhelm Wundt established the first psychology laboratory in Leipzig, Germany. Wundt was studying what we now think of as cognitive psychology. He was interested in mental phenomena, primarily consciousness: what it was, how we could analyze it, and how we could determine its structure. The method used in many of these early studies was introspection. Introspection is basically asking subjects to describe their conscious experiences in response to some question or problem involving association. The descriptions they gave were used to understand their thought processes. But there were many problems with this method. Many subjects found they could not "put into words" or accurately describe their thought processes. Practitioners of the introspective method often gained results that supported the theory they were seeking to establish, which called into doubt the objectivity of the method. Also, the introspective method was not replicable, and as such was not good science. The problems evidenced by the introspective method still plague us today in our study of the mind.

The behaviorist tradition in America during the 1920s was a reaction against not only the introspective method, but also the study of consciousness in general. Behaviorists believed that for psychology to be a true science it could not study mental events.

" Behaviorism claims that consciousness is neither a definite nor a usable concept. The Behaviorist who has been trained always as an experimentalist, holds further that the belief in the existence of consciousness goes back to the ancient days of superstition and magic. [Watson, 1930].(1)"

What an overreaction this was. The behaviorists should have seen the flaws in the method but understood them as a result of the complexity of studying the workings of the mind. Rather than reject the study of mental events outright, they should have developed and tested new methods. The behaviorists rejected the study of mental events because they were interested only in objective behavior, behavior that could clearly be observed and measured. Perception itself could be argued to be a mental event, but the behaviorists believed that perception was "clear" in that the organism did not influence its own perceptions. In any case, the behaviorist tradition and its prevalence in psychology did little to further the science of cognition.

Cognitive psychology did not see great advance until World War II, with the development of information theory and the study of information processing through information-processing analysis. Information-processing analysis is the tracing of a sequence of mental operations and their products in the performance of a particular cognitive task. This type of analysis is common in all of cognitive psychology, and while there are other approaches, the information-processing approach is still the dominant viewpoint today.

The advent of computer science, most notably the study of artificial intelligence, provided an enormous, though indirect, influence on the study of cognitive psychology. AI scientists such as Allen Newell and Herbert Simon began writing programs that played games (chess) and solved problems. These scientists had begun to think of the human mind as a model to replicate. One example of the crossover between these AI scientists and psychology is the fact that Newell and Simon had a paper published in the journal "Psychological Review" (1956), one of the most prestigious psychological journals to this day. The AI scientists wrote programs that seemed, or were, intelligent, and were doing studies comparing these programs to the performance of humans. They had broken through the behaviorists' arguments against studying cognition. One cannot argue that modeling a phenomenon is unscientific. They had justified the study of mental phenomena. The importance of this indirect effect on psychology cannot be overstated.

In large part because of the work of AI scientists, cognitive psychologists began to think about people as information processors or symbol manipulators. They also found in computer science an abstracted level of analysis. An example: to try to understand what happens in the mind when a person solves a problem at the level of the neuron is too complex, and would probably not be of much interest to a psychologist even if it could be done. The psychologists saw a parallel in what happens inside a computer. If I tried to explain a program at the level of the individual components that make up the machine, the explanation would not be very satisfactory. Instead, programs are understood at the level of the computer code: if x do a, if y do b, if z do a and b.
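That code-level description can be sketched as a few condition-action lines. This is a toy illustration only; the states "x", "y", "z" and the actions "a" and "b" are placeholders, not anything from the literature.

```python
# Toy illustration of describing behavior at the level of code rather than
# hardware: a few condition-action rules mapping states to actions.
def respond(state):
    if state == "x":
        return ["a"]
    if state == "y":
        return ["b"]
    if state == "z":
        return ["a", "b"]
    return []  # unrecognized state: no action
```

The point is the level of description: nothing here mentions transistors, just as a psychological explanation need not mention neurons.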


What follows are brief descriptions of some subjects that have been the main focus of cognitive psychology in its short history.


The representation of knowledge is of course a main focus when studying the mind. The representation of mental imagery, information, and our use of schemas and prototypes are all areas that have received extensive study. The study of mental imagery attempts to find out how we represent and process spatial relationships. There is little agreement on the representation underlying imagery and so the focus has been on the kinds of operations that can be performed on mental imagery.

One such operation is mental rotation. In an experiment by Shepard and Metzler (1971), subjects were given two pictures, each showing a three-dimensional object. The task was to determine if the objects were the same. Sometimes the objects were different; sometimes the objects were the same except that one was rotated. The greater the degree of rotation, the longer it took the subjects to determine they were the same. Subjects reported they rotated the objects "in their heads" trying to make a match. This was consistent with the data, which also indicated the subjects were solving the problem by mentally rotating the visual images. The fact that it took longer to solve comparisons in which there was a greater rotational difference suggests the subjects' self-reporting was accurate. Notice the difference between this and introspection. With verbal reporting, all data must be consistent, and data is gathered by many means besides just the subject's verbal report; visual clues, such as a subject rotating his hand while attempting to solve the problem, are examined along with many others.
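The linear relationship between rotation angle and response time can be sketched as a simple model. The intercept and slope below are illustrative placeholders, not Shepard and Metzler's fitted values.

```python
# Sketch of the mental-rotation finding: response time grows roughly
# linearly with the angular disparity between the two objects.
# The intercept and the seconds-per-degree slope are hypothetical values.
def predicted_rt_seconds(angle_deg, intercept=1.0, sec_per_degree=0.018):
    return intercept + sec_per_degree * angle_deg
```

Under this model a 120-degree rotation takes longer than a 60-degree one, matching the subjects' reports of rotating the image at a roughly constant rate.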

There are two main theories as to how information is represented in memory. In the dual-code theory, information is stored in memory as visual images and verbal representations. Other memories may be rooted in touch, taste, or smell, but visual and verbal memories appear to be dominant. In this theory it is believed that no memories exist that are not rooted in a particular sensory modality. Another theory is that information is represented in terms of propositions. This means that the meaning of an event is represented. Proposition, in this sense, has a mathematical or logical context in that it means the smallest unit about which a decision of true or false can be made. For example, "Strider and Frodo went climbing on a sunny afternoon" can be broken into three propositions: "Strider went climbing," "Frodo went climbing," and "the afternoon was sunny." The meaning of these might be represented by lists of relationships and arguments. For example:

(Climbing-- Strider, Frodo)

(Sunny-- afternoon)

The exact wording is not preserved but the meaning is. (Man, I am having Prolog deja vu.) The meaning also may be represented with propositional networks (I swear these cognitive scientists stole everything they know from AI). Studies by Collins and Quillian (1969) on the retrieval time of information from memory strongly support the idea of representation with propositional networks. But other studies, such as one conducted by Santa (1977), showed the importance of linear order for verbal memory and spatial configuration for visual memory. These findings seem to challenge the propositional-network representation. The representation of information in memory is probably a mix of the dual-code theory and the propositional network. To me, intuitively, the propositional-network theory seems valid.
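A minimal sketch of this representation, using the Strider/Frodo propositions above. The data structure is my own illustration of relation-argument lists, not a model from the literature.

```python
# Propositions stored as (relation, arguments) pairs, following the
# relationship-and-arguments lists in the example above.
propositions = [
    ("climbing", ("Strider", "Frodo")),
    ("sunny", ("afternoon",)),
]

def facts_about(entity):
    # retrieve every proposition whose arguments mention the entity
    return [(rel, args) for rel, args in propositions if entity in args]
```

Note that the exact wording of the original sentence is gone; only the meaning, as relation-argument pairs, is kept.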


Schemas and prototypes are other areas of study on which cognitive psychology has focused. A schema can be thought of as a pattern that determines how perceptual elements are brought together and organized. A prototype can be thought of as a generic instance of a type. A schema is sort of a blueprint for some operation or category.

We compare things to this blueprint in order to classify and organize them. Labov (1973) did studies showing that schemas do not have well-defined boundaries: when does a cup become a glass, or a bowl, or a pot? The schema boundaries are constantly changing, and the schemas themselves are revised to fit our needs. Schemas are not just for type categorization. We have schemas for ideas, behaviors, learning, etc. One such type of schema is referred to as a story schema. A story schema is like a script for a movie or play. When someone tells a story, the listener often "pictures" it in his or her head. If the story were "I went to a nice restaurant last night and really enjoyed myself," the listener would "fill in" the details according to his own schema. The listener adds the building, the waiters, the music, the decor of the room, all sorts of things, according to his schema for "nice restaurants." The part "really enjoyed myself" is not left blank by the listener; since he may not know how the storyteller "enjoys himself," the listener fills it in with how HE enjoys himself. An interesting study by Bartlett (1932) was done in which he told English subjects a story that did not fit well within their schemas. The story was taken from Canadian Indians. The subjects, when asked to recall the story, had changed it to fit in with their schemas. They had not just forgotten parts of it but systematically altered it to fit their schemas. Schemas enable us to operate in our world. They organize and direct our acquisition of knowledge, our behaviors, and our actions.


As I looked into memory and learning, memory specifically, I kept having to remind myself that I was reading a cognitive psychology text about the mind and not a computer science book about computers. Short-term memory is transient in that it fades away and does not necessarily become part of long-term memory. It is thought of as working memory (they're talking about RAM). Short-term memory is in the same location as long-term memory but in a different state. Long-term memory can be retrieved and placed in short-term memory for use. It is not actually retrieved or placed anywhere, but activated as working memory. Connections made between nodes in short-term memory are not permanent and do not continue when the information is deactivated. The connections or associations can become part of long-term memory, but are not automatically so made. Often when you make a new association between items in short-term memory, the next thought is "I have to remember that." It seems to me that a good analogy is that these items in short-term memory are "pointers" to the actual memories, and so connections made between these "pointers" are lost when the pointers go away unless effort is made to connect the actual instances of the memories themselves. George Miller (1956) introduced the idea of a chunk, which is a unit in long-term memory. He believed that we could store seven (plus or minus two) "chunks" in short-term memory. Of course, these "chunks" could themselves be large data structures. The speed at which subjects can retrieve information is tied to how well the subject knows the material.
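Miller's chunking idea can be sketched in a few lines. Grouping a digit string into chunks lets more raw items fit within a roughly seven-chunk short-term capacity; the phone number below is a made-up example.

```python
# Sketch of Miller's "seven plus or minus two": grouping digits into chunks
# reduces the number of short-term memory units needed to hold them.
def chunk(digits, size):
    return [digits[i:i + size] for i in range(0, len(digits), size)]

phone = "5055551234"          # ten separate digits...
grouped = chunk(phone, 3)     # ...but only four chunks
```

Ten digits would strain a seven-item capacity; four chunks fit easily, and each chunk can itself be a large structure in long-term memory.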

If a memory has been activated, this activation will speed up retrieval times for an associated memory. If questions about animals are asked, subsequent questions about animals will be answered faster than if the previous questions were about an unassociated memory.
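This priming effect can be sketched as spreading activation over an association network. The items and the boost values below are hypothetical, chosen only to show the mechanism.

```python
# Toy spreading-activation sketch: retrieving a memory fully activates it and
# partially activates its associates, so they are retrieved faster next time.
associations = {"dog": ["cat", "bone"], "cat": ["dog", "milk"]}
activation = {}

def retrieve(item):
    # full activation for the retrieved item, partial for its neighbors
    activation[item] = activation.get(item, 0.0) + 1.0
    for neighbor in associations.get(item, []):
        activation[neighbor] = activation.get(neighbor, 0.0) + 0.5
```

After retrieving "dog," the associated "cat" already carries some activation, which is the sketch's stand-in for a faster answer to the next animal question.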

The difference between recall and recognition is that while recall tries to find an association in one direction, recognition offers many paths to association. For example, the question "Who was in the cave?" requires that you search from "cave" to find associations to who was in it. But with the question "Were Strider, Frodo, and Sam in the cave?" any path from Strider, Frodo, Sam, or cave could be followed to associations that would provide the answer. Along these same lines, elaboration of a subject will help in recall in that more paths to the information are developed.
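The asymmetry can be sketched over a tiny association graph built from the cave example. The link structure is my own illustration.

```python
# Sketch of recall vs. recognition over a toy association network:
# recall searches outward from a single cue, while recognition can start
# from any of several cues and follow any path back to the target.
links = {
    "cave": {"Strider", "Frodo", "Sam"},
    "Strider": {"cave"}, "Frodo": {"cave"}, "Sam": {"cave"},
}

def recall(cue):
    # one starting point: everything associated with the cue
    return links.get(cue, set())

def recognize(candidates, place):
    # many starting points: any candidate linked to the place is confirmed
    return {c for c in candidates if place in links.get(c, set())}
```

Recognition has three chances to find a path where recall has one, which is the sketch's version of why recognition is usually easier.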

Memory elaboration and inference take place both in memory storage and in recall. For instance, given the sentence "After hearing what the doctor had to say, Strider was sad" and later asked to recall it, subjects will often have inferred things, producing perhaps something like "After hearing the doctor's bad news, Strider was sad." The subjects have inferred that the news was bad because it made Strider sad. This inference can take place at the retention or the recall stage.

There is much debate, but no answer, to the question of whether forgotten memories are gone or whether the paths to them are too weak for activation. Learning new material does not seem to "push out" old material, but it does seem to interfere with it. This is true when the new material overlaps the old. As Sherlock Holmes puts it:

"Depend upon it, there comes a time when for every addition of knowledge you forget something that you knew before. It is of the highest importance, therefore, not to have useless facts elbowing out the useful ones.[ Arthur Conan Doyle. 1904,p.16]" (1).

I had to get that quote in here as I personally try and avoid "useless facts" whenever possible.


Cognitive psychologists define two main types of knowledge. Declarative knowledge is knowledge of facts. Some examples of declarative knowledge are: the earth is a planet; the stars at night are big and bright deep in the heart of Texas; time, matter, and distance are all relative; and cognitive scientists rule. Procedural knowledge is knowledge about how to do things, for example, how to ride a bike, play music, or do a flip off a diving board. Declarative knowledge can usually be expressed verbally while procedural knowledge usually cannot. AI has developed procedural formalisms that have served as models for how cognitive skills are represented. One of them, the production system, models procedural skills. A production system is a set of rules, a working memory, and a control cycle that applies the rules to the working memory. The working memory is the "state" in the procedure. The rules are condition-action pairs and can be thought of as an extension of stimulus-response actions. Procedural knowledge starts out as declarative. There is a transition that occurs as the knowledge moves from being explicit to implicit.
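A minimal production-system sketch, with the three pieces named above: rules, a working memory, and a control cycle. The countdown rule is a toy example of my own invention, not a published model.

```python
# Minimal production system: condition-action rules applied to a working
# memory by a control cycle until no rule's condition matches.
def run(rules, memory):
    fired = True
    while fired:
        fired = False
        for condition, action in rules:
            if condition(memory):
                action(memory)   # the action updates working memory
                fired = True
                break            # restart the cycle from the first rule
    return memory

# Toy rule: while the counter is positive, decrement it.
rules = [(lambda m: m["n"] > 0,
          lambda m: m.update(n=m["n"] - 1))]
```

Running `run(rules, {"n": 3})` fires the rule three times and halts with the counter at zero; the "stimulus" is the memory state and the "response" is the update.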


Problem solving is any goal-directed sequence of cognitive events. Cognitive psychologists have made a distinction between creative problem solving and routine problem solving. Creative problem solving involves some insight: a solution that is new, at least to you, is created. Routine problem solving is the subsequent application of those methods to the same or similar problems. The problem solver is said to be in states. The initial state is the given conditions at the start of a problem. The intermediate states are subsequent states that may be reached by the solver. These may or may not lead to the goal state. The goal state is the desired state, in which the goal has been achieved. All the states that are achievable are defined as the problem space. Functions that move from one state to another are called operators.

People choose which operator to apply in many ways. One way is similarity: if an operator will take you to a state that is similar to the goal state, it is chosen. This is usually combined with look-ahead, in that a "bad" state might have to be entered to reach the goal state. How you decide which states are good and bad is an example of heuristics. Heuristics are "rules of thumb" that tend to lead one to the goal state. If one is trying to move a knight on a chessboard to a particular position, one could "exhaustively" try every move available until the goal was reached, or one could employ a "proximity" heuristic and choose moves that end in a state closer to the goal state. The exhaustive search is an algorithm. The proximity rule is a heuristic. Algorithms are usually "tempered" with heuristics.
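The knight example can be sketched directly. The greedy step below is my own illustration of the proximity heuristic; as the text says, a heuristic only tends toward the goal, and a purely greedy choice can still wander.

```python
# Sketch of the proximity heuristic for moving a knight toward a goal square:
# among the legal moves, greedily pick the one that ends closest to the goal.
KNIGHT_JUMPS = [(1, 2), (2, 1), (-1, 2), (-2, 1),
                (1, -2), (2, -1), (-1, -2), (-2, -1)]

def legal_moves(square):
    x, y = square
    return [(x + dx, y + dy) for dx, dy in KNIGHT_JUMPS
            if 0 <= x + dx < 8 and 0 <= y + dy < 8]

def greedy_step(square, goal):
    # proximity heuristic: minimize distance to the goal after the move
    return min(legal_moves(square),
               key=lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1]))
```

Exhaustively trying every move sequence is the algorithm; ranking moves by proximity is the heuristic that prunes it.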

Newell and Simon have extensively used a method called means-ends analysis to organize problem solving in their computer simulations. Means-ends analysis is planning the search around what must necessarily be done before something else. Some states can only be reached from certain other states, and so the operations must be ordered. Other techniques used in problem solving include working backward from the goal to the start state, as well as problem-reduction procedures.


When we add working procedures for solving a problem to the problem space and operators, we have a production system. A production system to solve math problems would contain all the operators that can take us from one state of the problem to another. An interesting study found that the repeated use of a working procedure can "blind" us to other solutions. If a subject is given ten problems and the same working procedure obtains the answer in the first eight but not the last two, he typically will not be able to solve the last two, even though subjects who were given only the last two problems can solve them. The repeated use of a system that gained a solution "blinds" us to other approaches. This is related to the "incubation effect" in problem solving, which is that often, if a difficult problem is put aside for a while, the answer just seems to "come to us." This is believed to happen because after repeated attempts to solve the problem we have become fixed on particular methods we believe should produce a result, blinding us to other methods. Research in AI has contributed greatly to the field of problem solving.

"Newell and Simon (artificial intelligence researchers) have been the most influential workers on problem solving in the modern era"(1).



There are so many more subjects cognitive psychology is studying that it would be a disservice to simply say there are many more subjects cognitive psychology is studying. Language would be prominent among them.

In the last several decades, cognitive psychology has undergone a union with computer science (AI research in particular), neurobiology, philosophy, linguistics, and others in the relatively new discipline of cognitive science. In this new science, I believe, many new discoveries about the mind will take place. It is in cognitive science that my interest, as a computer scientist, lies. Here I will attempt to answer the age-old question: What is this thing "I"?





Cognitive Science

By George F. Luger, Academic Press, Inc., 1994

Lectures by George F. Luger

Professor, CS438


Lecture by Tim Goldsmith

Guest Lecture, CS438


Cognitive Psychology and Its Implications

By John R. Anderson, W. H. Freeman and Company, 1980



By Stephen K. Reed, Brooks/Cole Publishing Company, 1996


Cognitive Psychology

By Ulric Neisser, Meredith Publishing Company, 1967




  1. Cognitive Psychology and Its Implications

By John R. Anderson, W. H. Freeman and Company, 1980