The Learning Game
Vikaas S. Sohal
Fool me once, shame on you; fool me twice, shame on me. The old adage
points to what may be a human being’s most important asset: the ability
to learn. We as humans learn every day of our lives, and yet, when we try
to explain how we learn, we run into trouble. Pick up a baseball and a
bat, and practice your swing for long enough and one would hope that you
would eventually learn how to hit the ball. After watching a professor
do an example problem on the chalkboard (and after reading more examples
in the textbook and perhaps asking your TF to do yet more examples for
you), you ought to be able to reproduce the professor’s methods to solve
similar problems. We humans (and most of the animal kingdom) learn like it was going out of style, even though we have no idea how we do it.
Machines Do It Too
Machines are a little bit like human beings, in that they can sometimes perform some of the same functions as human beings. A human being who has completed a couple of years of elementary school can usually add and subtract. So can tiny microchips. The difference is that until the latter half of this century, human beings, unlike machines, could learn. Machines had to be built for a specific purpose, or occasionally, they could “learn” to perform new tasks, but only if a human programmer explicitly specified every step of those tasks.
But for the past thirty or so years, researchers have begun to find
ways for machines emulating the structure of the human brain to “learn.”
Although no machines are likely to begin enrolling alongside college students
anytime soon, they are beginning to provide models of human learning.
While many researchers have been diligently working to discover the molecular, cellular and anatomical features of the nervous system, others have begun studying how these features contribute to abilities such as learning, by using neural networks. The nervous system is made up of cells called neurons, and every neuron can be characterized by a level of activity, measuring the rate at which it undergoes electrochemical fluctuations known as action potentials. The activity in certain neurons, or the pattern of activity among groups of neurons, is thought to underlie the functions of the nervous system ranging from knee-jerk reflexes to cognition.
How do these patterns of activity come about? Each neuron sends connections to many neurons and receives connections from many other neurons. As a result, each neuron's activity is determined by the activity of other neurons, and each neuron's activity affects many other neurons’ activity levels. In short, the neurons in our brain really are connected up into a kind of network.
Neural networks are scientists’ attempts to capture this structure. A computer can store the values of the activity levels for many neurons and can modify these activities to simulate the effects of the connections between neurons. A neural network can also simulate the effects of changes in the external environment (such as the presentation or removal of a particular stimulus) by changing the activities of those neurons which depend on the external environment. A neural network is nothing more than a computer simulation of how the activities of a group of neurons change over time as they communicate and influence each other and as the environment changes. It is no different from a computer game of Risk, which simulates the changes in groups of armies as they do battle, receive reinforcements, etc.
After setting up armies in their initial position, a Risk game evolves such that the particular configuration of armies on the Risk board determines the situation (victory for one player, conflict between two colliding armies, a retreat, etc.). Similarly, after giving the neurons in a neural network an initial set of activities to represent the “input” provided by the environment, the neurons influence each other, so that the pattern of activities in the network changes. The resulting patterns of activities might be said to represent the “output” the network has reached from the inputs. In terms of human beings, your math teacher might write a math problem on the board, eliciting some “input” activity in your brain. Presumably, some computations occur as the neurons in your brain communicate, and the new set of activities gives you an “output” — the correct answer or at least a good guess.
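The whole setup can be sketched in a few lines of code. The following toy simulation (every weight and activity below is invented purely for illustration; real networks are far larger) shows an "input" pattern of activities being repeatedly transformed by the connections between neurons until an "output" pattern emerges:

```python
import numpy as np

# A toy network of 3 neurons. Entry [i, j] is the influence that
# neuron j's activity has on neuron i. All numbers are invented.
weights = np.array([[ 0.0,  0.8, -0.4],
                    [ 0.5,  0.0,  0.9],
                    [-0.7,  0.3,  0.0]])

def run_network(input_activities, steps=20):
    """Start from the 'input' pattern and let the neurons
    influence each other for a fixed number of steps."""
    a = np.array(input_activities, dtype=float)
    for _ in range(steps):
        # each neuron's new activity is a squashed weighted sum
        # of every neuron's current activity
        a = np.tanh(weights @ a)
    return a  # the network's resulting 'output' pattern

output = run_network([1.0, 0.0, 0.5])
```

Here `np.tanh` simply keeps each activity within a fixed range, standing in for the fact that real neurons cannot fire arbitrarily fast.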
Two Classes of Learning Rules
Clearly, the output state reached by the neural network for a given input depends on the kind of communication which takes place between the neurons. So the only way the network can “learn” to return a different output state after starting with a given input state is by changing the way in which its neurons communicate — by making one neuron’s activity influence some neurons less strongly, and others more strongly, for example. The method by which a neural network determines how to change the influences of particular neurons on other neurons is called the learning rule in neural network jargon, and broadly speaking, most learning rules fall into two categories: supervised learning and unsupervised learning.
One way to think about the difference between supervised and unsupervised learning is to think about road trips. Really. Suppose you have two groups of friends, both of whom want to go on road trips. The problem is that no one knows exactly how to get to a fun place. This problem is not entirely unlike that faced by a neural network. A neural network needs to “reach” an appropriate “output” starting from an “input” but it does not know how its neurons should influence each other to make this happen. After all, many different outputs would be possible depending on exactly how the neurons influence each other. Just as your friends need to find a way to get to a fun place, the neural network needs to find the right set of interactions between neurons.
Looking For Fun in All the Wrong Places (And One Right Place)
The first group proposes the following plan for finding a fun destination. From the town in which they start they will ask the locals in what direction to drive to get closer to Disneyland. Then they will drive in that direction until they reach the next town. At that town they will once again ask the locals in what direction to drive to get closer to Disneyland. Then they will drive in that direction until reaching another town and repeat the process again and again until they reach either Disneyland (and presumably have fun) or end up somewhere from which they can no longer get any closer to Disneyland (e.g., prison).
This method of finding the most fun destination is a very crude approximation to the types of learning rules which fall into the class of supervised learning. A neural network begins with a particular set of weights — weight simply refers to the amount of influence one neuron’s activity has on the activity of another neuron, so the set of weights describes the way in which all the neurons’ activities influence each other. There is a known set of input activities and, crucially, a known, desired set of output activities. The desired output corresponds to Disneyland in the analogy. Supervised learning methods compute which small change in the set of weights (the ways in which neurons’ activities influence each other) decreases the difference between the output produced by the current set of weights and the desired output. Then, they make this small change to the weights. Based on the updated weights, the next appropriate change to bring the output closer to the desired output is computed, and the process is repeated until the output is sufficiently close to the desired output, the same way that our hypothetical friends drove from town to town until they got sufficiently close to Disneyland.
The key aspect of the analogy is that during each step of each process, the system (our friends or the neural network) makes a change which it knows in advance will bring it a little bit closer to the desired goal: the network computes the change which would bring it closer to the desired output, and the friends ask the locals what direction to take next.
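As a rough sketch of this kind of supervised rule, the code below (a minimal delta-rule example with invented numbers, not the method of any particular study) repeatedly computes the difference between the current output and the known, desired output — "Disneyland" — and nudges the weights in the direction known to shrink that difference:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(2, 3)) * 0.1  # 3 input neurons -> 2 output neurons

x = np.array([1.0, 0.5, -0.3])   # fixed 'input' activities (invented)
target = np.array([0.2, 0.9])    # the known, desired 'output' -- 'Disneyland'

learning_rate = 0.1
for step in range(200):
    y = weights @ x                  # output produced by the current weights
    error = target - y               # how far are we from 'Disneyland'?
    # make the small change known in advance to reduce that difference
    weights += learning_rate * np.outer(error, x)
```

After the loop, `weights @ x` sits very close to `target`: each step provably shrinks the remaining error, just as each leg of the first group's drive brings them closer to Disneyland.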
But these sorts of methods are not the only ones available to students eager for a fun road trip or neural networks seeking to generate an appropriate output for some given input. Imagine a second group of friends who, like the first, drive from town to town. These friends do not know what their final destination will be. And after reaching each town, this group does not ask the locals where to go next. Instead they fan out in every conceivable direction. Some directions lead to dead ends. Some lead back to where they started. Some lead to places which are less fun. But a particular direction turns out to be the most fun, and eventually, after exhausting all other possible options, the group travels in this direction, reaches the next town, and repeats the process.
This group of friends operates in a similar manner to neural networks using unsupervised learning rules. Simply put, no one tells an unsupervised network where to go next. At each step, there are many possible changes in the set of weights which the neural network could make, and each represents a step towards a different set of output activities. The network does not literally try every possible change before selecting one, but the change is chosen without the knowledge of what direction leads towards some “known, desired goal.” The exact criteria by which the appropriate change is chosen are difficult to explain non-mathematically, but those changes which lead to “dead ends,” i.e., which are not self-consistent or not reinforced, are not made, whereas those changes which do meet the criteria are made. If each change in weight were like a flavor of ice cream, supervised learning might be like asking the person behind the counter what flavor to have, while unsupervised learning would be more like trying one taste of each flavor and then making a selection. The key aspect of unsupervised learning which differentiates it from supervised learning is that the changes in the weights which are ultimately made are borne out by experience and not based on advance knowledge.
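One classic rule of this flavor is competitive learning (again, all sizes and inputs below are invented for illustration). Notice that the sketch never consults a desired output; at each step the neuron whose weights best match the input simply "wins" and moves its weights a little closer to whatever input happened to arrive, so the changes that get made are the ones reinforced by experience:

```python
import numpy as np

rng = np.random.default_rng(1)
# Three competing neurons, each receiving 4 inputs. Sizes are invented.
weights = rng.random((3, 4))
weights /= weights.sum(axis=1, keepdims=True)  # start from normalized weights

def competitive_step(x, rate=0.1):
    """One unsupervised update: no desired output is consulted.
    The neuron whose weights best match the input 'wins', and only
    its weights move a little closer to that input."""
    winner = np.argmax(weights @ x)
    weights[winner] += rate * (x - weights[winner])

# feed the network a stream of random 'experiences'
for _ in range(100):
    competitive_step(rng.random(4))
```

Over many inputs, each neuron drifts toward a cluster of inputs it keeps winning, even though no one ever told the network what its outputs should be.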
Thus, although the two classes of learning rules (or two methods of road-tripping) may seem the same superficially, they are really quite different. Supervised learning would clearly be expected to be (and usually is) much more efficient than unsupervised learning. After all, how many people do you know who road trip by driving town to town and fanning out in all directions before settling on the next leg of their journey? For this reason, supervised learning usually provides the learning rules of choice for engineers and many computer scientists.
Back to Humans
But there is an issue of human learning which hasn’t really been considered
since the opening paragraphs of this article. Since both supervised and
unsupervised learning take place in neural networks meant to model a real
population of neurons, it seems natural to ask whether either supervised
or unsupervised learning provides a model for human learning.
Turning for a moment to the physiology of learning, most talk these days is about the phenomena of long-term potentiation (LTP) and long-term depression (LTD). LTP and LTD are mechanisms by which the influence the activity of one neuron has on the activity of another neuron can change.
Recall that these very changes were at the heart of learning in neural networks. So examining how these changes occur among real neurons might shed some light on the relationship of learning rules for neural networks to actual learning.
What is most amazing about LTP and LTD is that the amount by which the influence of one neuron on another changes seems to depend primarily on only two quantities: the activity levels of the two neurons involved. Recall that in supervised learning, the amounts of change made in the network’s weights (the weights are just numerical representations of the influence between neurons’ activities) depended on knowledge of the desired pattern of neurons’ activities and on a calculation of what changes in the weights would bring the neurons’ activities closer to the desired patterns. Clearly, this is much more knowledge than actual neurons seem to use when they change the amount of influence different neurons have on each other. Herein lies the main objection to supervised learning: that it is not biologically realistic.
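A rule that uses only those two quantities is easy to write down. The sketch below is a covariance-style Hebbian update — a textbook idealization, not the biochemistry of LTP and LTD itself, and all the activity values are invented — in which each weight changes using nothing but the activities of the two neurons it connects:

```python
import numpy as np

def covariance_hebb(w, pre, post, rate=0.01):
    """Change each weight using only the activities of the two neurons
    it connects: jointly above-average activity strengthens the
    connection (LTP-like); mismatched activity weakens it (LTD-like)."""
    return w + rate * np.outer(post - post.mean(), pre - pre.mean())

pre = np.array([1.0, 0.2, 0.8])   # presynaptic activities (invented)
post = np.array([0.9, 0.1])       # postsynaptic activities (invented)
w = np.zeros((2, 3))              # weights from 3 'pre' to 2 'post' neurons
w_new = covariance_hebb(w, pre, post)
```

Unlike the supervised sketch earlier, nothing here refers to a desired output or to any global calculation — each weight update is purely local, which is exactly the property that makes rules of this kind look biologically plausible.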
This is not to say that supervised learning has no place in modeling the way human learning occurs. Although the methods supervised learning and biological systems use to change the ways in which neurons interact may be different, both supervised learning and real systems may still arrive at the same final interactions among neurons after learning to solve the same problem. And many researchers have suggested ways in which the information necessary for groups of actual neurons to change their interactions in a way consistent with supervised learning might occur. LTP and LTD need not be the only possible mechanisms for learning; it is just that other mechanisms have yet to be precisely described.
Nevertheless, in contrast to supervised learning, unsupervised learning seems to be possible using only the kinds of mechanisms for changing the interactions among neurons suggested by LTP and LTD. Unsupervised learning does not require excessive amounts of knowledge — actual experience takes the place of this extra knowledge. In the ice cream cone example, tasting every flavor before making your selection takes longer, but doesn’t require obtaining knowledge by asking someone else.
Lessons for People
So to sum up, on the one side there is supervised learning and efficiency, while on the other side there is unsupervised learning and (possibly) biological realism. In the future, new physiological mechanisms for learning and more possible learning rules for neural networks may be discovered, but for now, most researchers choose to pursue one or the other class of learning rules. Of course, none of this is likely to help any of you learn the material in any of your classes, but it might provide useful (if time consuming) ideas for what to do next spring break.
Vikaas S. Sohal '97 (firstname.lastname@example.org) is an Applied Math concentrator
in Leverett House.
The Development of the Neuromuscular Junction
The neuromuscular junction (NMJ) is a particularly interesting subject of study due to the wealth of information known about its structure and function. The NMJ is the only example of a synapse between a nerve and a muscle and plays an important role in successful motor control.
In developing mammalian muscle, multiple motor neurons innervate the same NMJ on each muscle fiber. During early postnatal life, most of these connections are retracted until each muscle fiber is innervated by one motor neuron. This process is called synapse elimination. In terms of biochemical processes, it is widely known that the main neurotransmitter released in the NMJ is acetylcholine (ACh), which is picked up at the post-synaptic membrane by acetylcholine receptors (AChR). The localization of AChRs has been achieved by staining with alpha-bungarotoxin, which binds to the AChRs and blocks them (Zaimis, 1976; Brumback and Gerst, 1984; Dowling, 1996).
The fine structure of the NMJ has been studied extensively using electron microscopy, which has revealed many of its smallest details. Scientists have used this technique to study the successive steps of nerve-muscle differentiation to form the NMJ during development. The development of the NMJ begins in utero with an axon and its Schwann cell lying close to embryonic cells of muscle mass, which are characterized by a large nucleus, sparse cytoplasm, granular endoplasmic reticulum, many mitochondria, and polyribosomes (Fig. 1, A). Gradually, clusters of axons approach the developing myotubes, which are characterized by glycogen granules, lipid droplets, and peripherally located myofibrils (Fig. 1, B). Clusters of axon terminals eventually sit on the surface of the myotube, covered and separated from one another by a Schwann cell. The postsynaptic sarcoplasm is devoid of myofibrils, but has many mitochondria. Both the Schwann cell and the myotube have a fuzzy coat of basal lamina (Fig. 1, C). Gradually, axon terminals on the surface of the myofiber become separated by processes of the Schwann cell. The post-synaptic membrane begins to form the characteristic folds, and the basal lamina now runs throughout the synaptic cleft (Fig. 1, D). Finally, the mature NMJ forms when the post-synaptic membrane forms the primary synaptic gutter that engulfs the axon terminal. The prominent basal lamina extends throughout the cleft, including the depths of the secondary post-junctional folds (Fig. 1, E; taken from Brumback and Gerst, 1984).
Having described the morphogenesis of the NMJ, it is also important to consider some of the most recent research findings. In addressing the question of how pre- and post-synaptic differentiation are achieved, scientists have suggested that specific signals during development allow for the appropriate differentiation of the axon terminal and the post-synaptic muscular membrane.
Little is known about pre-synaptic differentiation, but the recent finding that omega-conotoxin-binding Ca++ channels are concentrated at the active zones of the axon terminal provides a molecular marker for the spatial organization of the nerve terminal (Jennings and Burden, 1993).
In terms of the post-synaptic differentiation, that is the differentiation of the muscular surface, there seems to be more information available. Studies of AChRs have shown that their clustering at the post-synaptic region may arise in two ways: a) pre-existing AChRs are redistributed to the synaptic sites, or b) new AChRs are synthesized locally from mRNAs located in the synaptic region. This process is called synapse specific transcription (Jennings and Burden, 1993).
Preliminary findings seem to indicate that the synthesis of new AChRs is linked to a process of local transcription that takes place in the synaptic nuclei, within the post-synaptic region. This synapse specific transcription seems to be initiated by signals received from the basal lamina. The basal lamina (Fig.1) is a formation that appears during the development of the NMJ, and it is thought to release bioactive molecules, such as the recently isolated agrin, when it goes through a process of conformational change. Agrin has been found to induce AChR clusters on cultured muscular tissue. Whether it is responsible for the synthesis or the redistribution of AChRs, as well as which other compounds are responsible for these processes, remains to be determined by further research.
Carry, M.R. and Morita, M. (1984). Structure and Morphogenesis of the Neuromuscular Junction. In The Neuromuscular Junction. R.A. Brumback and J. Gerst, eds. (New York: Futura Publishing Company, Inc.), pp. 25-64.
Drachman, DB. (1994). Myasthenia Gravis. The New England Journal of Medicine 330:1797-1807.
Jennings, C.G.B. and Burden, S.J. (1993). Development of the neuromuscular synapse. Current Opinion in Neurobiology 3: 75-81.
Munsat, T.L. (1993). Neuromuscular disease. Current Opinion in Neurology
Sanders, D.B. (1984). Acquired Myasthenia Gravis. In The Neuromuscular Junction. R.A. Brumback and J. Gerst, eds. (New York: Futura Publishing Company, Inc.), pp. 25-64.
Anna Greka '98 (email@example.com) is a Neurobiology concentrator
living in Lowell House. She has spent the last two years working on RNA
structures in an X-ray crystallography laboratory. This summer she will
be working on the molecular cloning of GABAC receptors in the mouse retina.
Integration of Literature and Neuroscience...And Its Inhibitors
Ode to the Diencephalon
(after A. T. W. Simeons)
How can you be quite so uncouth? After sharing
the same skull for all these millennia, surely
you should have discovered the cortical I is
a compulsive liar.
He has never learned you, it seems, about
or fire or ploughshares or vines or policemen,
that bolting or cringing can seldom earth a
We are dared every day by guilty phobias,
nightmares of missing the bus or being laughed at,
but goose-flesh, the palpitations, the squitters
won’t flabbergast them.
When you could really help us, you don’t. If only,
whenever the trumpet cries men to battle,
you would flash to their muscles the urgent order
ACUTE LUMBAGO!
Referring to W. H. Auden’s “Ode to the Diencephalon” provides me with a cheap defense of my plan for a combined thesis in literature and neuroscience. Casual doubters are usually satisfied with a few lines quoted from the poem: “How can you be quite so uncouth? After sharing the same skull for all these millennia, surely you should have discovered the cortical I is a compulsive liar.” As if the mere mention of neurological jargon in the work of a famous poet justifies the merger of two vastly disparate fields; as if a lyrical quip could merit the emergence of an ambitious new discipline. On the contrary, Auden is addressing neuroscience only to exploit it as metaphor. A few stanzas later, he complains that “When you could really help us, you don’t. If only, whenever the trumpet cries men to battle, you would flash to their muscles the urgent order ACUTE LUMBAGO!” Auden cares little for IA afferents or alpha motor neurons—his issue is purely abstract. Should not our instincts, he asks rhetorically, prevent us from killing each other? The philosophical question employs the neurological term absolutely—it is a one-way incorporation of neuroscience into literature without any degree of reciprocal concern. I mention the poem to defend my plan of study even while recognizing that Auden’s work is non-integrative. He uses neuroscience only as a parasite.
It is far more difficult, I have found, to find an occasion when neuroscience parasitically enters the field of literature. This is, in part, due to the utterly co-optive position to which literature relegates itself, especially in relation to the sciences. The self-description supplied in the Fields of Concentration deliberately places the Literature department at the wobbly crux of interdisciplinary study: “The concentration is designed to provide students with the opportunity to ask a number of fundamental questions. What is literature? What forms does it take in different social and historical contexts? What functions does it serve? What are its relations to other arts and disciplines?” The immediate and striking self-doubt of the first question elasticizes the entire field, while the last question explicitly potentiates an integration with all other studies. The Literature department defines itself by its willingness to co-opt or be co-opted. It suggests itself vaguely, as something defined only by the various perspectives with which it is studied—a protean pursuit whose form manifests specifically in relation to its context. The description of the Biology department dramatically echoes this sentiment but with a bizarrely constrictive postscript: “Some of the most important advances in modern biology have been made by those whose training and creativity have enabled them to synthesize disparate fields toward a new understanding of life processes. Biology is also an interdisciplinary subject. Elements of mathematics, chemistry, and physics are essential for an understanding of life processes and as investigative tools. 
Also, biology is an important component of other disciplines, ranging from medicine, psychology, and anthropology to geology, oceanography, and applied engineering.” Here we see a remarkable affirmation of the malleability of the field and of the merger of “disparate fields,” and then two explanatory sentences which restrict those fields in both the co-optive and the co-opted sense. In terms of the former, the description cites only the core program’s hard “A”-sciences as potentially incorporated into Biology, and in terms of the latter, only the softer narrative “B”-sciences can potentially subsume Biology. In neither case is the possibility of combination with a truly disparate field (like Literature or Philosophy) even considered. This undercutting restriction constitutes the integrative bias by the neurosciences against the humanities.
The integrative bias manifests in any attempted combination of the two disciplines. Such a combination usually involves the objectification of one discipline by another, where the objectified field is incorporated into the subjectified field; the co-optive field objectifies the co-opted. We shall consider both sides of the bias: the relatively easy incorporation of neuroscience into literature, and its more difficult inverse. Looking at the former case, where neuroscience is co-opted by literature, we see two potential methods of fusion: one can either analyze a text which adopts neuroscience as a sort of trope, like Auden, or one can attempt to examine neurological research from the perspective of a literary movement; the integration can therefore be either thematic or theoretical. We have briefly seen in Auden an example of the former method; a critical look at the famous neurological case study of Patient HM exemplifies the latter.
In 1953, an epileptic patient (“HM”) received an experimental operation in which his hippocampus and amygdala were removed from his brain bilaterally. The brain damage did relieve him of the worst seizures, but it also created a host of other problems, including a total anterograde amnesia. Left without the ability to remember anything that happened to him from that day forth, HM immediately became the subject of endless psycho-neurological studies. Jenni Ogden has written a compelling account of her work with HM in which she describes his bizarre condition. “Hours, days, weeks, and years of watching the same stimulus on a computer monitor and pressing buttons in response may not be tedious for him because he has no conscious awareness of ever having done the experiment (or any other experiment) before.” For HM, each experience in his life after the day of his operation is entirely new—he never notices the continual research projects which have comprised the remainder of his life. His special condition was of such scientific significance that an entire literature of HM-studies was created; “he has had more words written about him than any other case in neurological or psychological history.”
For these same reasons, the story of HM can be researched from a literary perspective. In particular, he provides an interesting model for the consideration of nineteenth century Aestheticism, a movement somewhat obsessed with models. For the Aesthetes, originality, depth, and novelty were the ultimate virtues; routine and monotony the principal foes. Nietzsche likens habit to suffocation in his Aestheticist treatise, The Gay Science: “Enduring habits I hate. I feel as if a tyrant had come near me and as if the air I breathe had thickened when events take such a turn that it appears that they will inevitably give rise to enduring habits.” And habits are broken by forgetting—Kierkegaard declares that “a person’s resiliency can actually be measured by his power to forget. He who cannot forget will never amount to much.” From this perspective, HM is an Aestheticist hero. He forgets everything and has no enduring habits. Furthermore, his position as a scientific construction reinforces his interest to the movement. The Aesthetes were also fascinated by notions of construction and experimentation—Oscar Wilde remarks in his most famous Aestheticist work that “it was clear to him that the experimental method was the only method by which one could arrive at any scientific analysis of the passions, and certainly Dorian Gray was a subject made to his hand.” The case study of HM, with his curious amnesia and scientifically constructed persona, could therefore provide the basis for the study of an entire literary movement.
In both formulations of literature incorporating neuroscience, we see that the relationship is indeed parasitic—the implications of the work remain exclusively within the co-opting field. Even working against the grain of the integrative bias—incorporating literature into neuroscience—yields the same result: the significance of the effort is primarily in literature, the objectified field. We see this peculiar, almost paradoxical relationship in the field of neurological linguistics, the primary site for this upstream fusion. More specifically, the work of Roman Jakobson demonstrates the unusual effect. In a 1956 article entitled “Two Aspects of Language and Two Types of Aphasic Disturbances,” the Russian linguist convincingly categorizes aphasias as affecting one of two axes of language, the metaphoric and the metonymic. The study showed the localization of these processes in terms of brain function, as well as their fundamental importance in a new theory of language. He goes on to reapply the idea of metaphoric and metonymic axes to their original sources in the study of literature, associating metaphor generally with poetry and specifically with Romanticism and Symbolism, and metonymy generally with prose and specifically with Realism. Jakobson incorporated literature into what was basically a neuro-linguistic study; from it he created a new perspective on literature and formed the basis for the Russian Formalist and Modern Structuralist schools of literary criticism. Jakobson’s was a classical study of aphasia; like Broca and Wernicke, he created a theory of language by evaluating various impairments. But by bringing a consideration of literature to his work, he was able to bring original insights to both fields.
Still, the impact of Jakobson’s study was much more intensely
felt within literature than within neuroscience. Despite his work to integrate
literature into neuroscience, and not vice-versa, the major significance
of his effort conforms to the integrative bias and adheres to the literary.
Even as the objectified discipline within an interdisciplinary study, the
bias towards literature remains. A dissonance of meaning comes out of these
one-sided efforts. The resulting clang and clamor makes one wonder how
a harmonious integration is possible with these fields. Only in simultaneity
could such a congenial combination occur. That is to say, only a situation
in which both fields co-opt each other at the same time, objectify each
other, and incorporate each other simultaneously can produce the dulcet,
mutualistic fusion of literature and neuroscience.
Fourteen months before he published “Ode to the Diencephalon,” Auden wrote and dedicated another poem to his old friend Oliver Sacks. In this poem, in the work of Sacks, and in the work of Sacks’ model A. R. Luria, we do see an example of this last form of integration, this harmony through synchrony. The poem, entitled “Talking to Myself,” and Luria’s book The Mind of a Mnemonist reverberate with the themes of consciousness and subjectivity from a truly interdisciplinary position. They do so through the specific examination of the surface of that interdiscipline — they till the fertile plane of simultaneous literature and neuroscience, and they create from it. This task requires a double-vision lacking in the one-sided efforts described above.
Luria achieves the double-vision through what he terms in his autobiography the “Romantic Science.” Here he establishes simultaneity: “Romantics in science want neither to split living reality into its elementary components nor to represent the wealth of life’s concrete events in abstract models that lose the properties of the phenomena themselves.” The romantic scientist explores the interior of this dichotomy, avoiding the dogmatism of isolationist neuroscience and expansionist literature. The interior marks the pure interdiscipline in which, as a colleague of his wrote, “Luria the ‘romantic’ narrativist was not only on good terms with Luria the ‘classic’ scientist, but the two of them were working hand in glove...”
In The Mind of a Mnemonist, these two Lurias work together to provide the narrative case-study of S., a man who cannot forget. Perhaps more interesting than his limitless memory was his rare “synesthetic” sensitivity to the world, which mixed sight, smell, taste, and sound into single impressions in his mind. “‘Lines,’ ‘blurs,’ and ‘splashes’ would emerge not only when he heard tones, noises, or voices. Every speech sound immediately summoned up for S. a striking visual image, for it had its own distinct form, color, and taste.” Luria makes it clear that understanding S.’s synesthesia was scientifically important—“from an objective standpoint,” he clarifies—but he also very deliberately connects synesthesia to poetry: “Poets, as we know, are extremely sensitive to the expressive qualities of sounds, and ... this ability was developed to such a degree that [S.] never failed to detect the expressive qualities of sounds.” Luria poetically describes the fantastic images that S. imagined when he heard even dreary daily conversation, and labels him “a dreamer whose fantasies were embodied in images that were all too vivid.” In order to fully understand S. as a valuable case-study in hypertrophied memory, Luria romanticizes him and evaluates him from a literary perspective. We see this further as Luria dwells on the synesthetic experience of literature, “dealing with S.’s responses to imagery, narrative prose, and poetry ... What effect did his figurative, synesthetic thinking have on his grasp of this type of material?” Luria asks. To create a rich portrait of an individual within his Romantic Science, Luria investigates the strange ways in which literature affected S., he examines synesthesia as a potentially poetic mode of experience, and he concludes his study with the judgment that S.
lived in “the world of imagination,” but not that of a “creative imagination.” In this fully integrative and simultaneous text, Luria provides a more profound case-study than would be possible through monotonic scientific discourse, and illuminates literary themes with the unusual light of psychology. The graceful accord between the two fields in his work overcomes integrative bias and yields a new and valuable fusion.
Auden is able to achieve the same effect through more abstract means. “Talking to Myself” does not merely play with neuroscientific jargon, as in “Ode to the Diencephalon.” Instead, the relationship between poetry and neuroscience is integral to the work, dramatized in the poem as the bifurcation of the author into “I” and “You.” This split also analogizes mind and brain as well as conscious and unconscious:
For dreams I, quite irrationally, reproach You.
All I know is that I don’t choose them: if I could,
they would conform to some prosodic discipline,
mean just what they say. Whatever point nocturnal
manias make, as a poet I disapprove.
In this stanza, Auden investigates all three relationships
simultaneously. First we see the problematization of poetry with regard
to neural functioning, phrased by a poetic voice which earlier admits the
indispensability of the “You,” “but for whose neural instructions I could
never acknowledge what is or imagine what is not.” In this sense, the poetic
“I” engages its own limitation and explores it in the context of a poem.
Secondly, the conscious mind identifies dreams as originating in the unconscious
workings of the brain while maintaining the desire for interpretation.
The mind is posited without abandoning a sense of the biochemical processes
which are somehow a part of it. Auden uses poetry to examine fundamental
neurological issues like the mind-body problem; he uses metaphor and polysemy
to provide insight unseen in raw technical discourse.
Auden continues the process by referring to poetry and neuroscience as a couple whose “marriage is a drama, but no stage-play where what is not spoken is not thought: in our theatre all that I cannot syllable You will pronounce.” This last phrase beautifully accounts for the symbiosis of the fields; they are complementary parts of total expression—when one field falls silent, the other can finish the sentence. The effect of this completion is an illuminating unity, a simultaneous integration which allows the dialectics of poetry and neuroscience, mind and brain, conscious and unconscious to be examined with more depth than could possibly be attained with a fragmented perspective.
“Seldom have you been a brother,” Auden writes. The sentiment is accurate. While literature has been quick to adopt neuroscience as, perhaps, a servile child, a notion of true brotherhood has been rare. By allowing the fields to adopt each other, Auden and Luria create this brotherhood of disciplines. Luria sees the integration as a new form of psychology, one “that is capable of dealing with the really vital aspects of human personality.” He looks to the future and predicts that his Romantic Science will play a role. For Auden the stakes are even higher: “I’m scared of our divorce: I’ve seen some horrid ones.”
Auden, W. H. (1976). Collected Poems. Edward Mendelson, ed. (New York: Random House).
Jakobson, Roman and Halle, Morris. (1971). Fundamentals of Language. (2nd rev. edn.) (The Hague: Mouton).
Kierkegaard, Søren. (1987). Either/Or, Part I. Howard V. Hong and Edna H. Hong, eds. (Princeton: Princeton University Press).
Luria, A. R. (1987). The Mind of a Mnemonist. Lynn Solotaroff, tr. Foreword by Jerome Bruner. (Cambridge: Harvard University Press).
Nietzsche, Friedrich. (1984). The Gay Science. Walter Kaufmann. tr. (New York: Vintage Books).
Ogden, Jenni A. and Suzanne Corkin. “Memories of H.M.” Memory Mechanisms: A Tribute to G. V. Goddard. Eds. Wickliffe C. Abraham, Michael C. Corballis, and K. Geoffrey White. Hillsdale: Lawrence Erlbaum Associates, 1991.
Wilde, Oscar. (1985). The Picture of Dorian Gray. (New York: Penguin Classics)
Dan Engber '98 (firstname.lastname@example.org) is a Literature concentrator in Winthrop House. Hailing from New York, Dan has suffered many illnesses over time, but he has never once been a victim of ACUTE LUMBAGO.
A look at the laterality and specificity issues using evidence from neuropsychology
“Prosopagnosia,” which is derived from the Greek words for “face” (prosopon) and “not knowing” (agnosia), is a condition that occurs as a result of an impairment in the medial occipitotemporal cortex of the human brain. Simply put, it results in the loss of ability to recognize and identify familiar faces. Damasio (1982) defines it succinctly in the following manner:
Patients with prosopagnosia only know that a face is a face and name it as such. Also, they can name parts of the face and point to them. Yet they are unable to recognize a given familiar face; i.e. they do not know to whom it belongs and consequently are unable to name it.
The earliest cases of prosopagnosia date from the late nineteenth century; its symptoms were described by Wilbrand in a paper first published in Deutsche Z Nervenheilkd in 1892. However, the term “prosopagnosia” was not coined until 1947, when the German neurologist Joachim Bodamer described the case of a 24-year-old man who had suffered a bullet wound to the head. The man mysteriously lost his ability to recognize his friends and family, and even his own face in the mirror (Szpir 1992). He was, however, able to recognize and identify them through other sensory modalities such as auditory, tactile, and even other visual stimulus patterns (such as gait and physical mannerisms). What he had lost, quite specifically, was his ability to recognize a face.
As of 1992, over 140 cases of prosopagnosia had been reported. Researchers have distinguished prosopagnosia from facial agnosia, since the former applies to familiar faces whereas the latter applies to any face, familiar or not. Interestingly enough, the familiar face does not have to be that of a human; Assal (1984) and Bruyer (1983) both mention cases where recognition of animals’ faces is impaired. Current opinion, however, regards these two impairments as part of the same overall problem (Bruce and Young, 1986).
The field, however, remains riddled with other controversies that are deep-rooted and have extensive histories; the issues are not likely to be settled anytime soon, but recent research has at least made progress towards a resolution. The main questions to be resolved are the following:
1) Is prosopagnosia caused by unilateral or bilateral damage to the cerebral hemispheres?
2) Is prosopagnosia characterized by a failure in episodic memory components or perceptual mechanisms?
3) Is a specific region of the brain allocated to the so-called “face cells”?
This paper will present a number of answers to these questions, and analyze why the controversies still remain heated.
The Studies from Neuropsychology
In 1962, Hecaen and Angelergues published a paper claiming that patients diagnosed with prosopagnosia frequently have impairments in the left visual field, which correlates with damage in the right hemisphere. Twenty-two patients were studied: sixteen were found to have damage in the right hemisphere, four had bilateral damage, and only two had left hemisphere damage (Benton, 1980). Their findings were replicated in a study by De Renzi et al. (1968), who administered face recognition tasks to 114 subjects with unilateral brain damage. De Renzi’s study had initially set out to address two issues: 1) whether face fragments presented alone could still be correctly matched to the whole picture, and 2) whether impairment of facial recognition was primarily a memory or perceptual deficit, as tested in a delayed memory task.
In the face fragment tasks, three experiments were conducted. First, patients were asked to match different parts of faces with the corresponding whole face image. Second, patients were asked to match the profile with the front view. Finally, an immediate memory test was conducted, where the patient was shown a photograph which was promptly removed, and then asked to identify the face he just saw among twelve different faces on display. De Renzi’s experiments confirmed Hecaen’s findings, showing that out of the 114 unilaterally damaged patients, the group that performed the poorest on the face fragment tasks were the nineteen patients with right brain damage and deficits in the corresponding left visual field. Furthermore, the fact that those patients with no damage in the left visual field (yet with damage to the right hemisphere) did better than those with a left visual field deficit led De Renzi and colleagues to the conclusion that recognition impairment of unfamiliar faces was most likely a perceptual deficit. They reasoned that facial recognition impairment in right brain damaged patients was due to the failure to process small subtle differences and to integrate visual cues.
In De Renzi’s second experiment, subjects performed a delayed memory task. The purpose of this experiment was to determine whether impairment of facial recognition was primarily a memory or perceptual deficit. A subject was shown a photograph of a face for a period of time; the picture was then removed from the subject’s view, and the subject was asked to identify the face among a group, either immediately or after a one-minute delay. The reasoning behind the experiment was that if a memory component were crucial for face recognition, then the delayed memory task should produce the greatest impairment. The results, however, showed no significant differences in correct responses between the immediate and delayed conditions. This discredited the claim that the facial recognition impairment associated with right temporal lobe damage was due primarily to memory, and provided further support for De Renzi’s claim that the impairment in prosopagnosia was a perceptual one.
The notion of right hemispheric dominance in facial processing, however, was challenged in a paper by Damasio, Damasio, and Van Hoesen in 1982. In the post-mortem analysis of the patients cited in the original studies by Hecaen and Angelergues, Damasio and colleagues discovered that all of the subjects had bilateral lesions. They claimed that these were “silent” lesions in the medial occipitotemporal regions which did not appear as overt visual defects because they were not detected in the initial clinical neurological tests. As Martha Farah (1990) notes, silent lesions present in the homologous region of the left hemisphere are usually smaller than their right hemisphere counterparts, thus making detection even more difficult. These silent lesions could also produce deficits in recognition and identification (Damasio 1985). Thus, according to Damasio, the tests developed in the early days of neuropsychology were not reliable enough to determine the extent of laterality in prosopagnosia, since posterior cerebral lesions fail to produce visual field defects. In Damasio’s opinion, silent lesions are the most likely cause of prosopagnosia when patients appear to have only unilateral damage. Therefore, it is likely that the damage is bilateral.
Based on his evidence that prosopagnosia requires bilateral lesions, Damasio argues that the deficit in facial recognition is one in which memory processing is paramount (Damasio, 1985). Memory is not specially allocated to the separate hemispheres, whereas perception is. Benton (1990) agrees with this proposition, and offers two hypotheses as to why prosopagnosia is more than just a perceptual deficit. First is the notion that prosopagnosics fail to recognize individuals within a single class while showing no manifest deficit in recognizing the class itself. Second, Benton postulates that perhaps prosopagnosia is a “material-specific defect in memory” (Damasio 1982), for the problem lies in connecting the present facial stimuli with past representations and experiences. Damasio et al. (1982) describe recognition as the “combined evocation of pertinent multimodal memories that permit the experience of familiarity with a given stimulus.” When we cannot recognize an individual, it is the inability to evoke pertinent nonverbal and verbal memories, the inability to give an appropriate response to a stimulus, that fails us.
What could be the possible reasons why the right hemisphere may be more efficient at face processing than the left? There is some physiological evidence that there are more neurons responsive only to face stimuli in the left1 hemisphere of the rhesus macaque (Perrett, Mistlin, and Chitty, 1987). Results from visuospatial tasks administered to monkeys confirm this anatomic left hemisphere bias (Jason, Cowey, and Weiskrantz, 1984). With such findings in monkeys, it is plausible that anatomical correlates exist in humans as well.
Furthermore, Tovee and Tovee (1993) offer a simpler and yet ingenious explanation for why a right hemisphere dominance for face processing exists. Instead of asking why the right hemisphere has special face processing abilities, they approach the problem from the viewpoint of why the left hemisphere could not have the same abilities. Given that the left hemisphere has long been established as subserving language, it seems logical that the development of these language areas has restricted the cerebral space available in the left hemisphere for other specializations. Therefore, the language areas in the left temporal cortex are approximately mirrored in the right by the face processing units. This theory is particularly interesting because it places language processing and facial recognition on approximately equal levels of cognitive importance, offering new ideas as to how these faculties first developed.
The currently most popular theory of face processing is that proposed by Bruce and Young (1986). They propose three stages of facial processing: the structural encoding stage, the physiognomic invariant stage, and the familiarity decision stage. In the first stage, the raw visual data enters the striate cortex and gets processed as a face. Basic features of the image, such as orientation, color, and position, are processed to give the image of a face, i.e. “what am I looking at?” In the second stage, the question of whether the face has a physiognomic invariant is asked. This means that we ask whether or not the face is familiar. What makes the face unique to the individual is perceived at this stage, e.g. generic class versus individual. In the third stage, the familiar face is processed with all the episodic, semantic, and emotional memories, i.e. the biographical data necessary to identify the person. Prosopagnosia can arise as a result of damage at any one of these stages. Some people may not be able to recognize faces as faces; others cannot recognize familiar faces (the most traditional notion of prosopagnosia); still others can admit that a person seems familiar yet, strangely enough, know nothing about that person, since the “road” accessing that particular memory in the visual association pathway has been destroyed.
The Studies from Neurophysiology
Nowadays, we have a good idea of how the visual system works, due largely to the work of Hubel and Wiesel beginning in the 1950s. In 1959, Hubel and Wiesel reported in the Journal of Physiology their data from single cell recordings in the cat striate cortex, which revealed the presence of neurons with specific response properties to visual stimuli such as bars or edges (Perrett and Rolls, 1983). Their model of visual processing posits a hierarchy of cells sensitive to more and more complex stimuli the further one goes along the striate cortex pathway.
Early on in the visual pathway, simple cells in the occipital cortex resolve an image based on simple stimuli, such as bars or slits of light, whereas further along in the pathway, more complex cells respond to more sophisticated stimulus patterns, such as bars of particular length or orientation (Perrett and Rolls 1983). At the level of the visual association cortices, then, the image gets resolved into finer and finer components that have a more precise and accurate representation of the visual stimulus.
Once visual information reaches the primary visual cortex, it is processed further along two pathways (Bloom, 1988). One pathway deals with determining “where” something is in the visual space, and it travels along the superior visual association cortices, i.e. the occipitoparietal region (Damasio, 1985). The other pathway deals with “what” something is, and it is involved with the recognition and identification of visual information. This second pathway is the one primarily involved with prosopagnosia, and it is located along the occipitotemporal region.
In terms of Bruce and Young’s visual processing model, the superior temporal sulcus located in the occipitotemporal lobe represents the final physiological correlate of Stage 2, where physiognomic invariants are assessed and the determination of familiarity (i.e. is this a familiar face?) is made. Correlates of Stage 2 of Bruce and Young’s model are seen not only at the neuronal level, but also at the level of brain areas. Justine Sergent (1992) conducted tests of facial recognition while using positron emission tomography (PET) and magnetic resonance imaging (MRI) to show evidence of specific brain areas that perform facial recognition processing. When Sergent compared the PET scans of prosopagnosics with those of normal patients, she was able to locate three prominent regions in the occipitotemporal lobe that “lit up” when processing a familiar face but did not light up in the PET scans of prosopagnosics. She concluded that the three regions could represent anatomical correlates of Bruce and Young’s model. One region in the posterior occipital region was thought to be involved with the perceptual operations of extracting unique facial features. Slightly anterior to the first region is another region involved in making connections between the face and biographical information. Slightly anterior to the second region, located in the temporal lobe, is the third region, which acts like a file cabinet containing all the biographical data (Szpir, 1992). These three regions correlated with the perceptual operations stage of visual grouping (Perrett et al., 1987), the structural encoding stage, and the biographical memory stage (Bruce and Young, 1986).
The fact that there are clusters of “face” cells in various brain components along the occipitotemporal pathway indicates that instead of adopting the previously popular idea of a specific “face” region in the brain, it would be more appropriate to assign a specific face “pathway” which is a distinct part of the overall visual processing pathway. But as Perrett (1988) sternly warns, we do not in any sense have proof of a brain area exclusive to facial processing, only of a facial processing subsystem within certain brain areas.
Conclusion and Remarks
Converging evidence suggests that there may be a face “pathway” in the occipitotemporal region. Controversy remains, however, as to the nature of the impairment in prosopagnosic patients and the issue of laterality.
Prosopagnosia: A Deficit in Perception or Memory?
Damasio cites evidence that patients with prosopagnosia have deficits for other complex visual stimuli. For example, some patients are unable to recognize familiar animals or buildings. However, there is contrary evidence to this notion. Perrett et al. (1988) cite evidence provided by Assal (1984) and Bruyer (1983), whose patients were able to recognize familiar complex stimuli, further supporting the notion that prosopagnosia is not solely associated with damage to the third stage of Bruce’s model, i.e. the biographical memory retrieval components. Prosopagnosia, according to Damasio (1985), is the impairment of the recognition process at the stage where specific identification of a member within a group is required. As the following evidence shows, in some cases Damasio’s contention does not hold.
Assal’s patient was a farmer who could individually discriminate his own cows, but was not capable of distinguishing his own friends. Bruyer’s patient, also a farmer, had the exact reverse of this condition: he was able to differentiate among and correctly identify his friends, but not his own cows. Does this mean that we can infer a certain “cow” pathway? The answer is both yes and no. To the farmer, the face of the cow has a specific biographical representation in the association cortices. Thus the cow’s face was also part of the general “face pathway” because of the associations built with it. For a person who does not associate with cows every day, there would be no significant representation stored in the association cortices to elicit a response upon the sight of a cow.
There is record of another patient, C.K., as described by Gurd and Marshall (1992), who can recognize famous faces and match unfamiliar faces, but cannot distinguish between a feather duster and a dart. These are cases where prosopagnosia occurs with visual object agnosia. Therefore, these findings seem contrary to Damasio’s claim that prosopagnosia is not a perceptual problem but a problem with integrating visual information with contextual (episodic) memory. Rather, it seems to be a deficit where both perception and memory are compromised in different proportions, depending on the specific case involved.
One may argue that prosopagnosia is primarily an impairment in the biographical memory stage, and Damasio strengthens this case with the claim that the perceptual mechanisms in prosopagnosics appear normal (Damasio, 1985). As he showed, two of his patients with prosopagnosia were able to discriminate complex visual stimuli such as an owl or an elephant, whereas mistaken identities were made between rather simple visual stimuli such as an exclamation point or a dollar sign. The fact that the complexity of the stimuli had little to do with the identification errors supports his view that the deficit is one in which a generic class can be recognized but individuals within it cannot be distinguished. But relevant studies done by Perrett et al. show that cells in the superior temporal sulcus are definitely sensitive to changes in orientation and profiles of faces, whereas emotion is not a critical variable. Thus we can infer that these cells are most likely involved in visual analysis and not other higher processing schemes. Why is this finding so important to the perceptual argument? The specificity of these cells would be unremarkable were they located in the striate cortex, for there complex visual cells are expected to respond to color, orientation, and position; cells in the striate cortex would thus pose no argument against Damasio’s claim. That these face-selective cells lie instead in the superior temporal sulcus is what gives the perceptual argument its force.
According to the model of a face pathway, prosopagnosia, depending on the nature of the damage, can involve facial recognition impairments in general, and not just the inability to recognize familiar faces. The bilateralist’s view of prosopagnosia as mainly a biographical information retrieval and integration problem seems too confined because it only describes the last stage of facial processing.
Prosopagnosia and Laterality
The laterality issue still attracts conflicting arguments, namely over whether bilateral lesions are required for complete prosopagnosia, as opposed to unilateral lesions producing certain aspects of prosopagnosia. De Renzi (1994) claimed that in a PET study of three patients, all with unilateral right hemisphere damage, prosopagnosia was manifest. Perrett’s finding of the asymmetrical distribution of face cells, with the left hemisphere containing more such units in the macaque, suggests that facial processing relies more on one hemisphere (the left for macaques, with the implication that it is the right for humans) rather than equally on both hemispheres.
After all these arguments made by different researchers, we ask ourselves again why we even care about the laterality issue in prosopagnosia. The reason is that because the brain is asymmetric, it is possible that its hemispheres work via different mechanisms, even though the end result may be the same. So, in our case, a possible resolution to the laterality controversy is to state that both hemispheres play a role in facial processing, but the mechanisms they utilize allow for more efficient processing in one hemisphere than the other. Perrett’s finding of an asymmetric distribution of face cells may explain why Damasio found larger left hemisphere lesions and yet the right hemisphere plays the key role in face recognition, since it is the right hemisphere that has a greater proportion of specific face cells. Perrett et al. (1988) propose that monkeys may process faces in piecemeal fashion, dissecting individual features of the face and encoding that information to arrive at an identity. For humans, this would correspond to the approach taken by the left hemisphere. However, as studies show (Perrett and Rolls, 1983; Perrett et al., 1988; Tovee and Tovee, 1993), the right hemisphere in humans is specialized for structural, configurational processing and thus processes complex visual stimuli much more efficiently than it would by the piecemeal method. This distinction between right hemisphere configurational processing and left hemisphere piecemeal processing has also been proposed by cognitive neuroscientists (e.g. Carey and Diamond, 1977; Yin, 1969; Hillger and Koenig, 1991). Damasio (1982) states that template formation of a visual image and template matching may be more global in the right hemisphere, and that these differences in storage and retrieval between the left and right hemispheres may be biologically advantageous.
The rare instances in which individuals with unilateral damage have complete prosopagnosia may be due to individual differences in the relative strength of the hemispheres. Sergent (1994) offers perhaps the best conclusion on laterality and facial processing: facial processing involves “a bilateral, yet asymmetric, involvement of the cerebral hemispheres.”
What we have established to date is the following:
1) Prosopagnosia can and should be seen as both a perceptual and a memory deficit, depending on the stage of visual processing at which the lesion occurs.
2) The right hemisphere is the more efficient at face processing, but for a complete loss of facial processing to occur, a bilateral lesion is required.
3) The notion of face-specific cells may be substantiated, but prosopagnosia is not limited to just human faces; rather, it is the individual associations that trigger the appropriate responses.
4) The notion of a face pathway more accurately describes the broader framework of prosopagnosia and visual associative agnosia, for it is not the specific regions within certain pockets of brain areas that are selective for faces, but a clear and visible path (as seen in latency time studies) along the entire occipitotemporal region.
Assal, G., Faure, C., and Andreas, J.P. (1984). Revue Neurologique (Paris) 140: 580-584.
Benton, A. (1980). The neuropsychology of facial recognition. American Psychologist 35(2): 176-186.
Bloom, F.E., and Lazerson, A. (1988). Brain, mind, and behavior. (New York: W.H. Freeman & Company).
Bruce, V., and Young, A. (1986). Understanding facial recognition. British Journal of Psychology 77: 305-327.
Damasio, A.R. (1985). Prosopagnosia. Trends in Neuroscience 8: 132-135.
Damasio, A.R., Damasio, H., and Van Hoesen, G.W. (1982). Prosopagnosia: anatomical basis and neurobehavioral mechanism. Neurology 32: 331-341.
De Renzi, E., Faglioni, P., and Spinnler, H. (1968). The performance of patients with unilateral brain damage on facial recognition tasks. Cortex 26: 18-33.
De Renzi, E., Perani, D., Carlesimo, A., Silveri, M.C., and Fazio, F. (1994). Prosopagnosia can be associated with damage confined to the right hemisphere: an MRI and PET study and a review of the literature. Neuropsychologia 32(8): 893-902.
Gurd, J.M., and Marshall, J.C. (1992). Drawing upon the mind’s eye. Nature 359: 590-591.
Hecaen, H. and Angelergues, R. (1962). Agnosia for faces. Arch. Neurol. 7: 92-100.
Hasselmo, M.E., Rolls, E.T., and Baylis, G.C. (1989). The role of expression and identity in the face-selective responses of neurons in the temporal visual cortex of the monkey. Behavioural Brain Research, 32: 203-218.
Jason, G.W., Cowey, A., and Weiskrantz, L. (1984). Hemispheric asymmetry for a visuospatial task in monkeys. Neuropsychologia 22: 777-784.
Leonard, C.M., Rolls, E.T., Wilson, F.A.W., and Baylis, G.C. (1985). Neurons in the amygdala of the monkey with responses selective for faces. Behavioural Brain Research 15: 159-176.
Meadows, J.C. (1974). The anatomical basis of prosopagnosia. J. Neurol, Neurosurg. Psychiat. 37: 489-501.
Perrett, D.I., Mistlin, A.J., and Chitty, A.J. (1987). Visual neurones responsive to faces. Trends in Neurosciences 10(9): 358-364.
Sergent, J. (1995). Hemispheric contribution to face processing: patterns of convergence and divergence. In Brain Asymmetry. R.J. Davidson and J. Hugdahl, eds. (Cambridge: MIT Press), pp.157-182.
Szpir, M. (1992). Accustomed to your face. American Scientist 80: 537-539.
Tovee, M.J. and Cohen-Tovee, E.M. (1993). The neural substrates of face processing models: a review. Cognitive Neuropsychology 10(6): 505-528.
Whiteley, A.M. and Warrington, E.K. (1977). Prosopagnosia: a clinical, psychological, and anatomical study of three patients. J. Neurol. Neurosurg. Psychiat. 40: 394-430.
1 In rhesus macaques, the hemispheric specializations appear to be reversed relative to humans. Thus in humans, face processing seems to involve a strong right hemisphere component, as do spatial and configurational judgments in general (Perrett et al. 1988). This finding should not be surprising, for hemispheric asymmetries evolved to specialize in different and specific types of information processing, and there is no dogmatic reason why one hemisphere should be favored over the other. For a more detailed account of the evolutionary basis for cerebral lateralization, see Hiscock and Kinsbourne’s article in Brain Asymmetry (1994).
Mike Takamura '97 (email@example.com) is a Winthrop House resident concentrating in Cognitive Neuroscience.
Psychologizing of Religion in William James' Varieties of Religious Experience
In the Varieties of Religious Experience, American psychologist and philosopher William James (1842-1910) embarked on a uniquely empirical treatment of religious phenomena. Presented as the Gifford Lectures of 1901-1902 at Edinburgh, Varieties addressed religion’s usefulness in human experience. Amidst debates on the validity of religion in a world of physiology and philosophy, James attempted to prove the validity of religious experience for the individual and to create a new “Science of Religion” to compile and interpret those experiences empirically. Shunning a priori arguments for the Absolute and yet remaining open to the reality of supernatural external forces, his method lay in compiling religious experiences as “facts” and then unifying them in psychological theory. Contemporary theories of the subconscious informed James’ psychological approach. While he drew upon Janet and Binet’s work on the subconscious for scientific grounding, the “Subliminal Self” of the British theorist Frederic Myers provided James with a ladder with which to climb from abnormal experience to religious insight. Reflecting on his scientific analysis of religious experience, James concluded his lectures with a look at the broader personal and social implications of taking an empirical approach to a topic which had previously confounded science.
James lived in an era permeated by science. As the designated authority on what to believe, science wielded a tremendous amount of power and influence over popular opinion. Rather than trusting their instincts or emotions, people were encouraged to base their beliefs on scientific fact established by thorough experiment. According to John Herman Randall, a contemporary of James,
... we are living in a scientific age, which seeks to base all its conclusions not on conjecture or faith, but on facts, just so far as these can be discovered. And the demand of such an age, in religion as well as everywhere else, is for certainty, not for mere pious conjecture or an only imaginary hope. If we are to believe today, the demand is for clear evidence that our beliefs are founded on facts, not on delusions. (Randall, 2)
For James and other contemporary intellectuals, science was the only instrument through which their beliefs, as well as their theories, could be validated. Thus, James approached Varieties as a scientific treatise addressing traditionally unscientific phenomena.
Unfortunately for James, empirical science was not willing to address such realms of phenomena. In his essay “The Hidden Self,” James asserted that the goal of science was a “closed and completed system of truth” (Allen, 90). In other words, any phenomenon which did not fit within the realm of “clean” or reductionist science was not to be admitted as evidence. In his tribute to Frederic Myers, James again described the problem, this time distinguishing between the “classic-academic” and the “romantic” investigator. He attributed the former tendency to all psychology up to the time of Myers, which had a “fondness for clean pure lines and noble simplicity in its constructions.” According to James, those “clean” conceptions of nature “missed the native quality of existence” (Perry, 156). The romantic, such as Myers, however, searched beyond the pretty picture to the “fantastic, ignoble, hardly human, or frankly non-human” psychological phenomena (Memories, 148-149). Although ignored and even despised by science, these sketchy phenomena of the “Unclassified Residuum” gave much hope to James (Allen, 90).
James had always entertained a sympathy for the speculative or despised realms of the “Unclassified Residuum.” He actually enjoyed playing big brother to neglected or detested realms of knowledge, and he often served as the equalizer in conflicts where science was seen as the aggressor. Referring to adherents of science narrowly defined, James lamented, “With these persons it is forever Science against Philosophy, Science against Metaphysics, Science against Religion, Science against Poetry, Science against Sentiment, Science against all that makes life worth living” (Perry, 30-31; 155-156).
Having amassed a store of descriptive phenomena as prescribed by science, James then turned to the assertion of their authenticity. Even if all these religious phenomena occurred, how could science be sure that these were not fraudulent imitations ingeniously created by overzealous mystics? Admittedly, religious accounts were not always reliable. The religious witness tended to shape the experience around the ideal which he or she considered the most significant. Still, James could speak confidently about these ideal accounts because experiences all pointed to the ideal (Perry, 338). Furthermore, James argued, what right had science to question literal truth, when so many scientists themselves compromised authenticity for the “larger truth?” For example, at public lectures a scientist would contrive an experiment which would otherwise fail in order to illustrate a truth about nature; yet, the scientist’s honesty was not questioned for such an imitation of nature (Memories 181). James thus attempted to establish the factual truth of religious experiences. This constituted the “existential judgment” which James asserted an investigator must make, that is, questions of historical fact concerning the nature, origin, and history of the phenomena. “Spiritual judgment,” which involved a determination of value and questions of meaning and significance, was yet to be made (Varieties 13).
James attempted to defend mystical and psychic experiences from attacks against their unromantic and ignoble nature. According to James, most people naturally associated exalted origins with divine importance, which made spiritual value dependent upon high causes. By asserting that a hallucination was the effect of the divine on the brain, supernatural importance was attached to the hallucination itself. For James, however, this contingent relationship was not necessary. In fact, a group of skeptics under the head of medical materialists employed an analogous argument to deny the validity of religious experiences. Rather than invoking the divine meaning from the origins of religion, the medical materialists dismissed spiritual encounters by associating them with malfunctions of the physical body. What the religious subject felt as a divine encounter was considered no more than a pathological manifestation. Thus, St. Paul’s revelation on the road to Damascus was invalidated as an epileptic fit. James inverted this argument back upon the medical materialists and asserted that scientific theories were just as much a manifestation of organic tendencies as religious fervor. Hence, unless a value were given to the physiology itself, the value of the experience could not be determined by causation (Varieties 19-24). James safely assumed that no scientist would assert that a certain physiological function had value — that thin blood was a sign of religious truth or that a stomach ache represented bad science.
James established this basis of judgment from the outset of Varieties in order to open the minds of his audience to the lessons to be learned from the morbid and pathological states which he was about to present. The displeasing nature of mystical and religious experiences should not condemn their study to morbid speculation: “To reject [such research] for its unromantic character is like rejecting bacteriology because penicillium glaucum grows on horse-dung and bacterium termo lives in putrefaction” (Memories, 186). He attributed extreme religious experiences to psychological pathology, but he did not want to disprove the validity or diminish the value of them for his audience merely because of their origin. James chose to study extreme cases of religious feelings and mystical encounters precisely because of their similarity to pathological experiences. In the tradition of empirical medicine, he planned to use abnormalities to help explain the meaningfulness of “normal” mental life. In fact, he argued that all of us exhibit some neurotic tendencies and that “a life healthy on the whole must have some morbid elements” (Taylor, 15).
To further relax his audience’s anxiety at such morbid phenomena, James attributed the quality of genius, in both religious and secular realms, to neurosis. Such abnormal tendencies were necessary for a person to act on his religious or otherwise creative visions. While a normal person asked, “What shall I think of it?”, the neurotic asked, “What must I do about it?” (Varieties, 29). It was this active response to the religious experience which made the pathological cases so instructive. By arguing thus, James hoped to convince his audience that they should not judge religious phenomena by their origin or by their morbid nature. If they could not judge experiences by their origin, then how could they judge them? James’ general pragmatic answer to this was, “By their fruits ye shall know them, not by their roots” (Varieties, 26).
Psychology of the Subliminal
As James wrote in his article “The Hidden Self,” the French psychologists studying the subconscious provided much of the content of his psychology. French psychologists such as Pierre Janet and Alfred Binet were leading the way in studying “abnormal personal peculiarities” such as hysteria, automatism, and other psychic phenomena. He described the theory of Janet as presented in De l’Automatisme Psychologique (Binet had independently arrived at a similar theory). According to Janet, hysteria was caused by a contraction of the field of consciousness. Unlike the person with normal consciousness, the hysteric could absorb only so much into his or her consciousness. Because of this monoideism or smaller field of consciousness, the hysteric was forced to block out parts of normal consciousness. This explained symptoms such as hysteric blindness or anesthesia in certain parts of the body. Janet compared the state of hysteria to that of a normal person during hypnosis, in which the consciousness was allowed to focus on only a narrowed part of the usual field (Allen, 93-94).
An important discovery for James was the multiple personalities of some of Janet’s patients. James described Janet’s discovery of three personalities in one subject named Lucie. Through hypnotic techniques, Janet was able to induce a trance in which a different personality or consciousness arose. The new personality was called Lucie 2 (by Janet), and although she was fully aware of Lucie 1, she denied that they were the same person. Upon further hypnosis, Janet discovered Lucie 3, a third personality who again was aware of both previous personalities. While each successive personality was aware of the other ones, the earlier ones had no knowledge of the existence of the later personalities. Lucie thus displayed three distinct personalities or consciousnesses (Allen, 94-96).
For James and the question of religion, the most significant aspect of Janet’s studies was not that patients exhibited successive personalities, but that they exhibited them at the same time. Such coexisting but independent personalities were demonstrated by such techniques as hypnotic suggestion and automatic writing. For instance, Janet would place a pencil in the subject’s hand and then distract him or her by some activity such as conversation. When a voice whispered a question in the subject’s ear, the subject’s other consciousness would spontaneously guide the hand to write the answer, while the primary consciousness remained preoccupied with the conversation. Janet claimed that in some people, consciousness was split into two or more entities, each of which ignored and complemented the sensibilities of the other (Allen, 99-100).
Janet attributed the split consciousness to mental or moral weakness. Possibly a hereditary trait, this weakness became apparent only when a traumatic event or events affected the brain. When the brain’s capacity to maintain focus was undermined, waking consciousness was split, allowing subconscious elements to displace normally waking ones. This opening of the subconscious accounted for the greater influence of suggestion and hypnosis in these persons (Taylor, 22). For Janet, the different consciousnesses added up to no more than one normal consciousness. James, however, took the theory a step further. He referred to cases in which the secondary consciousness transcended the possible normal consciousness (Allen, 108). James turned to the Subliminal Self of Frederic Myers to account for this supernormal consciousness.
Frederic Myers, a Victorian psychologist, classicist historian, and psychical researcher, had developed the concept of the Subliminal Self in his quest to give a scientific basis for immortality. Like Wordsworth, he believed that nature revealed a window to the spiritual world (Turner, 118). While Wordsworth expressed his spiritual attitude toward nature in his poetry, Myers turned to psychical research to collect the facts of nature concerning the spiritual. He was a founder of the Society for Psychical Research (SPR) in England. In an address to the SPR in 1901, James paid tribute to the studies of Myers. James noted how Myers had collected facts through the SPR, looked at all of them arranged in series, and made hypothetical connections where it was necessary (showing again James’ preoccupation with collection of facts as a scientific procedure).
The theory which Myers postulated to connect the facts was centered around the Subliminal Self. He identified the supraliminal or empirical consciousness as a part of consciousness evolved to deal with the natural environment. It was, however, not the only part. Other regions of consciousness allowed humans to be active in other realms of being; these other regions constituted the Subliminal Self. Myers explained hallucinations and active impulses as sensory and motor automatisms. Automatisms, to him, were merely symbolic messages from the subliminal to the supraliminal consciousness (Allen, 157-160). The Subliminal Self designated all the aspects above and below the supraliminal consciousness, including “the disintegrative streams of consciousness that were manifested in hysteria, the personality absorbing cosmic metetherial energy in sleep, and the personality rising to new spiritual awareness in ecstasy and sleep” (Turner, 124). This was the key step in James’ climb to the supernatural, for the subliminal consciousness accounted not only for those regions below, but also those above the supraliminal. Myers pointedly disavowed the view that “any perturbation of the ordinary personality is necessarily in itself an evil” (Myers, 6). In the phrase “above the supraliminal,” Myers included the passion of the artist and the inspiration of the scientist, attributing creative and artistic imagination to a “subliminal inrush of creative energy” (Turner, 130). More importantly to James, the subliminal consciousness also included the mystical realms of supernormal knowledge.
This positive aspect of the Subliminal Self was the distinguishing trait of Myers’ psychology, for others had already conceived of a subconscious realm whence mental disease originated. Myers asserted that “hidden in the deep of our being is a rubbish-heap as well as a treasure-house; — degenerations and insanities as well as beginnings of higher development” (Turner, 122-130). This was the bridge which James needed to cross from the ostensibly morbid nature of religious experience to glorified spiritualism. Myers’ theory parried the attacks of the medical materialists on the origin of the phenomena in question: “... In so far as they have to use the same organism, with its preformed avenues of expression — what may be very different strata of the Subliminal are condemned in advance to manifest themselves in similar ways” (Allen, 160). Thus, supernormal knowledge, mania, drivel, and deception all came to the surface through the same channels. This was the reason why mystical experiences were viewed as pathological. Such revelations, however, were not to be discounted merely because they were forced to emanate through the same channels as mental disease.
Religious Experiences as Subliminal Emanations
James indicated a division in the self as the underlying animus for the religious experience. The self was divided in a conflict between its higher and lower wishes. During the struggle for unification of these impulses, the person experienced the most intense grief and anguish. Not merely a battle between physical and spiritual impulses, the struggle also consisted of conflicts between multiple drives (Varieties, 156-158). The varying processes of unification brought characteristic sorts of relief. Although this resolution was a general psychological process, James focused on the religious form, which he termed conversion (Varieties, 165-176).
James introduced the concept of a “centre of personal energy” to help explain the process of conversion. According to this idea, every person, depending on the time and situation, focused his or her energy on a specific portion of consciousness. Whenever this centre of personal energy changed, the person underwent a transformation. This transformation was not necessarily religious in nature, but when it was, it constituted a conversion:
An athlete ... sometimes awakens suddenly to an understanding of the fine points of the game and to a real enjoyment of it, just as the convert awakens to an appreciation of religion. If he keeps on engaging in the sport, there may come a day when all at once the game begins to play itself through him — when he loses himself in some great contest. In the same way, a musician may suddenly reach a point at which pleasure in the technique of the art entirely falls away, and in some moment of inspiration he becomes the instrument through which music flows ... so it is with the religious experience of these persons we are studying. (Varieties, 192)
At this point, James expounded his psychological explanation. In such cases of transformation, the door to the subconscious had finally been opened. The tension and angst had reached the maximum capacity of the mind. Thus, the brain was no longer able to hold the supraliminal together to the exclusion of the subliminal; the breaks in the “accidental fences” had been found: “When the centre of personal energy has been subconsciously incubated so long as to be just ready to open into flower, ‘hands off’ is the only word for us, it must burst forth unaided!” (Varieties, 195). While Christianity attributed this self-surrender to the descent of an external supernatural force, psychology attributed it to subconscious impulses from within reaching normal consciousness.
According to his psychological explanations, those who experienced religious conversion were those who were more prone to opening the door to the subconscious. He even cited a study by Professor George A. Coe which showed that converts were prone to automatism and passivity while non-converts were more spontaneous and self-suggestive, implying that the non-converts were denied transformation by their spontaneous assertion of the impossibility of conversion (Varieties, 221-222). Despite this scientific explanation of religious conversion, James made a point not to exclude the possibility of divine influence: “It is conceivable that if there be higher spiritual agencies that can directly touch us, the psychological condition of their doing so might be our possession of a subconscious region which alone should yield access to them” (Varieties, 223). In this admission, James hinted at his sympathetic view toward the epistemological validity of religious experiences and the existence of a supernatural force.
James summarized religious experience by describing its two universal aspects. First, something was innately wrong with the subject. Second, only connecting with a higher power resolved this wrongness. James then hypothesized that the “more,” the higher power with which the religious subject connected beyond normal consciousness, was the “subconscious continuation of our conscious life.” Thus, there really was a higher power — it just happened to be part of ourselves. The “higher” control was exerted by the “higher faculties of our own mind;” thus, the sense of union with a higher power was literally true. From this truth individuals diverged into what James termed personal over-beliefs, the specific forms in which the truth became manifest, depending on their religious needs. These various over-beliefs, though they might negate each other, were indispensable for the individual (Varieties, 458-461).
In the final pages of Varieties, James finally revealed his own over-beliefs concerning religion. Given the truth of the union with a higher power, he designated that higher power supernatural. According to James there was a distinct supernatural connection to the subliminal consciousness. While our personalities mingled through our supraliminal consciousnesses in the physical world, we were also joined in the underlying “mother-sea” of subliminal consciousness:
Out of my experience, such as it is (and it is limited enough) one fixed conclusion dogmatically emerges, and that is this, that we with our lives are like islands in the sea, or like trees in the forest. The maple and the pine may whisper to each other with their leaves, and Conanicut and Newport hear each other’s foghorns. But the trees also commingle their roots in the darkness underground, and the islands also hang together through the ocean’s bottom. Just so there is a continuum of cosmic consciousness, against which our individuality builds but accidental fences, and into which our several minds plunge as into a mother-sea or reservoir. (Memories, 204)
It was the weaknesses in the fences which allowed the subliminal to manifest itself in the supraliminal world, “leaking in” on the natural world. Because the ideal (the religious sentiments from the subliminal realm) was given to influencing the real (the resulting conduct of the individual), the ideal was a definite part of reality (Varieties, 462-469). By exerting a force in the actual world, the supernatural became a reality itself. Thus, James’ psychological perspective on religious experience allowed him to develop a “piece-meal” supernaturalism, in which the ideal and the real were intertwined.
When James wrote Varieties at the turn of the century, science had recently gained dominance in popular as well as intellectual culture. For many, the cold, heartless assumptions of science shook the spiritual foundations of their lives. Was life reducible to physico-chemical mechanisms? Were humans mere automatons of nature? By collecting data on religious experiences and explaining them through psychological theory, James brought religion within the realm of science. Rather than ignoring the phenomena as “unclean,” James depended on science to give validity to the religious experiences. Through this empirical approach, James sought to give religious experiences power as guides for life — guides validated, not through faith, but through science.
The unique scientific approach of William James to religion provides an instructive comparison for the modern neuroscientist. Given that the latest research continues to reduce the functions of the brain to neurobiological mechanisms, one may easily assume the position that the wonders of the mind can likewise be reduced to mere mechanisms. Obviously neurobiological and psychological research are a crucial part of modern medicine and science, but, as responsible scientists, researchers must keep abreast of the larger implications of their work. They must maintain an open mind and continually challenge their assumptions. Especially as modern brain scientists continue to elucidate the higher functions of the brain, they should, like James, maintain an awareness of the larger implications of their work in a culture which places so much trust in science and so much hope in faith.
Allen, Gay Wilson, ed. (1971). A William James Reader. (Boston: Houghton Mifflin).
James, William. (1911). Memories and Studies. (New York: Longmans, Green, and Co.).
James, William. (1987). Varieties of Religious Experience. In William James: Writings 1902-1910. (New York: Library of America).
Myers, Frederic W. H. (1976). The Subliminal Consciousness [1893-1894]. In Proceedings of the Society for Psychical Research. (New York: Arno Press), pp. 2-25.
Perry, Ralph Barton. (1935). The Thought and Character of William James. Vol. 2 of 2. (Boston: Little, Brown, and Co.).
Randall, John Herman. (1921). William James: The Philosopher. In The New Light on Immortality. (New York: Macmillan), pp. 33-50.
Taylor, Eugene. (1982). William James on Exceptional Mental States: The 1896 Lowell Lectures. (New York: Charles Scribner’s Sons).
Turner, Frank Miller. (1974). Frederic W. H. Myers: The Quest for the Immortal Part. In Between Science and Religion. (New Haven, CT: Yale University Press), pp. 104-133.
Tracey A. Cho '97 (firstname.lastname@example.org), a resident of Cabot House, is concentrating in History and Science, with a focus on neuroscience and modern European history. He is currently doing research on medulloblastoma, a childhood brain tumor, under Scott Pomeroy at Boston Children’s Hospital.
Insights into Infantile Autism
Infantile autism is a developmental disorder characterized by deficient communication, socialization, and imagination. Kanner first described the disorder in his 1943 paper entitled “Autistic disturbances of affective contact.” Kanner placed emphasis on the autistic child’s inability to relate to other people:
I was struck by the uniqueness of the peculiarities which Donald exhibited. He could, since the age of two and a half years, tell the names of presidents and vice presidents, recite the letters of the alphabet forwards and backwards, and flawlessly, with good enunciation, rattle off the Twenty-Third Psalm. Yet he was unable to carry on an ordinary conversation. He was out of contact with people, while he could handle objects skillfully. His memory was phenomenal. The few times when he addressed someone - largely to satisfy his wants - he referred to himself as “You” and to the person as “I”. He did not respond to any intelligence tests but manipulated intricate form boards adroitly. (Kanner, 1943)
Kanner identified the following characteristic features of these children: extreme autistic aloneness, anxiously obsessive desire for the preservation of sameness, excellent rote memory, delayed echolalia, oversensitivity to stimuli, limitation in the variety of spontaneous activity, good cognitive potentialities, and highly intelligent families. He considered these findings “a unique syndrome not heretofore reported.” Almost simultaneously, in 1944, Asperger published a dissertation on “autistic psychopathy” in children he had treated in Vienna. Asperger described a high-functioning form of autism, in which mental retardation was not prominent. Remarkably, Kanner and Asperger independently chose the term “autistic” as a label for their patients. They borrowed the term autism from Bleuler, who used it to describe schizophrenic patients’ withdrawal from social interaction. (In Greek, “autos” means “self.”) This choice reflects their independent observation that the impaired child’s social problems were the most striking feature of the disorder. It has been suggested that Kanner’s syndrome may represent either “classical” autism or the prime mode of presentation of a number of related disorders. Kanner’s syndrome and Asperger’s syndrome may occupy positions on a continuum of cognitive deficits characterized by impairment in reciprocal social interaction (Ciaranello and Ciaranello, 1995).
There is still great interest in understanding the neurobiology of infantile autism. The purpose of this paper is to review work done in the study of autism from the biologic viewpoint.
The Clinical Picture of Autism
Autism occurs in about 5 per 10,000 live births (Folstein and Piven, 1991). It varies in its clinical presentation, depending on both the severity and the extent of the underlying brain dysfunction. Severely autistic children may be retarded, mute, profoundly withdrawn, and preoccupied with repetitive activities. They often exhibit motor stereotypies such as hand-flapping, rocking, or head-banging. More mildly affected children may show deficits in social relatedness, yet they may have normal or even superior intelligence, with well-developed language skills. These children may be given the diagnosis of Asperger’s syndrome (Ciaranello and Ciaranello, 1995).
Autistic children display a wide range of intellectual abilities, from profound mental deficiency to superior intelligence. It is generally thought that less than 30 percent of autistic persons have intelligence in the normal range (Dunn, 1994). Some autistic persons show exceptional talents despite functional disability in general. Ten percent of autistic persons have been reported to demonstrate savant abilities in music, drawing, or calculation (Happe, 1994). Since autistic characteristics may occur in children with superior intelligence, it seems likely that a neurologic insult causing autism is not always accompanied by mental retardation (Ciaranello and Ciaranello, 1995).
Evidence for an Organic Cause
There is evidence that many autistic children have severe brain dysfunction. Steffenburg (1991) found that almost 90 percent of a sample of autistic children had evidence of a brain abnormality. Recently it has been reported that a substantial proportion of cases are associated with megalencephaly, an abnormal enlargement of the brain (Bailey et al., 1995; Bailey et al., 1993). Autistic persons may demonstrate a variety of other abnormal behaviors and cognitive deficits. These may include rigid adherence to routines, stereotypical movements, and bizarre use of objects. These findings, however, are not specific to autism, and serve to illustrate the multiple etiologies of the syndrome.
Autistic behavior has been observed in association with a number of well-defined medical conditions. These conditions include the fragile X syndrome, congenital rubella, and tuberous sclerosis. None of these conditions, however, has a consistent association with autism (Rapin, 1991).
There is a 4:1 male preponderance in the incidence of autism, which suggests a genetic influence in a significant number of cases. A genetic liability is also supported by evidence from twin and sibling studies (Bailey et al., 1995; Folstein and Piven, 1991). There seems to be no single etiology, however, and it is not likely that a single pathophysiologic mechanism will account for all cases. It is more reasonable to consider a pathophysiologic model where damage to a certain brain region, called Region X, produces features found in autistic children. This model may be consistent with the observation that there is a direct relationship between the severity of autism and progressively lower IQs (Ciaranello and Ciaranello, 1995). The degree of severity may reflect the extent of insult to Region X. In fact, it is possible that a number of different events, such as genetic abnormality or birth trauma, may also cause damage to Region X, resulting in autism (Happe, 1994).
Searching for the Site of Dysfunction
“Region X,” the site of a critical insult to the brain that uniformly causes autism, has not yet been identified. Various anatomic sites in the brain have been hypothesized as the primary source of pathology. Five approaches have been used to study the neurobiology of autism. These methodologies include genetic, radiological, neurophysiological, neurochemical, and neuropathological studies. Genetic analyses include epidemiological studies, twin studies, family studies, linkage and association studies, and karyotypical analyses (Ciaranello and Ciaranello, 1995). Radiological studies use in vivo brain imaging techniques. These include computerized tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). Most neuroimaging studies have reported structural abnormalities in the brains of autistic persons, but the findings are inconsistent both within and across investigations.
A variety of neurophysiological methods, such as electroencephalography (EEG) and event-related potentials (ERP), have been used to study patterns of neural processing in patients with autism. Neurophysiologic deficits are widespread, but not specific to autism. Neurochemical studies attempt to link abnormal levels of neurotransmitters or their metabolites to autism. No consistent abnormality has emerged from these analyses. A number of reasons have been proposed to explain the inconsistency in the findings of various studies of the neurobiology of autism. Rutter and Bailey (1993) speculate that the inconsistencies in neuroimaging studies may result from heterogeneity in the clinical groups studied, methodologic shortcomings, and the lack of precise quantification of the findings. It is possible that medical conditions associated with autism give rise to their own structural brain abnormalities, for reasons unrelated to autism. Holttum et al. (1992) report that matching between autistic and control groups has been inadequate for variables such as age, IQ, gender, and socioeconomic status, which are likely to have a significant impact on the structure of the central nervous system. In fact, control groups have often been selected from radiologic files of individuals scanned for other neurologic complaints, but judged by a neuroradiologist to have a normal scan. Rutter argues that it is unlikely that a unilateral localized lesion could give rise to the cognitive and behavioral deficits in autism. He hypothesizes that autism derives from a system abnormality. Therefore, any abnormalities revealed by neuroimaging may represent regions downstream from the basic underlying affected part of the brain.
Neuropathology of Autism
In recent years, it has been possible to perform quantitative histologic examination of autistic brains. Autopsy studies provide much greater resolution than neuroimaging. These studies have two main shortcomings. First, there is an inability to control for incidental factors that can affect the structure of the brain, such as seizures, mental retardation, and psychotropic medications. It is possible that the brain abnormalities identified in autopsy studies are associated not with autism itself, but with one of these incidental factors. A second limitation is the number of subjects studied. If autism has multiple etiologies, there is limited significance in a single pathological study. The results obtained from these studies could be little more than artifacts inherent in a small data base. Nevertheless, microscopic abnormalities noted to date have been consistent in type and location. Therefore, it is very likely that the neuropathologic data will provide insight into the organic nature of autism.
In 1985, Bauman and Kemper reported anatomic abnormalities in the brain of a 29-year-old man with well-documented autism. He was treated with antipsychotic and anticonvulsant medications. It was not known if he had been tested for any specific etiology of autism. In this study, serial sections of the brain were compared with those of an identically processed age- and sex-matched control. Since that report in 1985, five additional cases of patients with well-documented autism have been studied using the same methodology (Bauman and Kemper, 1994). The patients range in age from 9 to 29 years. Five of the autistic individuals were mentally deficient. One twelve-year-old boy was of average intelligence. Four of the six patients studied had a seizure disorder and were treated with anticonvulsant medications. Nevertheless, anatomic findings in all six cases are similar.
All of the brains were well-developed and showed no gross abnormalities. Gyral configuration and myelination appeared normal. Multiple areas of the cerebral cortex were examined, and no abnormalities of cellular structure, lamination, or number were found. Microscopic study of the thalamus, hypothalamus, basal ganglia, and basal forebrain found no differences compared to the controls. Subtle alterations in the size of neurons and the complexity of their processes were confined to the limbic system and cerebellum. In the limbic system, the hippocampal complex, subiculum, entorhinal cortex, amygdala, mammillary body, anterior cingulate gyrus, and septum are connected by neuronal circuits. In comparison with the brains of control subjects, the autistic brains showed reduced neuronal cell size and increased cell-packing density in these areas. Bauman and Kemper also observed that CA1 and CA4 pyramidal cells in the hippocampus showed decreased complexity and extent of dendritic arbors. The authors state that these features are characteristic of an immature brain. Therefore, there may be a curtailment of normal development in these structures in autism.
In the cerebellum, there was a marked decrease in the number of Purkinje cells and granule cells throughout the cerebellar hemispheres. There was no evidence of gliosis, suggesting that the lesion was acquired early in development. The anterior lobe and the vermis were largely spared. The most significant cell decrease was found in the posterior inferior neocerebellar cortex and adjacent archicerebellar cortex. Atrophy of the neocerebellar cortex was noted in the biventer, gracile, tonsil, and inferior semilunar lobules. Examination also revealed abnormalities in the emboliform, fastigial, and globose nuclei in the roof of the cerebellum. These abnormalities appeared to differ with the age of the patient. In the three older autistic brains, these nuclei are characterized by small pale neurons, which are decreased in number. However, in the younger brains, the neurons in these nuclei are enlarged and are present in adequate numbers. Coincident with this finding was the observation that neurons in the inferior olivary nuclei of the brainstem failed to show evidence of the expected atrophy that is seen following perinatal or postnatal Purkinje cell loss. The olivary neurons appeared to differ in size and in number with the age of the patient, showing the same features as mentioned above.
The hypothesis that emerged from these findings was that the ontogeny of cerebellar circuitry is disrupted by the early loss of Purkinje cells. The enlarged neurons in the deep cerebellar nuclei and inferior olivary nucleus may represent the maintenance of a nonfunctional fetal circuit pattern. With maturation into adult life, the retained immature circuit is lost, and it is not replaced by adult circuitry. Therefore, the normal circuitry of the cerebellum does not develop, and the deep cerebellar nuclei and olivary nucleus show a reduction in cell size and number.
Interpreting the Neuropathologic Observations
The objective of this paper is to identify a link between neuroanatomic defects and autistic symptomatology. Studies to date indicate abnormalities in both the limbic system and the cerebellum of the autistic person.
Significance of Limbic System Abnormalities
Limbic structures of the forebrain include the limbic association cortex, the hippocampal formation, and the amygdaloid complex. The limbic association cortex includes the cingulate gyrus, parahippocampal gyrus, and other regions. It receives information from the higher order sensory areas, and conveys this information to the hippocampal formation and the amygdala.
The hippocampal formation consists of the subiculum, the hippocampus proper, and the dentate gyrus. Information flow through the hippocampal formation is unidirectional. First, the limbic association cortex, especially the cingulate gyrus, projects to the entorhinal cortex, which then relays information to the dentate gyrus. Then the dentate gyrus projects to the hippocampus, which connects to the subiculum. Finally, the subiculum sends input back to the entorhinal cortex. Nearly all efferent projections from the hippocampal formation emerge from the subiculum. Fibers from the subiculum terminate in the mammillary body, which projects to the anterior thalamic nuclei. In turn, these nuclei project back to the cingulate gyrus. The subiculum also projects to the septal nuclei. This projection is part of a system of connections thought to be important in the expression of emotions, such as the body’s response to pleasure and pain (Roberts, 1992).
The amygdaloid complex can be grouped into three regions: the central nucleus, the medial nucleus, and the basolateral nucleus. Like the hippocampal formation, the central and medial nuclei of the amygdala are related to the limbic neocortex. In addition, the central and medial amygdaloid nuclei project to the reticulate core of the basal forebrain and brainstem. The basolateral nucleus is reciprocally connected with the cerebral cortex, receiving input from both higher order sensory areas and from association cortical areas. Therefore, the basolateral division may serve to attach emotional significance to a stimulus. Because of these closely interrelated circuits, abnormalities in the forebrain may disrupt both the circuitry and function of the amygdala and the hippocampal formation, the limbic and sensory association cortical areas, and their relationship to the reticulate core of the brain (Bauman and Kemper, 1994).
Experimental Lesion Studies
Patterns of abnormal behavior in autism have been compared with the severe disturbances observed in non-human primates that have undergone neonatal removals of the limbic system, specifically the amygdala and hippocampus. These animals failed to develop normal relationships, displayed blank facial expression and poor body language, showed memory deficits, and exhibited locomotor stereotypies such as twirling in circles and doing somersaults (Bachevalier, 1991). Like autistic children, the experimental monkeys exhibited considerable variability of specific symptoms. Bachevalier also bilaterally lesioned the amygdala in infant monkeys, and found that with maturation, these animals exhibited behavior similar to high-functioning autistic children (Bachevalier and Merjanian, 1994). These observations are consistent with the idea that variability in the extent of damage to Region X, in this case portions of the limbic system, may reflect the variation in autistic symptoms. Bachevalier acknowledges that the brain lesions inflicted on the infant monkeys are different from the neuropathology found in the medial temporal lobe structures. However, she believes that the behavioral similarities between autistic children and these monkeys are close enough to encourage future study. These monkeys may serve as an animal model of infantile autism.
Animal research has revealed that insults to the developing brain result in widespread reorganization of cortical architecture and connectivity. Restructuring connections may have an adverse effect on the organization of the brain, by stimulating the development of alternative neural pathways that do not work as well. Thus, in the developmental context, there is no reason to assume that specific cognitive deficits may be correlated with specific neuroanatomic structures in a static fashion (Bernstein and Waber, 1990). Damage to the developing brain should result in vastly different cognitive and behavioral sequelae than damage to a mature brain. Adult damage studies are useful since they reveal what kinds of behaviors are mediated by particular structures. Damage to other regions found to be abnormal in the autistic brain has elucidated the roles of specific limbic structures. Bilateral lesions to the amygdala in adult monkeys have caused behavioral abnormalities (reported in Bauman and Kemper, 1994). These monkeys withdrew from social interactions, showed no fear of aversive stimuli, examined objects compulsively, and demonstrated a diminished ability to attach meaning to specific situations after prior experience. Like autistic children, these monkeys were unable to adapt to new environmental settings.
Murray and Mishkin (1985) report that monkeys with bilateral amygdalectomies show severe impairment in crossmodal associations. This finding suggests that abnormal circuitry in the amygdala could cause an impairment in generalizing information from one experience to another. Many autistic individuals demonstrate this problem.
Damage to the temporal lobes in adult animals has also produced behaviors that resemble those seen in autism (Damasio and Maurer, 1978). Kluver and Bucy (1939) performed bilateral surgical ablation of the medial temporal lobes in macaques, producing purposeless hyperactivity, impaired social interaction, hyperexploratory behavior, and the inability to recognize or remember the meaning of familiar objects. Bilateral hippocampal lesions in rats resulted in hyperactivity, stereotypic motor behavior, and a disordered response to novel stimuli (Kimble, 1963).
Structures in the medial temporal lobe are critical to normal memory functions. These structures include the amygdaloid complex, the hippocampal formation, and the surrounding cortical areas. The role of the medial temporal lobe in memory has been illustrated by examination of amnesic patients and nonhuman primates rendered amnesic. For example, H.M. had the medial portion of the temporal lobes removed bilaterally for the relief of epilepsy. Despite the preservation of intelligence, perception, and language, H.M. could no longer consolidate short-term memory into long-term memory. However, amnesic patients like H.M. can learn certain motor or procedural skills. H.M. performed a mirror tracing task with increasing accuracy, though he denied trying the task before. This observation and others led to the proposal of two different neural systems for memory: habit or procedural memory, and representational or declarative memory. Habit memory is preserved in amnesic patients, and it is thought to be mediated by stimulus-response connections rather than by conscious memory. Habit memory includes the acquisition of skills and habits, certain kinds of conditioning, and the phenomenon of priming. Representational memory involves recognition, retention, and recall of information. Representational memory is accessible to conscious awareness, and it enables information to be integrated for higher-order cognition. These two systems are thought to be anatomically separate. Habit memory may be subserved by a “cortico-striatal system,” while representational memory may be subserved by a “corticolimbic system.” In autism and amnesia, the neurobiologic substrate for representational memory appears to be abnormal. The substrate for habit memory, however, seems to be spared.
Why do autistic children suffer from impairments not seen in amnesic adults? This difference is likely to be due to the fact that damage has occurred during different periods of development (Bachevalier and Merjanian, 1994). The medial temporal lobe memory system performs a critical role beginning at the time of learning, since it encodes representations into long-term memory (Squire and Zola-Morgan, 1991). The hippocampus and related structures may “bind together” ordinarily unrelated events or stimulus features that have been processed by distinct cortical sites. This system is crucial for the acquisition of new information. Therefore, early memory loss would cause severe damage to the development of cognitive abilities. Unlike amnesic adults, “amnesic infants” have never had the ability to form long-term memories. Unable to acquire and integrate information from novel stimuli, the autistic child would suffer severe deficits in cognition, social interaction, and language. The preservation of the habit memory system accounts for many characteristics of the autistic child. It is consistent with Kanner’s observations of an “anxiously obsessive desire for the preservation of sameness,” “excellent rote memory,” and “limitation in the variety of spontaneous activity” in autism. The preservation of the habit memory system might even account for the acquisition of skills by autistic savants.
Significance of Cerebellar Abnormalities
Courchesne and colleagues (1988) reported selective hypoplasia of cerebellar vermal lobules VI and VII on midsagittal MR images. Courchesne hypothesized that damage to the cerebellum may cause the cognitive and behavioral manifestations of autism. This study has several methodologic limitations, including the lack of matching between autistic and control groups on age, gender, and IQ; the selection of controls from radiologic files of individuals scanned for other neurologic complaints; and the lack of screening for fragile X syndrome, which is associated with cerebellar hypoplasia. It was initially assumed that these findings were consistent with the neuropathologic changes. On closer inspection, however, it is clear that the histologic abnormalities are not equivalent in nature or location to the MRI abnormalities. The MRI studies emphasize a reduction in the area of the neocerebellar vermis, while the autopsy studies indicate that cell loss is greatest in the lateral inferior regions of the cerebellar hemispheres (Holttum et al., 1992). Varying degrees of cell loss were reported throughout the cerebellum, primarily in the posterior inferior neocerebellar cortex and adjacent archicerebellar cortex. There was cell loss in the vermis, but there was no selective hypoplasia of lobules VI and VII.
Several other imaging studies failed to find selective hypoplasia of vermal lobules VI and VII in autism (Ritvo and Garber, 1988; Holttum et al., 1992; Kleiman et al., 1992; Piven et al., 1992). In 1994, Courchesne et al. reported two different subtypes of autistic patients: a subgroup with hypoplasia and a subgroup with hyperplasia. There was a large degree of overlap between these two groups. Courchesne et al. suggest that the negative results of other imaging studies may have resulted from the inclusion of a small sample of autistic subjects with hypoplasia and hyperplasia. Averaging the values would result in failure to detect abnormalities. Further research is necessary to address this issue.
The Cerebellum and Higher Cortical Functions
The role of the cerebellum in motor control is well established, yet the possibility that the cerebellum participates in higher cortical function has been proposed by Leiner et al. (1986). These authors focus on the role of the dentate nucleus, a region of the cerebellum that is prominent in human brain development in comparison with the same nucleus in lower primates and subprimate species. Leiner maintains that the connections between the dentate nucleus and the frontal cortex enable the human cerebellum to modulate such processes as language, memory, and emotional states, as well as motor-sensory processes. Evidence from animal investigations indicates that there are direct projections of the cerebellar fastigial nuclei to the septal region, hippocampus, and amygdala (Heath and Harper, 1974; Heath et al., 1978). Heath and fellow investigators believe that the cerebellum serves an important role in the modulation of emotion and higher functions. Heath reports that stimulation of the cerebellum has been used to treat patients with seizures, depression, anger, and aggression. Like persons with infantile autism, patients with acquired lesions to neocerebellar structures are impaired in their ability to shift direction of attentional focus (Courchesne et al., 1994).
An examination of the cerebellar connections to cortical association areas, in the parietal, temporal, and frontal lobes, provides an explanation for the role of the cerebellum in higher brain function. Cortical association areas convey information to the cerebellum by projecting to the pons, which then relays information to the cerebellar cortex. From the cerebellar cortex, information is projected to the dentate nucleus, which in turn sends its axons to the thalamus, via the red nucleus. The feedback loop is completed by thalamic projections to the cortical association areas. Anatomical studies also reveal the existence of pathways linking the cerebellum with regions of the limbic system. The cingulate gyrus, hypothalamus, mammillary bodies, and the posterior parahippocampal region have been shown to project to the pons (Schmahmann, 1994). Schmahmann proposes that these limbic connections might underlie a cerebellar involvement in certain aspects of memory and learning. The precise mechanism of cerebellar modulation of higher function is unknown. It is possible that information from the cortical association areas and the limbic system undergoes modulation in the cerebellum (Schmahmann, 1994). Schmahmann suggests, “In the same way as the cerebellum regulates the rate, force, rhythm, and accuracy of movements, so it may regulate the speed, capacity, consistency, and appropriateness of mental or cognitive processes.” He speculates that the cerebellar abnormalities in autism may represent damage to part of the neural circuitry that underlies behaviors such as language, societal interaction, and emotion. Thus, cerebellar lesions may cause disturbances in emotion, behavior, and learning. Future studies should address the effect of comparable lesions during development.
Neuropathologic studies of autism identify abnormalities in regions that may be critical for social interaction, language, and memory. Bauman and Kemper maintain that these defects are acquired early in development and may be related to the difficulties in social interaction, language, and learning. There is scant evidence, however, to support the hypothesis of a well localized “Region X” that results in autism when insulted. Postmortem studies have been conducted on subjects afflicted with other disorders, in addition to autistic behavior, such as grand mal seizures. Virtually all of these subjects had been treated long term with a variety of medications. Obviously, it is difficult to identify the specific loci of anatomic pathology in autism in subjects afflicted with a number of diseases. In fact, it is possible that autopsy studies have revealed brain abnormalities characteristic of mental deficiency in general, rather than the specific diagnostic entity called autism. In future studies, case histories must be matched to modern neuropathological and neuroimaging techniques.
Due to the limited availability of autopsy material, researchers will continue to rely on neuroimaging studies. The development of increasingly advanced brain imaging techniques will facilitate the search for “Region X,” which will more likely be found to be “Regions X.” For example, functional neuroimaging of memory function in autistic persons may reveal a relationship between the clinical symptoms and both limbic and cerebellar pathology. It must be recognized, however, that autism is a clinical presentation, a syndrome of findings, that probably occurs as a consequence of a number of different diseases.
American Psychiatric Association. (1987). Diagnostic and Statistical Manual of Mental Disorders (3rd Edition Revised). (Washington, D.C.: American Psychiatric Association), pp. 33-39.
Bachevalier, J. (1991). An animal model for childhood autism. In Advances in Neuropsychiatry and Psychopharmacology, Volume 1: Schizophrenia Research. C.A. Tamminga and S.C. Schulz, eds. (New York: Raven Press, Ltd).
Bachevalier, J. and Merjanian, P.M. (1994). The contribution of medial temporal lobe structures in infantile autism: A neurobehavioral study in primates. In The Neurobiology of Autism. M.L. Bauman and T.L. Kemper, eds. (Baltimore: The Johns Hopkins University Press), pp. 146-169.
Bailey, A., Luthert, P., Bolton, P., Le Couteur, A., Rutter, M., and Harding, B. (1993). Autism is associated with megalencephaly (letter). Lancet 341: 1225-6.
Bailey, A., Le Couteur, A., Gottesman, O., Bolton, P., Simonoff, E., Yuzda, E., and Rutter, M. (1995). Autism as a strongly genetic disorder: evidence from a British twin study. Psychological Medicine 25: 63-77.
Bauman, M.L. and Kemper, T.L. (1985). Histoanatomic observations of the brain in early infantile autism. Neurology 35: 866-74.
Bauman, M.L. (1991). Microscopic neuroanatomic abnormalities in autism. Pediatrics 87: 791-6 (supplement).
Bauman, M.L. and Kemper, T.L. (1994). Neuroanatomic observations of the brain in autism. In The Neurobiology of Autism. M.L. Bauman and T.L. Kemper, eds. (Baltimore: The Johns Hopkins University Press), pp. 119-145.
Bernstein, J.H. and Waber, D.P. (1990). Developmental Neuropsychological Assessment: The Systemic Approach. In Neuromethods: Volume 17, Neuropsychology. A.A. Boulton, G.B. Baker, and M. Hiscock, eds. (Clifton, NJ: Humana Press), pp. 311-371.
Ciaranello, A.L. and Ciaranello, R.D. (1995). The neurobiology of infantile autism. Annu. Rev. Neuroscience 18: 101-28.
Ciesielski, K.T. and Knight, J.E. (1994). Cerebellar abnormality in autism: a nonspecific effect of early brain damage? Acta Neurobiologiae Exp. 54: 151-154.
Courchesne, E., Yeung-Courchesne, R., Press, G.A., Hesselink, J.R., and Jernigan, T.L. (1988). Hypoplasia of cerebellar vermal lobules VI and VII in autism. New England Journal of Medicine 318: 1349-54.
Courchesne, E. (1991). Neuroanatomic imaging in autism. Pediatrics. 87: 781-90 (supplement).
Courchesne, E., Saitoh, O., Yeung-Courchesne, R., Press, G.A., Lincoln, A.J., Haas, R.H., and Schreibman, L. (1994). Two subtypes of cerebellar pathology detected with MR in patients with autism: hyperplasia and hypoplasia of vermal lobules VI and VII. American Journal of Roentgenology 162: 123-130.
Courchesne, E., Townsend, J.P., Akshoomoff, N.A., Yeung-Courchesne, R., Press, G.A., Murakami, J.W., Lincoln, A.J., James, H.E., Saitoh, O., Egaas, B., Haas, R.H., and Schreibman, L. (1994). A new finding: Impairment in shifting attention in autistic and cerebellar patients. In Atypical Deficits in Developmental Disorders: Implications for Brain Function. S.H. Broman and J. Grafman, eds. (Hillsdale, NJ: Lawrence Erlbaum Associates).
Damasio, A.R. and Maurer, R.G. (1978). A neurological model for childhood autism. Arch. Neurol. 35: 777-83.
Dunn, M. (1994). Neurophysiologic observations in autism and implications for neurologic dysfunction. In The Neurobiology of Autism. M.L. Bauman and T.L. Kemper, eds. (Baltimore: The Johns Hopkins University Press), pp. 45-65.
Folstein, S.E. and Piven, J. (1991). The etiology of autism: Genetic influences. Pediatrics 87: 767-73 (supplement).
Gillberg, C. and Coleman, M. (1992). The Biology of the Autistic Syndromes, 2nd edition. (London: Mac Keith Press).
Happe, F. (1995). Autism: an introduction to psychological theory. (Cambridge, MA: Harvard University Press).
Heath, R.G. and Harper, J.W. (1974). Ascending projections of the cerebellar fastigial nucleus to the hippocampus, amygdala, and other temporal lobe sites: evoked potential and histological studies in monkeys and cats. Exp. Neurol. 45: 268-87.
Heath, R.G., Dempsey, C.W., Fontana, C.J., and Myers, W.A. (1978). Cerebellar stimulation: effects on septal region, hippocampus, and amygdala of cats and rats. Biol. Psychiatry 113: 501-29.
Holttum, J.R., Minshew, N.J., Sanders, R.S., and Phillips, N.E. (1992). Magnetic resonance imaging of the posterior fossa in autism. Biol. Psychiatry 32: 1091-1101.
Kanner, L. (1943). Autistic disturbances of affective contact. Nervous Child 2: 217-50.
Kimble, D.P. (1963). The effects of bilateral hippocampal lesions in rats. J. Comp. Physiol. Psychol. 56: 273-83.
Kleiman, M.D., Neff, S., and Rosman, N.P. (1992). The brain in infantile autism: Are posterior fossa structures abnormal? Neurology 42: 753-60.
Kluver, H. and Bucy, P. (1939). Preliminary analysis of functions of the temporal lobes in monkeys. Arch. Neurol. Psychiat. 42: 979-1000.
Leiner, H.C., Leiner, A.L., and Dow, R.S. (1986). Does the cerebellum contribute to mental skills? Behavioral Neuroscience 100: 443-54.
Minshew, N. (1991). Indices of neural function in autism: Clinical and biologic implications. Pediatrics 87: 774-780 (supplement).
Minshew, N. and Dombrowski, S.M. (1994.) In vivo neuroanatomy of autism: neuroimaging studies. In The Neurobiology of Autism. M.L. Bauman and T.L. Kemper, eds. (Baltimore: The Johns Hopkins University Press), pp. 66-85.
Murray, E.A. and Mishkin, M. (1985). Amygdalectomy impairs crossmodal association in monkeys. Science 228: 604-6.
Murray, E.A. and Mishkin, M. (1983). Severe tactual memory deficits after combined removal of the amygdala and hippocampus. Brain Research 270: 340-3.
Nelson, K.B. (1991). Prenatal and perinatal factors in the etiology of autism. Pediatrics 87: 761-66 (supplement).
Piven, J., Nehme, E., Simon, J., Barta, P., Pearlson, G., and Folstein, S. (1992). Magnetic resonance imaging in autism: Measurement of the cerebellum, pons, and fourth ventricle. Biol. Psychiatry 31: 491-504.
Rapin, I. (1991). Autistic children: Diagnosis and clinical features. Pediatrics 87: 751-60 (supplement).
Ritvo, E. and Garber, J.H. (1988). Cerebellar hypoplasia and autism. New England Journal of Medicine 319: 1152 (abstract).
Roberts, P.A. (1992). Neuroanatomy. 3rd edition. (New York: Springer-Verlag).
Rubenstein, J.L., Lotspeich, L., and Ciaranello, R.D. (1990). The neurobiology of developmental disorders. In Advances in Clinical Child Psychology, vol. 13. B.B. Lahey and A.E. Kazdin, eds. (New York: Plenum Press), pp. 1-52.
Rutter, M. and Bailey, A. (1993). Thinking and relationships: mind and brain (some reflections on theory of mind and autism). In Understanding other minds: Perspectives from autism. S. Baron-Cohen, H. Tager-Flusberg, and D.J. Cohen, eds. (Oxford: Oxford University Press), pp. 481-504.
Schmahmann, J.D. (1994). The cerebellum in autism: Clinical and anatomic perspectives. In The Neurobiology of Autism. M.L. Bauman and T.L. Kemper, eds. (Baltimore: The Johns Hopkins University Press), pp. 195-226.
Squire, L.R. and Zola-Morgan, S. (1991). The medial temporal lobe memory system. Science 253: 1380-1385.
Steffenburg, S. (1991). Neuropsychiatric assessment of children with autism: A population-based study. Developmental Medicine and Child Neurology 33: 495-511.
Treffert, D.A. (1989). Extraordinary People: Understanding “Idiot Savants”. (New York: Harper and Row, Publishers).
Amy Herman ’97 (email@example.com), a Biology concentrator, lives in Cabot House. She is interested in cognitive and physiological aspects of brain development, particularly in developmental disorders such as dyslexia, autism, and Williams Syndrome. She is investigating sex differences in the brain’s susceptibility to developmental injury for her thesis towards an A.B./A.M. degree in Biology.
Early Onset and Late Onset as Subdivisions of Alzheimer's Disease
Alzheimer’s Disease is the leading cause of senile dementia. At one time, all forms of Alzheimer’s Disease were lumped into one category. More recent research reveals the possibility of subtypes of the disease, manifesting themselves at different points in an individual’s lifetime. Early onset and late onset Alzheimer’s differ in terms of their symptomatic, biological, genetic, neurophysiological and neurological characteristics.
Although research on symptomatic differences in Alzheimer's Disease has been conducted since the 1970s, the bulk of the research has occurred since the 1980s in response to a 1983 article by Seltzer and Sherwin. The article posed the question of whether early and late onset Alzheimer’s were “one entity or two.” The authors discussed characteristics of the disease which seemed to vary depending on the age of onset. Notable differences between the groups led Seltzer and Sherwin to conclude that early onset (classified as patients whose symptoms arose before the age of 65) was a different type of Alzheimer’s than late onset.
A crucial difference between early and late onset patients which Seltzer and Sherwin studied was language dysfunction. Early onset was associated with more language deficits. Early onset patients had more cases of aphasia (the loss or impairment of ability to use or understand speech) than the late onset patients. Filley et al. (1986) also found increased language impairment in early onset patients (those under 65) and found decreased visuoconstructional functioning, as determined by increased errors in judging spatial relations in drawings, in late onset patients. This double dissociation (early onset with worse language impairment; late onset with worse spatial relations) suggests two separate subtypes of the disease. The groups of patients perform at opposite levels within each area; therefore, the relative impairment of their functions must be different.
Previously, researchers had hypothesized that early onset might simply be an accelerated form of Alzheimer’s Disease. However, this dissociation shows that the damage in early onset is not always more severe; in the case of visuoconstructional functioning, the early onset patients actually have retained more ability than the late onset patients. Therefore, early onset may not be merely a more malignant type of Alzheimer’s. Rather, the two subtypes differ in the areas of impairment. Since language skills are associated with the functioning of the left hemisphere and visuoconstructional skills with the right, this finding also supports Seltzer and Sherwin’s assertion of selective hemispheric vulnerability, with early onset associated with a left hemisphere vulnerability and late onset more influenced by right hemisphere damage.
Jacobs et al. (1994) also found notable differences in the areas of impairment between the early and late onset patients. Once again, 65 years was the cut-off point for early onset. The Modified Mini-Mental State Examination (MMMS) was used to test the two groups of patients. This test is divided into two different sections: attention-related tasks and recall/naming tasks. The attentional tests consist of repeating digits forwards and backwards, mental calculations, and sentence repetition. The recall/naming tasks involve item recall, in which a number of items are shown to the patient, who must then recall what he/she has just seen. Other recall/naming tasks require the patient to recall the current and past four presidents. Confrontation naming of items is another task.
Early onset patients performed worse than late onset patients on the attention-related parts of the exam, while late onset subjects scored worse than early onset patients on recall/naming sections. Jacobs et al. therefore established another double dissociation between the early and late onset patients. Overall, the scores on the MMMS were similar; thus, the impairment of the early and late onset patients was equal. The early onset patients obtained an average score of 38.00, while the late onset patients had a comparable score of 38.30. The differences between the groups were in the areas of impaired and preserved abilities. On the attention sections, the late onset patients scored 21, while the early onset only scored 18. On the recall/naming sections, the late onset patients’ scores dropped to 15, while the early onset patients had an average score of 17.8 (Jacobs et al., 1994).
Since the early onset patients did not show a greater general dementia, these findings imply that early onset differs in the areas of the brain which are targeted, rather than only in the rate of progression. The early onset patients appear to be hit harder in attention-related areas of memory, while the late onset patients appear to have more damage in areas related to recall and recognition.
Evidence pointing towards subtypes is substantial. Early and late onset cases seem to differ in rate of progression and extent of language dysfunction or visuoconstructional impairment. Additional evidence for the subtypes comes from neurological and neurophysiological studies which confirm that early and late onset Alzheimer’s have different neurochemical characteristics.
Neurological, Neurophysiological and Biological Characteristics
All Alzheimer’s patients share some common neurological damage. The distinctive features within the brain (as determined by autopsy) are neocortical senile plaques and neurofibrillary tangles (Rosser et al., 1984). Alzheimer’s patients tend to have a decreased volume of gray matter when compared to controls of equal age (Shear et al., 1995). The Alzheimer’s brain also has numerous neurochemical deficiencies. A reduction in somatostatin may be responsible for the uncontrolled growth-promoting processes which commonly occur in the cortex of those afflicted with Alzheimer’s (Selkoe, 1992). Somatostatin acts to inhibit the production of the growth hormone. Therefore, when somatostatin levels are reduced, the hormone is not effectively impeded, leading to unregulated growth (Palmer et al., 1988). Serotonin levels are also reduced, which has been associated with the greater aggression seen in some of the patients (Palmer et al., 1988). One other neurotransmitter which is markedly reduced is acetylcholine. Acetylcholine reduction has been noted even within the first year of symptom onset (Katzman, 1986). All of these neurological abnormalities are present in both early and late onset patients.
However, patients with earlier onset tend to have more widespread cholinergic deficits (the cholinergic system consists of the nerves which receive messages via the neurotransmitter acetylcholine), whereas in late onset patients, the cholinergic deficits are confined to the temporal lobe and hippocampus (Chui et al., 1985). Early onset patients also tend to have more abnormalities of noradrenaline and other neurotransmitters. Such observations led Rosser and colleagues (1984) to conclude that Alzheimer’s disease in younger patients may be a distinct form of dementia which differs from the later onset dementia. This assertion has been backed by further studies. Shear et al. (1995) discovered more volumetric abnormalities with early onset patients and found that these people tended to have more rapid neuronal deterioration than the older patients. In addition to these findings, studies by Prohovnik and colleagues (1989) showed a greater loss of gray matter in early onset patients.
Biological factors may also contribute to age of onset. All Alzheimer’s patients have biological changes occurring in cells outside of the central nervous system. Many of these changes alter the cell membrane structure or function (Zubenko et al., 1988). However, the types of biological changes which occur depend on the age of onset. Zubenko et al. (1988) found that patients with increased platelet membrane fluidity tended to have an earlier age of onset. These patients showed a more rapid decline. One other interesting discovery was that these patients were more likely to have a family history of dementia.
Early onset is characteristic of the familial form of Alzheimer’s (Heston et al., 1981). First degree relatives (children) of Alzheimer’s patients have an increased risk of developing the disease themselves (Nalbantoglu et al., 1990). Many Alzheimer’s cases have been linked to a mutation of the amyloid precursor protein gene on chromosome 21. Early onset has been shown to be connected with another mutation on chromosome 21. No such marker has been associated with late onset Alzheimer’s (Nalbantoglu et al., 1990).
Breitner et al. (1986) found that Familial Alzheimer’s is a rare form of the disease, highly correlated with early onset. In addition, they showed that most of the non-pedigree Alzheimer’s cases had a much later onset. In further studies, Breitner proposed that genetic factors could also be responsible for late onset Alzheimer’s (Breitner et al., 1988). In his 1992 article, he suggested that late onset Alzheimer’s could be caused by autosomal dominant inheritance just as with the early onset form, but with reduced penetrance, so that not all individuals who inherit the disease develop symptoms. While early onset Alzheimer’s may not be the only form of inherited Alzheimer’s, early onset Familial Alzheimer’s can still be separated from inherited late onset Alzheimer’s by the specific locations of genetic mutations and the extent of their penetrance.
Further exploration of whether early and late onset are different subdivisions could suggest important differences in the treatment of patients based on their type of Alzheimer’s. Because of the differences in neurological damage, it seems likely that the disease may stem from different causes or affect different areas of the brain. Especially in the cases of Familial Alzheimer’s, different genetic defects may be responsible for early and late onset. More concrete divisions of these two areas may facilitate future research for treatments such as gene therapy.
Pharmacological approaches to Alzheimer’s Disease focus on altering the specific biochemical pathways of the disease and trying to arrest symptoms at different stages of the disease by lessening the cholinergic deficits (Selkoe, 1992). Since early and late onset have different areas of selectivity and different amounts of cholinergic deficits, it seems likely that different treatments should be utilized in the separate cases.
Transmitter replacement therapy may also help lessen the widespread neuronal pathology in the intermediate and late stages of the disease by promoting neurotransmitter production. This therapy could be used to affect neurons that release a number of neurotransmitters, including acetylcholine, norepinephrine, serotonin, somatostatin, corticotropin releasing factor, and glutamate (Selkoe, 1992). Davis and Mohs (1982) found that patients showed memory enhancement when given acetylcholine precursors to help increase the production of acetylcholine, thus improving the cholinergic system. Palmer et al. (1988) determined that patients with behavioral problems have damage to the serotonergic system. The loss of serotonin may cause the aggressive behavior displayed by these patients. Additional research could determine other neurotransmitters which are responsible for the different symptoms or stages of Alzheimer’s and whether different ones are involved in early and late onset. This knowledge could help determine which treatments, or therapies, to try first to alleviate some of the symptoms.
Because of the complex pattern of neurotransmitter and neuropeptide abnormalities in early onset patients, it would be prudent to look for sources of treatment other than simple cholinergic replacement (Filley et al., 1986). While such treatment might work with the later onset patients, success would be more difficult with the early onset patients, since their neurological damage is widespread, affecting more areas of the brain. Their symptoms could also be difficult to alleviate with transmitter therapy, since so many neurotransmitter reductions contribute to their cognitive deficits.
A more concrete subdivision of early and late onset could recommend certain treatments above others. Some treatments may be more damaging to early onset patients. Antidepressants are helpful in cases with clinical depression. While early onset patients are most likely to suffer from depression, they also tend to have the most widespread cholinergic damage (Selkoe, 1992). Antidepressants are known to accelerate cholinergic damage and to exacerbate the intellectual symptoms of the disease. Therefore, while the natural assumption may be to administer antidepressants in the early onset cases, it may be prudent to refrain from giving these patients antidepressants until their depression demands that such a step be taken. Rovner (1992) found that many behavioral difficulties, including depression, may be managed more effectively by interventions with the caretakers to help the patient, rather than by management with medication.
Rovner (1992) also found that patients with brain damage tend to be highly sensitive to some of the side effects of medicines. Since early onset patients have more brain damage than late onset patients, they may be affected by more of the adverse side effects of medications. By understanding the different reactions patients could have to medications, more educated decisions can be made about the methods of treatment which should be used. Until the symptoms of early onset patients become severe, it may be more beneficial to try family therapy techniques or other non-medical strategies for treatment.
On the other hand, some medical treatments may benefit early onset patients more than late onset. For example, memory problems in early onset patients are heightened by alertness and attention deficiencies. Specific neurotransmitters are involved with attention, mainly acetylcholine and histamine (Oken et al., 1994). Therefore, induced production of these neurotransmitters by medication or transmitter replacement therapy could alleviate some of the attentional problems of the early onset patients and help their memory retrieval.
Another interesting method which may aid in memory enhancement is a spaced-retrieval technique (McKitrick, 1992). While this method does not repair any of the physical damage caused by Alzheimer’s, researchers are optimistic that the technique may allow better memory retention for at least some Alzheimer’s patients. Once again, the ability to further subtype the groups of patients into different classes or stages of the disease could allow better ideas about which groups of people would benefit from such training and at what stages of the disease the technique should be attempted.
In addition to determining which medications or therapies to try, subtyping Alzheimer’s may also help to establish different progression rates or different courses of decline, depending on the particular division of the disease. Looking at the overall course of the disease may eventually help preventative measures to be taken. If the progression of the disease can be mapped, either by symptoms present or by the biological, neurological, or neurophysiological damage present, it may be possible to ameliorate some of the symptoms, or to lessen the physical damage, by taking preventative measures or attempting transmitter therapy.
Weiner (1991) suggested that pharmacological treatments are not successful because the patients have already had Alzheimer’s Disease for a long period of time. With a clearer map of when symptoms might arise, warning signs will be recognized, allowing treatment before the symptoms have become debilitating.
It has been discovered that the rate of decline, at least in cases of inherited Alzheimer’s, is severely lessened if the age of onset is increased by even just five years. Therefore, the morbidity from Alzheimer’s could be reduced if new strategies succeed in determining those people who are predisposed to the disease and if their development of the disease could be delayed by even just a few years (Breitner, 1992).
In cases of Familial Alzheimer’s, it would be helpful to determine which family members are most at risk. If the people at risk could be pinpointed at an earlier age, it might be possible to treat some of the symptoms before they have completely manifested themselves. It may be possible to isolate certain genes, or to determine certain precursor symptoms, which could determine people who are most likely to develop Alzheimer’s. If this information were available, it might be possible to conduct gene therapy or to determine ways of preventing further neurological damage.
A better map of progression could help prepare family members to watch for future symptoms and certain warning signs. It could also aid in determining when a nursing home environment should be looked into as an option rather than continuing home care. Heyman et al. (1987) found that results of tests for language disability, memory loss, and other cognitive functions served as good predictors for when the patients would be admitted into nursing homes. Therefore, even at this early stage, when the data is still fairly inconsistent, some form of progression map has been made with enough accuracy to predict the future course of the disease.
A more definitive map of the progression for different subtypes could also lead to a more accurate assessment of the success of certain treatments. For example, some patients who may seem to be responding positively to a certain treatment may actually have a course of the disease which progresses more slowly. In such cases, their seeming response to the treatment may actually be from the effect of their more benign form of the disease (Mayeux et al., 1985).
Alzheimer’s Disease confronts us with many unanswered questions, including the question of the validity of subdivisions within the disease. Future investigations should continue to focus on symptomatic differences as well as the specific neurological or biological deficits present in the two groups of patients. In the near future researchers will likely isolate specific physiological differences for early and late onset Alzheimer’s. These differences could be important clues for determining which medical or therapeutic treatments to administer. Further research could also help in finding preventative measures to stop neurological damage from occurring or to lessen some of the symptoms. Symptoms may even be caused by different neurological deficits depending on the subtype of the disease. Increased understanding about these types of differences would allow the administration of different preventative medicines.
Additional tests may allow better estimates of rate of decline. If the two subtypes do follow different patterns of decline or have symptoms appearing at different points, understanding the different progressions could allow better predictions about rate of deterioration. Clearer maps of symptomatic progression and rate of decline for each subdivision would increase the opportunity for preventative medicine and help recommend when medical treatments should be administered to be most beneficial.
References
Breitner, J.C.S. (1992). Clinical genetics and genetic counseling in Alzheimer’s Disease. Journal of Geriatric Psychiatry 25(2): 229-245.
Breitner, J.C.S. and Magruder-Habib, K.M. (1989). Criteria for onset critically influence the estimation of familial risk in Alzheimer’s Disease. Genetic Epidemiology 6: 663-669.
Breitner, J.C.S., Murphy, E.A., and Folstein, M.F. (1986). Familial aggregation in Alzheimer dementia - II. Clinical genetic implications of age-dependent onset. Journal of Psychiatry 20: 45-55.
Breitner, J.C.S., Silverman, J.M., Mohs, R.C., and Davis, K.L. (1988). Familial aggregation in Alzheimer’s Disease: Comparison of risk among relatives of early- and late-onset cases, and among male and female relatives in successive generations. Neurology 38: 207-212.
Chui, H.C., Teng, E.L., Henderson, V.W., and Moy, A.Y. (1985). Clinical subtypes of dementia of the Alzheimer type. Neurology 35: 1544-1550.
Filley, C.M., Kelly, J., and Heaton, R.K. (1986). Neuropsychologic features of early- and late-onset Alzheimer’s Disease. Archives of Neurology 43: 574-576.
Heston, L.L., Mastri, A.R., Anderson, V.E., and White, J. (1981). Dementia of the Alzheimer type: Clinical genetics, natural history and associated conditions. Archives of General Psychiatry 38: 1085-1090.
Jacobs, D., Sano, M., Marder, K., Bell, K., Bylsma, F., Lafleche, G., Albert, M., Brandt, J., and Stern, Y. (1994). Age at onset of Alzheimer’s Disease: Relation to pattern of cognitive dysfunction and rate of decline. Neurology 44: 1215-1220.
Elizabeth Kensinger '98 (firstname.lastname@example.org) is a resident of Lowell House. She is concentrating in Psychology and Biology through the MBB program.
Alive! Fundamental Strategies for Solving the Puzzle of Human Consciousness
Consciousness is the essence of being human. It has allowed us to create
brilliant works of artistry and make extraordinary progress in science
and technology, yet remains as mysterious as ever. Its subjective qualities
continually fascinate us, while its elusiveness has presented one of the
greatest challenges for centuries of scientists and philosophers alike.
We want to know how it is we feel and interpret, and how we can “get into
someone else’s head”; we often wonder whether animals have consciousness, or
what it would be like if one were a bat, frog, or hawk. The primary question
resounds: from what does human consciousness arise? Its association through
modern science with neural activity has already answered the “where.”2
In doing so, however, it has prompted not the first steps toward greater
understanding but rather a long leap into the realm of confusion and conjecture.
Descartes wrestled with the mind-body problem, trying to explain how something non-physical could exist inside or be produced by a completely physical, biological entity, only to generate mental frustration: “So serious are the doubts into which I have been thrown... that I can neither put them out of my mind nor see any way of resolving them. It feels as if I have fallen unexpectedly into a deep whirlpool which tumbles me around so that I can neither stand on the bottom nor swim up to the top” (Descartes, 1986). He eventually presented the hypothesis of dualism, suggesting that everything in the universe can fit into two categories, physical or mental, and that these exist semi-independently of one another. The brain belongs to the physical realm of existence, while consciousness belongs to the mental; when they meet in space (and perhaps time?), there is what Humphrey calls a “handshake across a metaphysical divide”.
Other philosophers have been reluctant to support dualism and instead acknowledge monism, the idea that mind and brain are of the same fundamental material of existence,3 and in some extreme cases, physicalism (or functionalism), the idea that consciousness and the accompanying neural processes are in fact one and the same. Searle comments, “we have to abandon dualism and start with the assumption that consciousness is an ordinary biological phenomenon comparable with growth, digestion, or the secretion of bile” (1995, 60). From the general philosophical idea of monism stems the more modern and scientifically-based idea of neuroreductionism, that consciousness and mental states alike can be reduced, or broken down, to the specific neural processes or elements of which they are fundamentally composed. This idea presents a dilemma, however, as modern British philosopher Colin McGinn aptly noted: “Somehow, we feel, the water of the physical brain is turned into the wine of consciousness, but we draw a total blank on the nature of this conversion. Neural transmissions just seem like the wrong kind of materials with which to bring consciousness into the world... The mind-body problem is the problem of understanding how the miracle is wrought” (McGinn, 350). Searle corroborated the problem with a question: “How exactly do neurobiological processes in the brain cause consciousness?” (1995, 60)4 Both suggest that the current search is for a hint as to how subjective mental states arise from or are equivalent to electrical activity in the brain.
It seems that the vast array of speculations, ideas, and hypotheses regarding consciousness stems from the various differences that scientists have in defining or describing the phenomenon. In their book Wet Mind, Kosslyn and Koenig cite one of the major impediments to a productive discussion of consciousness as being a disagreement about what exactly is being discussed. Unfortunately, “It is impossible to fix the referent of the term - one cannot point to consciousness the way one can point to a book or even a brain, so there is no way to resolve disagreements. . . However, people can be conscious of making a decision, being in love, seeing red, having a pain in the lower back, and so forth. And they clearly can distinguish being conscious of these events from not being conscious of them. The fact of consciousness should not be in doubt; there is something to be explained, not merely explained away” (431).
Some suggest that consciousness is simply “the state of being aware of our thoughts and behaviors” (Bloom, 272). Searle reflected on the very nature of conscious experience and when it arises: “The enormous variety of stimuli that affect us - for example, when we taste wine, look at the sky, smell a rose, listen to a concert - trigger sequences of neurobiological processes that eventually cause unified, well-ordered, coherent, inner, subjective states of awareness or sentience” (1995, 60). Chalmers holds a similar view: “when you look at this page, you are conscious of it, directly experiencing the images and words as part of your private, mental life... At the same time, you may be feeling some emotions and forming some thoughts. Together such experiences make up consciousness: the subjective, inner life of the mind” (80). Baars, on the other hand, asserts that consciousness is knowing or being aware of external “events.” He uses an operational definition of consciousness that considers people to be conscious of an event if (1) they can say immediately afterwards that they were conscious of it and (2) we can independently verify the accuracy of their report (15). His definition presupposes language and volition (the ability and inclination to verbally describe a conscious experience) and the use of metacognition (to become conscious of the conscious experience, in order to tell of the event). Mountcastle (1975) defined consciousness as awareness of one’s own physical actions; language is the vehicle by which thought and action are made available to conscious awareness. The common thread to all these definitions of consciousness appears to be some kind of inner awareness or personal experience, brought about by our ability to sense our external environment. It would seem prudent, then, that we begin with human sensory systems in our attempt to grasp some understanding of a brain-based entity whose vagueness ironically arises from overdefinition.
And yet, by placing a primary focus on sensory input and processing we immediately come upon our first major obstacle: the “binding problem.” Suppose that an individual is walking on a street in a city. He is bombarded by a number of simultaneous stimuli of the senses, including sights, sounds, smells, temperature, and the body’s own activity or movement. Each sense, however, provides information about a different element of the environment. If we had additional sensory abilities, like the chemical sensitivity of dogs’ noses, then we would be able to detect different things in our environment and further enhance our “picture” of the world around us. Additional information provides better interpretation. Sensory organs, like the eyes, do not transmit whole images of the visual domain, but rather features, or different characteristics such as color, contrast, shape, and motion. These fragmented pieces of information are processed in parallel and sent via nerves to the brain.
The sensory data from various domains proceed to the thalamus, a relay station of sorts that categorizes and sends the information to various locations in the cerebral cortex on the outer surface of the brain. Each sense has its own specific relay apparatus in the thalamus, as well as a general location on the cortex. Thus, at each given moment, neurons all across the cortex are activated, based upon various sensory inputs from our immediate environment. The “binding problem,” then, is that we have no definitive explanation for how these various bits of sensory information are integrated to form a complete whole, a single human perception or image of the world around us. To some neurobiologists, this synthesis of sensory information is what is known as consciousness.
There is no “screen” in the brain. There is no area where sensory information from various modalities could be integrated to form a complete picture of the external world. Further, there is no viewer to observe it even if there were. This has led multiple neurobiologists to explore other non-physical means of sensory integration. Jacobsen suggested that “perhaps there is not such an integrative control center, only a sequence of events entrained by local coupling combined with more widespread coupling between events resulting in a process which is a large-scale pattern of events in space-time” (121). This is not as abstract as it sounds, for according to Jacobsen, “reductionist methodology should, in principle, be able to discover the couplings between events, and to reduce the large-scale pattern of events to small-scale patterns” (121).
This general notion of sensory integration in the form of “patterns” of electrical activity in the brain is not uncommon among modern neurobiologists. In attempting to formulate a neurobiological model of human consciousness (which seems entirely plausible given its definition as a synthesis of sensory input), researchers have focused on neural organization and processing within the cerebral cortex itself as a starting point. Take the cells of the visual cortex, for example, which are arranged in columns corresponding to different elements in a specific part of the visual field. Individual columns become activated upon proper stimulation by impulses arriving from the thalamus. The result is essentially a rhythm of electrical activity coordinated and synchronized by the visual relay station of the thalamus. Neurobiologists Crick and Koch suggest that herein lies consciousness, arising from oscillations in the cerebral cortex which become synchronized as neurons fire 40 times a second; two pieces of information are essentially bound in time by synchronous neural firing (84).
Llinás proposed a mechanism early in 1995 for sensory integration that also has to do with this “rhythm” of neuron firing in the cerebral cortex. He believes that the brain has a scanning system that sweeps across all areas of the cerebral cortex every 12.5 thousandths of a second. The scan manifests in a wave of nerve impulses created by the intralaminar nucleus, a circular grouping of cells in the thalamus. What the electrical scan does is stimulate all of the synchronized, active cells of the cerebral cortex which at that instant are recording specific sensory information. These cells then respond by instantaneously sending signals back to the thalamus, all of the signals at that precise moment in time together reflecting the specific pattern of neural activation based on the precise sensory stimulus.
In the example presented earlier, suppose a moving car is in the visual field of the man walking down the street in the city. Certain columns of cells in the visual cortex are being stimulated, and respond to the electrical scan of the thalamus. As the car moves away and is no longer seen, the cells stop being stimulated and do not respond to the scan. Hence, Llinás suggests, all of the responses received by the thalamus for one scan cycle constitute a single instant of consciousness, with continuity maintained by the sheer speed of the process. “The data from all the body’s senses could come together not in place. . . but in time, the time of the thalamus’ scanning cycle. Consciousness, by this theory, is the dialogue between the thalamus and the cerebral cortex, as modulated by the senses” (Blakeslee, 1).
Hypothesized solutions to the binding problem address the synthesis of sensory information into a complete, internalized mental picture of the environment, but do not seem to address the very nature of consciousness as others see it. For those like Chalmers, an answer to the binding problem is an answer to one of “the easy problems of consciousness” (81). The answers to “easy” questions, suggests Chalmers, elucidate mechanisms of the cognitive system that are associated with consciousness. The “hard problem” remains: how can a vast network of electrical impulses within the brain produce the subjective, personal, indescribable feelings associated with sensory experience? More than a handful of theorists refer to the “phenomenon” of being moved by the intense red of a rose, the feeling of pain upon pinching oneself, or hearing the twang of a harp. Is this inner mind the real “mystery of consciousness” (Searle, 1995, 60) and not merely sensory integration?
For many the answer is yes, but the problem is now more difficult than we imagined. Neuroscience and neuroreductionism provide physical or structural explanations by virtue of their nature and may not be best suited to explaining the subjective experience that we seek to understand. Hypothetically, even if scientists could functionally understand every neural or cognitive process associated with consciousness, we would still not know how or why these functions are accompanied by subjective experience. Does that mean we have no choice but to accept the dualism that Descartes proposed centuries ago? Not necessarily, according to Searle, who asserts that our greatest impediment to properly approaching consciousness is our inability to recognize “non-event causation,” and that cause (brain processes) and effect (consciousness) do not necessarily have to be two different things. Brain processes could “cause” consciousness in the sense that consciousness “is itself a feature of the brain,” much as the solidity of wood is a feature of wood caused by the behavior of its molecules (60). It is this fine distinction that allows us to bridge the great explanatory divide we faced earlier: that is, how subjective states of sentience arise from physical processes in the brain. Neurobiology can explain sentience, but not in the direct, functional way that we had hoped.
Hence, we are left to conclude one of three things: 1) consciousness is an emergent property of the whole system, a property that could not be deduced either from the properties of the parts studied in isolation, or from the properties of parts that could be predicted to come into play as a result of interactions between the parts5; 2) consciousness itself is a nonreducible fundamental entity accompanied by constant activity of the brain’s physical processes; or 3) consciousness is neither an emergent property nor a fundamental entity,6 but somehow directly caused by neural elements and processes in a way that is currently beyond our realm of understanding. I prefer to focus on the first two conclusions. The first notion does not imply reductionism, because emergent properties are not equatable to the systems or processes from which they arise. Rather, the notion suggests that there is a specific type of causality that does not involve discrete events occurring one before the other. The second idea does not assume dualism, because dualism would assert that consciousness lies outside the realm of that which is explainable by science. Rather, it asserts that when existing fundamental laws cannot explain a common entity, new laws must be devised to do so. Neither of these two conclusions, I believe, necessarily excludes the other: an emergent property can be one that is fundamental (explainable by, but unable to be reduced to, a specific system), and a fundamental entity can be one that (unexpectedly) arises from a uniquely organized system.
Perhaps the best way to approach this modified view of consciousness is to organize a framework or set of guidelines for any proposed theory. Kosslyn and Koenig have done just that by presenting a specific set of criteria for an adequate theory of consciousness in their book, Wet Mind. Kosslyn and Koenig refer to consciousness as the “phenomenology of experience,” and note, as I suggested earlier, how easy it is to skirt the issue or “conflate the information-processing or neural correlates of consciousness with the phenomenological texture of the experience itself.” They focus on the nature of the private “feel” of human experience without attempting to characterize the various phenomena in detail; rather, they maintain that consciousness differs from brain states. They propose the following requirements for a proper theory of consciousness, which I have summarized and commented upon below:
1) Nonreducibility: Consciousness and brain events are completely different in nature: “phenomenological experience cannot be described in terms of ion flows, synaptic connections, etc.” (432). They provide the analogy of consciousness being the light produced by a hot filament in a vacuum. The specific events that cause the light can explain how it physically arises, but cannot provide an accurate description of the phenomenon itself. Similarly, consciousness cannot be equated with a series of events in the human brain.
This first criterion satisfies my view of consciousness as either being an emergent property (essentially a feature of a system of neural processes), a fundamental entity (inexplicable with existing physical laws or properties), or both. Nonreducibility, however, may be harmful if incorrectly perceived as upholding the idea of vitalism: the view that the workings of the human mind will never yield to the reductionism of modern science. Vitalism is extremist because it ignores the possibility of discovery: perhaps we are only limited by what we have already learned, or by our present scientific capabilities. Asserting that consciousness is “nonreducible” to specific events in the brain does not intrinsically posit that consciousness cannot be understood in a more fundamental way, as a feature of those very events or as an entity deserving of new fundamental laws altogether.
The light analogy that the authors present is fascinating (essentially portraying light as an emergent property that is nonreducible to the physical effects that create it), but some might interpret it differently: at one point in time visible light was largely believed to be nonreducible, but the advent of the prism clearly revealed that visible light is composed of electromagnetic waves of different wavelengths. Einstein further theorized that light is composed of photons (packets of energy) moving as electromagnetic waves, an explanation for light’s seemingly unresolvable dualistic wave-like and particle-like properties. These advances in our understanding of light came with time and scientific achievement. Hence, is it advisable to assume that consciousness cannot be reduced? Well, yes . . . not reduced to physical events in the brain, that is. Doing so would not be reduction in the same manner in which we reduced light (to parts with similar fundamental characteristics), because neural electrical activity possesses characteristically different properties than consciousness and simply cannot comprise the ingredients of our subjective inner mind7. The intrinsic nature of consciousness simply defies reduction to brain activity.
There is room to expand the authors’ “light” analogy even further, in order to make a still more important clarification. Supposing that light is fundamental and cannot be reduced would not affect the admission that there are actually multiple “causes” of light (ways of producing it, the hot filament in a vacuum being merely one of them). Similarly, the acceptance of consciousness as nonreducible does not ipso facto exclude the idea that consciousness results or emerges from some system or process. Because nonreducibility does not deny the existence of causation, the two concepts must be treated separately. In this way, those who uphold the nonreducibility of human consciousness can ponder whether animals are conscious, the possibility of consciousness in non-living entities, and so on, without facing any inherent contradiction in ideology. Granting nonreducibility, science’s foremost pursuit should be to discover the specific relationship between human neurophysiological activity and human consciousness (whether it be simply associative, that of emergent property and system, or even that of direct event-related cause and effect), with the hope that something fundamental about the nature of this mysterious entity might be revealed in the process.
2) Unique Role of Consciousness: Consciousness can be non-functional, or epiphenomenal. Like steam emerging from a pot of boiling water, consciousness may simply arise out of brain processes with specific functions. Alternatively, a theory can assert that consciousness has some definitive function, which is far more difficult to explain. This is because, as a corollary to point one, any function of consciousness must be one that brain events cannot accomplish. From the evolutionary viewpoint as well, it would be unclear why consciousness would emerge and be maintained if its functions could easily be carried out by brain processes.
3) Selectivity: It is interesting to note that only particular mental activities are accompanied by some sort of conscious experience, and a theory of consciousness should explain why. For example, Kosslyn and Koenig remind us that we have no idea how we construct images of objects a little at a time, how we reach an instantaneous decision, or how we identify something we have seen before. We simply don’t know, or aren’t aware of, the sequences of events behind many specific mental outcomes.
4) Association with Brain States: We do know for certain that consciousness arises from activity in the brain, especially after years of research on changing brain states and their effects on consciousness. Consciousness is altered when one takes drugs, and electrical activity in the brain always precedes an individual’s consciousness of a given stimulus. Thus, brain processes are requisite for consciousness, although we cannot equate the explanation of one with the other, or explain why this requisite relationship exists.
5) Impact on Brain States: This is obviously the most controversial element of the authors’ consciousness construct. Kosslyn and Koenig reason that in a theory where consciousness is functional, it would affect behavior in some way by affecting the brain. Thus brain activity and consciousness affect each other. They admit that this characteristic is a terribly challenging one to incorporate into a theory; indeed, although consciousness is not a physical event (as far as we know), it arises from physical events and even has the capability to alter them. Hence the puzzle remains.
Kosslyn presents a speculative theory to demonstrate how a model of consciousness can comply with these five seemingly stringent and disparate tenets. Under the “Parity Theory,” as it is known, consciousness essentially serves as a mechanism for determining whether the brain is functioning properly. The idea arises out of a simple computer metaphor. Computers store information in bytes, each of which is composed of eight bits. Because computers use binary code, each bit can signify either a 0 or a 1, and together the eight bits will identify something in particular, such as a word or a symbol. Often, however, this will be done with only seven bits, and the eighth becomes what is known as a “parity bit.” Depending on the machine, this last bit will be either a 0 or a 1, chosen to make the sum of all the bits odd or even. By labeling bytes in this way, the computer can readily detect whether an error has occurred in its storage of information. By analogy, Kosslyn suggests that consciousness is a parity check, an indicator of proper neural function. “According to this conception, consciousness is an interaction among physical energies produced by the brain, which provides a sign that neural activity in diverse locations is mutually consistent” (436).
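Kosslyn's computer metaphor is easy to make concrete. The following is a small illustrative sketch in Python (my own, not drawn from Wet Mind): it appends an even-parity bit to seven data bits, then shows how the check exposes a single flipped bit in storage.

```python
def add_parity_bit(data_bits, even=True):
    """Append a parity bit so the total count of 1s is even (or odd)."""
    ones = sum(data_bits)
    parity = ones % 2 if even else (ones + 1) % 2
    return data_bits + [parity]

def parity_ok(byte_bits, even=True):
    """Check whether a stored byte still has the expected parity."""
    total = sum(byte_bits)
    return (total % 2 == 0) if even else (total % 2 == 1)

# Seven data bits for the ASCII letter 'A' (65 = 1000001 in binary)
data = [1, 0, 0, 0, 0, 0, 1]
stored = add_parity_bit(data)   # even parity: two 1s, so the bit is 0
assert parity_ok(stored)

# A single-bit storage error flips one bit; the parity check detects it
corrupted = stored.copy()
corrupted[3] ^= 1
assert not parity_ok(corrupted)
```

As with the consciousness analogy, the check signals only that something is inconsistent, not which bit went wrong or why.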
The theory seems to present a possible function for consciousness, but so far it has neither provided a causal mechanism nor stated anything definitive about the phenomenon and its properties. Kosslyn proceeds to fill in the blanks systematically and logically. Neural discharges in different brain areas that process the same stimulus oscillate in phase together at 40 Hz (Gray and Singer, 1989). These oscillations may in fact be responsible for associating different representations in the various regions of the brain (as Crick and Koch, 1995, have suggested as well). For example, take the visual stimulus of a large, blue automobile moving to the right. Each of the automobile’s characteristics has a specific representation in a specific brain area, and if all the stimulated neurons oscillate in firing in the same way, “this could indicate that representations should be conjoined in associative memory” (438).
According to Parity Theory, consciousness has the same relation to these brain events as a chord does to the individual notes played on a guitar. Kosslyn finally proposes that “consciousness arises from the interaction of the electromagnetic rhythms set up in individual brain loci, but cannot be reduced to the individual brain events any more than a chord can be reduced to individual notes” (438). Hence, we have satisfied rule number one. Consciousness also has the unique function of signifying discord, if certain neural processes do not function together properly. A neural process cannot monitor another neural process, because some mechanism would always be needed to check the monitoring mechanism. Thus, “consciousness has a unique role; it performs a function that cannot be replaced by a brain process” (439).
In terms of selectivity, Kosslyn refers to the speed of a neurological process as the deciding factor in whether it will be consciously detected. He bases this assertion on studies demonstrating that conscious experiences always lag behind the brain events we presume evoke them (Libet, 1987; Libet et al., 1967). Thus, a delay period is necessary before each conscious experience, and we are able to experience only those processes that are relatively slow. In evolutionary terms, functions essential to virtually all multicellular organisms (such as instantaneous decision-making) will proceed quickly and remain undetected, thanks to millions of years of fine-tuning and improvements in efficiency. The abilities and neurological processes that are relatively new and unique to humans are the slower ones that we consciously experience. The brain state simply has to exist long enough for us to become conscious of the result. As Kosslyn comments, “we are conscious of the relatively slow processes that coordinate perceptual, memory, and motor events during reasoning, a relatively recent development on the evolutionary scale” (440).
The fourth rule requires association with brain states. Indeed, the theory proposes that consciousness cannot exist without electromagnetic rhythms in different parts of the brain. Lastly, there must necessarily be an impact on brain states. Continuing the guitar analogy, a dissonant chord results from a note being out of tune. The Parity Theory asserts that dissonance arises if different portions of the brain are not oscillating compatibly. When this happens, the brain areas that produce the electromagnetic rhythms are affected and hence they do not process information normally. Thus, consciousness seems fragmented or distorted. Drugs can have a similar effect by disrupting rhythms directly, producing abnormal rhythmic interactions. The most common result is a “reset” or starting over.
Kosslyn’s conception of consciousness is truly unique, for he does not stop at proposing a solution to the binding problem of sensory integration, but satisfies his own five criteria for a theory of consciousness by explaining the phenomenon without neuroreduction or any equation of consciousness with a neural state. His theory, however, is not free of problems. First, there is a point of contradiction between his Parity Theory and one of his five consciousness criteria. Kosslyn, without using the actual buzzwords, defines consciousness as a non-reducible emergent property or feature of the system of oscillating neural firing in the cortex; therefore neural processes effectively cause consciousness and subjective experience. The implication of the principle of emergent properties, however, is that neural processes cannot be influenced in any way by the emergent property. This conflicts with Kosslyn’s consciousness construct (which presents consciousness as having the ability to affect brain states) and hence with his Parity Theory.
Second, his notion of consciousness completely disregards sensory experience. It seems as if he perceives consciousness as simply being awake, or aware of external stimuli or certain slow-moving thought patterns. This is problematic because, as I indicated earlier, most researchers in the field define the problem of consciousness in terms of the subjective, inner responses that accompany sensory processing in the brain. The feeling one gets from hearing a certain piece of music, or from eating a piece of chocolate, cannot readily be explained by Parity Theory, causally or otherwise. If the function of consciousness is simply to check on normal brain function, then why do we experience such elaborate, rich and diverse inner subjective states? Surely, having personal feelings couldn’t have anything to do with maintaining proper function of the neural system. How would the private, subjective feeling of pleasure upon eating cake be an indicator of normal neural function? Furthermore, where would one draw the line between what is an indicator of normality and what is irregular? Kosslyn never addresses these notions. Perhaps the problem here is again one of the definition of consciousness, in which case the preceding questions shouldn’t even be asked. In any event, it seems that satisfying the five elements of Kosslyn’s consciousness construct is not enough: a theory of consciousness must inherently address private, subjective, sensory experience.
At this juncture, I feel at a loss as to how I could proceed any further than I have already come. Until science reveals something more penetrating about the relationship between brain processes and consciousness (the neurobiologist’s “holy grail”), we can only cross the explanatory gap by one of the three ways I mentioned earlier: consciousness is an emergent property or feature of a neural system (epiphenomenalism), and/or it is a fundamental entity incapable of reduction to physical events in the brain (nonreductionism/fundamentalism), or it is reducible in a way that science cannot yet explain. In any case, we can identify a “cause” for consciousness, simply using our earlier loose interpretation of the term. Each of the first two schools of thought presently remains incomplete: most epiphenomenalists cannot explain how mental processes arise from the brain’s physical activity, but maintain that emergent properties only appear when matter is organized a certain way; fundamentalists have yet to present any substantive hypothesis for what the “new laws” of consciousness might be. In response to the question of whether animals have minds, epiphenomenalists would suggest that different mental phenomena arise as the brain gets more and more complex. Yet it is difficult for them to explain how two organisms with similar brain size and complexity, and similar nervous system organization, have such disparate abilities to learn in laboratory situations (Jacobson, 120). Fundamentalists, interestingly, would not be able to answer the question at all solely from the implications of their theory that consciousness is completely irreducible to anything. They have, as of yet, placed no constraints on the exact nature of its existence other than an association with neural processes.
That is why, in part, Jacobson aligns his comments on consciousness with my third option (which I discarded earlier), the theory of materialists who regard their concept of the mind as fully deducible from the laws of chemistry and physics, “if the necessary data were in our possession and if we were able to understand their meaning” (120). He cautions the scientist-philosopher in pursuit of the seemingly unsolvable question of the existence of the mind and the inner, private, subjective states of humans: “We are not exempt from the limitations imposed by the process of evolution of the human brain. Our mental capacities are only what they now are by virtue of the contingencies of survival of the fittest of a multiplicity of forms. There is no basis for wishful thinking that the human brain is endowed with powers to understand everything that may ever be discovered” (120).
So is that it? Shall we give up on the idea that consciousness can be precisely defined or broken down? Nonsense. One of the most remarkable characteristics of the human pursuit of scientific knowledge is that we do not know the inherent limits or boundaries of our understanding. Some note the irony that we have been able to elucidate so many complex phenomena in the world that surrounds us, yet remain bereft of any convincing knowledge of that mysterious and intimately familiar phenomenon that lies within our own brains. Perhaps that is what continues to make the search so desperate and relentless.
Baars, Bernard J. (1988). A Cognitive Theory of Consciousness. (Cambridge, England: Cambridge University Press).
Blakeslee, Sandra. (March 21, 1995). Behind the Veil of Thought: A New Theory of Consciousness and How the Brain Might Work. Science Times, The New York Times, C1, C10.
Chalmers, David J. (1995). The Puzzle of Conscious Experience. Scientific American December: 80-86.
Crick, Francis and Koch, Christof. (1995). Why Neuroscience May Be Able to Explain Consciousness. Scientific American December: 84-5.
Descartes, René. (1986). Meditations on First Philosophy. trans. John Cottingham. (Cambridge, England: Cambridge University Press).
Gray, C.M., and Singer, W. (1989). Stimulus-specific Neuronal Oscillations in Orientation Columns of Cat Visual Cortex. Proc. Natl. Acad. Sci. USA 86: 1698-1702.
Humphrey, Nicholas. (1993). A History of the Mind: Evolution and the Birth of Consciousness. (New York, NY: Harper Perennial Publishers).
Jacobson, Marcus. (1993). Foundations of Neuroscience. (New York, NY: Plenum Press).
Kosslyn, Stephen M. and Koenig, O. (1992). Wet Mind. (New York, NY: The Free Press Publishers).
Libet, B. (1987). Consciousness: Conscious, Subjective Experience. In Encyclopedia of Neuroscience. G. Adelman, ed. (Boston, MA: Birkhauser Press).
Libet, B., Alberts, W. W., Wright, E.W., and Feinstein, B. (1967). Response of human somatosensory cortex to stimuli below threshold for conscious sensation. Science 158: 1597-1600.
Longuet-Higgins, Christopher. (1992). Vagaries of Thought. Nature 360: 117.
McGinn, Colin. (1989). Can We Solve the Mind-Body Problem? Mind 98: 349-66.
Searle, John R. (1992). The Rediscovery of the Mind: Representation and Mind. H. Putnam and N. Block, series eds. (Cambridge, MA: The MIT Press).
Searle, John R. (1995). The Mystery of Consciousness. The New York Review of Books 152, 17-18: 60-66/54-61.
Wolff, Simon P. (1992). A History of the Mind. The Lancet 340: 95.
Wolpert, Lewis. (1992). A Consciousness of Ourselves. New Scientist 135: 45.
1. New Scientist, September 26, 1992, v135, n1840, 45.
2. Actually, the idea that all sensations are united somewhere in the brain, the so-called “sensus communis,” originated sometime in the fourth and fifth centuries with St. Augustine (354-430 AD) (Jacobson, 19).
3. Russell (1921, 1927) combined the two extremes in what has been called “double-aspect monism,” asserting that mental events and neural events are simply two reflections of some deeper underlying reality.
4. His use of the word “cause” would for now be better rendered as “accompanied,” simply because causality between brain activity and consciousness has never been proven. By virtue of its manifestation in the brain, consciousness has too often been explained as “caused” by the microlevels of neurons, synapses, neuron columns, and cell assemblies that lie at the heart of the brain.
5. Not because it is difficult to do so in practice, but because it is impossible to do so in principle (Jacobson, 119).
6. According to Chalmers, a fundamental entity possesses its own fundamental laws that must not interfere with those of the physical world. Rather, “the laws serve as a bridge, specifying how experience depends upon underlying physical processes.” A complete theory will have two components: “physical laws,” telling us about the behavior of physical systems, and “psychophysical laws,” explaining how those systems are related to conscious experience.
7. Perhaps one day we may find that consciousness is indeed comprised of parts with similar fundamental characteristics, but we haven’t even finished defining those characteristics yet.
Sameer Chopra '97(email@example.com) of Lowell House, is an AB/AM candidate in Biological Anthropology and Neuroscience. He is currently working on a project to determine the evolutionary significance of interspecific cortical neuron number variation in mammals.
Analogies and Categories of Consciousness
Consciousness as Serial Emulation by the PDP Brain: Dennett's Idea, Churchland's Rebuttal, the Real Debate, and a Resolution
Robin S. Goldstein
I. Consciousness as serial emulation: The question at hand
In his 1995 book The Engine of Reason, the Seat of the Soul (hereafter Engine), philosopher Paul M. Churchland enters the recently rekindled debate over what type of process makes up consciousness. He takes issue with aspects of the explanation of consciousness that Dennett presented in his 1992 work Consciousness Explained (hereafter CE). In sum, Dennett proposes that perceived consciousness consists in a virtual serial machine implemented on top of the parallel-distributed-processing (PDP) mechanism of the brain, and Churchland finds this notion “deeply confused” (265).
Here I will first explain the differences between PDP and serial processing and define emulation (section II). Next, I will set out the theory of Dennett (section III) and the criticism and alternate theory proposed by Churchland (section IV). I will then investigate the real debate and its differences from the superficial dispute (section V), including an analysis of how Dennett’s argument holds up to Churchland’s seven criteria for a theory of consciousness. I find that while Churchland partially attacks a slightly different hypothetical argument over whether true serial emulation occurs and whether that matters, Churchland and Dennett only substantively disagree on whether language is the key aspect of our consciousness, and thus on whether humans and animals enjoy broadly the same kind of consciousness.
Finally, in section VI, I will set out my own opinion on the debate: First of all, the speed of PDP networks makes emulation of a serial machine much faster than the converse, so it is quite possible that such emulation does occur. I find that Dennett’s virtual-serial-machine theory in itself stands up to Churchland’s attack. But the second part of Dennett’s theory—the emphasis on linguistic value and thus the “severely truncated” nature of animal consciousness—does not withstand such scrutiny unscathed. Since consciousness, at our current level of understanding, is a subjective phenomenon anyway, linguistic knowledge does radically change the kind of consciousness any human enjoys, but it does not necessarily follow that higher significance—a difference in level rather than kind—can be judged from the capacity for language: broad distinctions among categories of consciousness do not carry much weight.
II. Preliminary Explanations
In order to analyze the arguments of Dennett and Churchland, I will first explain the differences between Parallel Distributed Processing and Serial Processing, and then I will mention computer-style emulation.
§ 1. Parallel Distributed Processing vs. Serial Processing
Numerous complex definitions of Parallel Distributed Processing now circulate, but basic definitions of PDP do not make up the main part of CE upon which Dennett and Churchland disagree. It is thus fair enough to take Churchland’s rough and simple explanation, found on pages 11–15 of Engine, as our operative definition in this case. Parallel Distributed Processing is, in a sentence, “transforming one pattern into another by passing it through a large configuration of synaptic connections” (Churchland, 11). Below I have sketched out a comparison of the traits of parallel-processing versus serial-processing machines:
Table 1. [PDF version only]
PDP systems may have been selected by evolution for humans and “throughout the animal kingdom” (Churchland, 11) because of their greater plasticity, accommodation for error due to imperfections in biological life, and different strengths and weaknesses in types of computations. But it is not hard to see why current desktop computers are serial machines. Serial machines are capable of being explicitly instructed to behave in a certain way, while parallel-distributed-processing machines must (generally) be taught certain patterns of behavior through adjustment of synaptic weights. (One such method that exists for PDP neural-network models emulated on serial machines is called back propagation, or backprop, by which the system of synaptic connections is slowly fine-tuned to give a certain kind of output from a certain kind of input. But it still remains the case that serial systems are vastly easier to program than parallel ones.) Of course, it is accepted by both Dennett and Churchland that there is no external, thinking agent willfully adjusting synaptic weights of the brain’s massive PDP network.
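The back-propagation idea mentioned above can be sketched in miniature. The Python below is an illustrative sketch of my own, not any particular simulator: a single sigmoid unit whose two "synaptic weights" and bias are repeatedly nudged along the error gradient until its input-output behavior matches the logical OR function. (A full backprop network would propagate these error signals through hidden layers as well.)

```python
import math

def sigmoid(x):
    """Smooth 0-to-1 activation function for the model neuron."""
    return 1.0 / (1.0 + math.exp(-x))

# Training data: the OR function as (inputs, target) pairs
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# Start with arbitrary synaptic weights and a bias term
w = [0.1, -0.2]
b = 0.0
rate = 1.0  # learning rate: size of each weight adjustment

for epoch in range(2000):
    for (x1, x2), target in examples:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # Error gradient for a sigmoid unit under squared-error loss
        delta = (out - target) * out * (1 - out)
        # Nudge each weight slightly downhill on the error surface
        w[0] -= rate * delta * x1
        w[1] -= rate * delta * x2
        b -= rate * delta

# After training, the unit's rounded outputs reproduce OR
for (x1, x2), target in examples:
    print((x1, x2), round(sigmoid(w[0] * x1 + w[1] * x2 + b)))
```

The key point for the Dennett-Churchland debate is that no explicit instructions are ever written: the desired input-output behavior is gradually tuned into the weights by an external procedure, which is exactly what makes parallel systems harder to "program" than serial ones.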
§ 2. Emulation
A Turing machine (a label that all serial desktop computers fit), because of its nature, can emulate any other kind of machine. For modern serial computers, emulating a different processing method means writing a program in the kind of code native to the computer (in this case, serial code) that, through software, has the front-end (input-output) behavior of the emulated machine. This is commonly done across varieties of serial computer architecture (for example, the emulation of 680x0-type code on PowerPC machines, which have a different native instruction set than 680x0 machines), and it is also (less commonly) done with entire processing methods (such as parallel-processing emulators implemented on PC-compatibles). Speed is the main bottleneck of emulation, of course, as can be seen above in Table 1. Thus the only problem with the emulation of a parallel system B by a serial one A is that the emulated B is generally so much slower than the real B that its usefulness is drastically damaged. The converse—the emulation of a serial system by a parallel one—is also generally accepted by both philosophers as at least possible.
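The notion of software emulation can likewise be illustrated with a toy sketch (hypothetical, in Python): the host program below reproduces the front-end behavior of a tiny three-instruction serial machine, interpreting its "native" code one instruction at a time. The host and the emulated machine share no instruction set; only the input-output behavior is preserved.

```python
def run(program, acc=0):
    """Interpret a list of (opcode, operand) pairs strictly in sequence,
    mimicking a serial machine with a single accumulator register."""
    pc = 0  # program counter: one instruction executes at a time
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
        else:
            raise ValueError(f"unknown opcode {op!r}")
        pc += 1
    return acc

# (3 + 4) * 2, expressed in the emulated machine's code
result = run([("LOAD", 3), ("ADD", 4), ("MUL", 2)])
print(result)  # 14
```

This is the sense in which Dennett's "virtual machine" claim is coherent in principle: a system of one architecture can host the behavior of another, with speed as the usual cost.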
III. Dennett’s serial-emulation theory as set out in Consciousness Explained
One of the two most important claims in Dennett’s CE can be summarized as follows:
Human consciousness . . . can best be understood as the operation of a “von Neumannesque” virtual machine implemented in the parallel architecture of a brain that was not designed for any such activities. The powers of this virtual machine vastly enhance the powers of the organic hardware on which it runs, but at the same time many of its most curious features, and especially its limitations, can be explained as the byproducts of the kludges that make possible this curious but effective reuse of an existing organ for novel purposes [Dennett 210; his italics].
Dennett claims that human consciousness is “realized in the operation of a virtual machine created by memes in the brain” (210). The term memes, coined by Richard Dawkins in his popular 1976 book The Selfish Gene, refers to a sociocultural analogue to genes. Memes can be cultural ideas, phrases, thought-processes or ways of doing things, transmitted through communication with a replication, mutation and selection speed higher than that of genes but with a copying fidelity significantly lower. Specific human languages (though likely based upon an innate instinct) fall under the meme category. The memes, claims Dennett, create the brain’s equivalent of a special-purpose machine, implemented on top of its existing architecture but more specifically geared toward a certain purpose.
In explaining the implementation of consciousness on top of the brain’s PDP architecture, Dennett draws a parallel with computer software: software is “made of rules rather than wires” (Dennett, 211) just as he claims consciousness is made of meme-rules on top of a synaptic network.
Dennett supports his claim with two basic arguments:
(1) The human mind is the inspiration for the Turing and von Neumann machines, the basic serial architecture for computers. (2) Computer programmers find it dramatically easier to code a serial von Neumann machine than it is to program the parallel computers currently in development.
He continues that our mind-processes in solving computational problems are superficially serial: there is a conscious window, the “von Neumann bottleneck” (214), in which the current calculation or idea is processed, and, as we can tell by simple introspection, we think in an essentially sequential way.
Much later, in the final chapter of his book, Dennett extends his theory to include implications that are much broader in scope: “Language … plays an enormous role in the structuring of a human mind, and the mind of a creature lacking language … should not be supposed to be structured in these ways” (Dennett, 447). Dennett concludes that, although he claims to believe consciousness is not a black-or-white property either fully possessed or not possessed by any creature, the presence of language is a particularly important feature of the kind of consciousness we know and speak of: “The sort of consciousness such [languageless] animals enjoy is dramatically truncated, compared to ours” (Dennett, 447). As we shall see, this is where Churchland’s biggest disagreement will come in.
IV. The dispute as set out by Churchland’s response
Churchland vehemently opposes Dennett’s theory: “I think [Dennett’s view of consciousness] is deeply confused … its central mistake … embodies a profound irony” (265).
§ 1. Argument against the “Virtual Serial Machine” model
In the first aspect of his argument, Churchland takes issue with the idea of the PDP brain working in emulation mode in the formation of consciousness. Churchland claims that Dennett does not mention at all the theory of recurrent PDP networks—the idea that descending feedback is a mechanism within the system for adjusting weights, much as a modern serial-emulated neural-network system adjusts synaptic weights through externally-computed back-propagation (based on how suitable an output is to the external homeostatic mechanism). He claims that this offers a better explanation of the “temporal dimension” of consciousness.
Churchland’s next criticism is that input-output behavior implies nothing about the process used to achieve it. That is, even if the end product of a computational process—in this case, the output “Joycean stream” (as Dennett terms it) that we associate with consciousness—appears serial, that appearance says nothing about how the output was produced. Even if the input-output behavior of what we term “consciousness” seems to emulate a serial process, Churchland claims, emulation does not require that the PDP neurons actually engage in discrete-state, serial processes. Churchland asserts that Dennett must find genuinely serial procedures within the biological brain to support his theory.
§ 2. Argument against language as the foremost aspect of consciousness
In the second aspect of his argument, Churchland construes Dennett’s theory as a misguided retreat to the disproved Aristotelian prototype of conscious human cognition, which he claims is a “prototype of language-like activity” (Churchland, 265). Churchland points out that PDP has come to dominate modern neuroscientific theory here in the closing decade of the twentieth century:
Dennett (1) attempts to pull that failed [Aristotelian language-like] prototype back into the spotlight, (2) makes it the model for human consciousness, (3) gives parallel distributed processing a cursory pat on the back for being able to simulate a “virtual instance” of the old linguistic prototype, and (4) deals with his theory’s inability to account for consciousness in nonlinguistic creatures by denying that they have anything like human consciousness at all (Churchland, 266).
Finally, Churchland—in what I shall find to be the heart of his disagreement with Dennett—attacks the implication of Dennett’s theory that animals could not possibly enjoy the same kind of consciousness that humans enjoy. With suitably recurrent PDP networks, and the PDP networks alone, accounting for consciousness, Churchland concludes, not only are nonlinguistic kinds of consciousness (such as mental imagery and auditory consciousness) slighted, but nonlinguistic animals are slighted in being labeled as having a much lower form of consciousness than humans.
V. The real disagreement
In the following section, I will first show that Dennett’s theory does not contradict any of Churchland’s seven salient dimensions of consciousness. Next I will ask why Churchland demands not only that the theory stand up to his seven dimensions but that it also explain them—I believe Churchland is looking for a different category of theory than Dennett provides. I will then look into the heart of the disagreement, the real debate: while both claim not to believe in consciousness as a binary trait, Dennett comes dangerously close to accepting all-or-nothing consciousness, and this is likely where Churchland begins to take serious offense. Continuing on this route, I shall ultimately find the question of how similar animal (and nonlinguistic human) consciousness is to linguistically-developed human consciousness to be at the heart of the debate. The other arguments spanning Churchland’s dispute—mostly over whether the brain’s PDP system could be acting in emulation of a serial machine, the superficial center of the dispute—do not strike directly at Dennett’s position.
§ 1. Does Dennett’s theory stand up to Churchland’s seven requirements?
Churchland, citing his seven requirements upon a theory of consciousness listed in Chapter 8 of Engine, calls Dennett’s theory inadequate in its subsequent explanatory performance of those seven. But, upon investigation of them, Dennett’s theory would agree with five and only possibly (but not directly) disagree with parts of two:
“1. Consciousness involves short-term memory.” A system in serial emulation could certainly involve short-term memory, as serial, Turing-like machines involve a window of computation holding a sort of short-term memory of the data being currently dealt with. The temporal aspect of consciousness is actually emphasized in Dennett’s theory. Clearly the virtual serial machine explanation would support this claim.
“2. Consciousness is independent of sensory inputs.” A serial, linguistic-like stream of information that had been stored in the brain could exist without any serial inputs. A man sitting alone in a dark room with little sensory input of any kind would easily be able to carry on a linguistic stream with a virtual serial machine—the idea of the machine itself says nothing about requiring sensory input at all times to function.
“3. Consciousness displays steerable attention.” Again, the serial-emulation explanation is all about steerable attention: the single focus point in consciousness, much like the window of calculation which is a section of tape in a Turing machine, is the most compelling support of the serial theory. A linguistic stream implemented over virtual serial architecture could be “steered.”
“4. Consciousness has the capacity for alternative interpretations of complex or ambiguous data.” This is perhaps Churchland’s most viable contention against Dennett’s theory; but it is not out of the range of possibility that an emulated serial machine could have a capacity for alternative interpretations—the linguistic stream can certainly double back over itself or retrace its steps and take a different path. This criterion, while not explained by Dennett’s theory, is not in direct conflict with it.
“5. Consciousness disappears in deep sleep;
“6. Consciousness reappears in dreaming, at least in muted or disjointed form.” While Dennett’s serial-emulation theory does not speak of this phenomenon directly, the disappearance and reappearance of the emulation mode is, once again, certainly not ruled out by Dennett’s theory. There is no logical reason why a virtual serial machine might not be implemented during sleep, and there is no logical reason why the virtual serial machine might not be switched off during certain modes of sleep, much as a computer software program does not need to run all the time.
“7. Consciousness harbors the contents of the several basic sensory modalities within a single unified experience.” This final tenet of Churchland’s theory hints at the fact that the center of the real disagreement between Dennett and Churchland is over whether the linguistic aspect of consciousness is of paramount importance. However, we must keep in mind that a serial machine can process any type of information, not just linguistic-like information. While Dennett puts the most weight on the linguistic aspect for our type of consciousness, he does not deny that the several basic sensory modalities do seem to converge to a single unified experience that makes up that virtual serial stream. So even Churchland’s seventh tenet does not directly oppose Dennett or dive directly into the genuine disagreement.
In summary of the above analysis, I find Churchland’s #1 and #3 in direct support of Dennett’s explanation; I find #2, #5, and #6 not at all in opposition to the explanation; and I find #4 and #7 hinting at opposition to, but still not in direct conflict with, Dennett’s explanation. Thus no part of Churchland’s theory suggests why a virtual serial machine would not be possible. Where the two really disagree, as I stated above and will explain again below, is in which part of the stream of consciousness is most important.
§ 2. What kind of a “theory of consciousness” is Dennett’s and how does it compare with Churchland’s demand upon it?
Although above I have found that Churchland’s seven salient dimensions of human consciousness do not directly oppose Dennett’s theory, let us look more closely at the way in which Churchland finds Dennett’s theory lacking. He states that the theory “offers no account of any of [the seven], let alone a unitary account of all seven” (Churchland, 259). He also deems the theory “inadequate in its subsequent explanatory performance” (269).
But why does Churchland demand that Dennett’s theory perform in a way such that it explains or offers an account of his seven dimensions? The theory merely proposes that consciousness is made up by the emulation of a virtual serial machine. I suggest that Dennett’s theory is not in the same category as the kind of theory Churchland wants, and it is because of its different category rather than an inherent flaw (plus the ideological dispute mentioned below) that Churchland finds it lacking. While Dennett’s theory is a “framework” (what I will call theory type T1), Churchland demands that it behave like an “explanation” (what I will call theory type T2):
(1) Theory type T1 of phenomenon P must explain all aspects P1, P2, P3, …, Pn of P in order to have validity, because it claims to exist as an explanatory device. Examples of theories of type T1 are physical theories accompanied by equations, e.g., Special Relativity.
(2) Theory type T2 of phenomenon P suggests a mechanism for the way P works without having to, or trying to, explain why all characteristics P1, P2, P3, …, Pn of P occur. It is not a requirement of a theory of type T2 that it have such explanatory power over all of P1, P2, P3, …, Pn in order to have validity. An example of a theory of type T2 would be the scientific explanation of the food web—it explains the way a certain life cycle works, but it does not need to explain all salient aspects of animals’ eating habits and behaviors in order to be valid and true.
(3) Of course, both theory type T1 and theory type T2 must not contradict any characteristic Pk of phenomenon P.
While Dennett’s theory is of type T2, Churchland demands type T1 behavior from it, and, because of what he sees as inadequate performance against the requirements of a type T1 theory above, he declares it invalid.
I have shown above in § 1 that Dennett’s theory meets requirement (3) above—it does not directly contradict any dimension of consciousness that Churchland brings up. Churchland demands that Dennett’s theory not only not contradict his seven dimensions, but that it explain and account for them. Thus Churchland’s criticism of the serial-emulation part of Dennett’s theory falls short.
§ 3. Dennett’s waffling stance over whether consciousness is a binary trait
Dennett, as shown in his “Multiple Drafts Theory” argued elsewhere in CE and referred to in his argument for serial emulation, does not want to commit to the position that consciousness is a binary trait—that is, that it either exists or it does not in a given being. Neither philosopher is willing to officially assume that position. As Dennett clearly states later in Chapter 14 of his book, he dismisses the “assumption that consciousness is a special all-or-nothing property that sunders the universe into two vastly different categories: the things that have it … and the things that lack it” (Dennett, 447).
So Dennett does not want to grant that beings either “have” or “lack” consciousness. But elsewhere in his language, he is willing to come dangerously close to admitting the all-or-nothing process he denies: “The sort of consciousness [nonlinguistic] animals enjoy is dramatically truncated, compared to ours” (447). His description here is befuddling: if he is talking about the content of conscious thought, rather than consciousness itself, there is no contradiction; but he goes on, using Oliver Sacks’ writings as evidence, to conclude the same thing about nonlinguistic humans: “Without a natural language, a deaf-mute’s mind is terribly stunted” (448). In order to remain consistent with his previous assertion about the importance of the Joycean linguistic-ness of the stream, Dennett accepts that drastically different (and, as implied by his language—“stunted”—lesser) kinds of consciousness exist for animals and nonlinguistic humans—if he did not, his earlier foundation would crumble. But in extending the linguistic-importance idea to its natural conclusion as he does, he slights animal consciousness and all but comes out and argues for that binary switch of consciousness, the binary switch whose existence he is so scared to admit. And it is this extension and virtual acceptance of that dangerous binary-switch idea that, I believe, leads Churchland to reject Dennett’s entire idea of seriality. Perhaps if Dennett had been clearer about whether he was referring to the “stunted” nature of conscious thoughts, rather than the whole of consciousness, Churchland would not have taken such offense and the dispute would reveal itself as more obviously trivial.
§ 4. The real disagreement: Linguistic value and animal consciousness
Above I have shown (1) that Dennett’s theory stands up to Churchland’s seven criteria of consciousness, (2) that for Churchland to want more from Dennett’s theory is unreasonable because it is a theory of type T2, and (3) that Dennett comes too close to making consciousness a binary trait, and that Churchland thus rejects Dennett’s entire theory because he disagrees with the unfairness to nonhuman animals that results from the weight placed on linguistic value. What naturally follows from these conclusions is that the substantive debate truly lies in which aspect of consciousness is most essential to our definition. Dennett says it is the linguistic aspect of consciousness, the “Joycean stream,” while Churchland disagrees: “Dennett’s account of consciousness is skewed in favor of a tiny subset of the contents of consciousness: those that are broadly language-like” (Churchland, 269). Churchland feels that the mechanism of non-linguistic consciousness could be explained non-serially.
Although, contrary to Churchland’s use of rhetoric, Dennett does acknowledge that consciousness includes musical, visual, sensory, tactile, and motor images, Churchland is right in the sense that Dennett weighs linguistic consciousness very highly. Dennett perhaps could have separated his theory into two different theories, with the serial emulation part standing alone from the drastic emphasis on linguistic value in consciousness. But because it is presented as a package deal, Churchland’s deep faith that higher animals’ consciousness is very similar to ours leads him to oppose the linguistic emphasis so strongly that he rejects the whole package. He illustrates his claim with a picture depicting a human and a monkey, both with smiling faces and recurrent PDP-network consciousness. Dennett might instead draw the monkey frowning, linguistically impaired and thus unable to enjoy the thoughts comprising what we think of as the contents of our consciousness. Is this, at its heart, an unresolvable debate?
VI. The Resolution: Separation of Dennett’s ideas and acceptance of both the virtual serial machine and nonlinguistic animal consciousness
At the center of this heavily-rhetoricized dispute, then, is merely an altered version of Nagel’s old question: What is it like to be a bat? Although the superficial dispute is about the implementation of a virtual machine on top of a serial one, the debate is really about whether animals are conscious in the way that we are. It is a debate perhaps more ideological than scientific; Nagel’s question of how animal consciousness differs from our own is a debate that I believe cannot be resolved by questioning whether consciousness is PDP acting in serial emulation or just recurrent PDP in itself.
I believe that Dennett’s theory and Churchland’s antagonistic response can each be drawn from to form a synthetic theory of consciousness. While the opinions of the two with regard to animal consciousness will remain opposed, my resolution lies in the simultaneous acceptance of Dennett’s theory of virtual serial emulation and the acceptance of Churchland’s contention that the serial stream of consciousness is not necessarily most importantly linguistic in character.
§ 1. Acceptance of Dennett’s virtual serial machine in the wake of Churchland’s criticism
Dennett’s virtual serial machine is feasible regardless of whether the information that passes through the emulated serial processor is linguistic or not. Earlier, I showed that the idea of the virtual serial machine in itself (Nagel’s question aside) does not oppose Churchland’s seven criteria for consciousness (which I also accept as valid observations of the way the phenomenon works). I also explained why I do not believe that the virtual serial machine hypothesis must explain all of them in order to be true.
Churchland’s most compelling argument against the actual serial-machine idea (he spends most of his rebuttal arguing instead against the language idea) is that one would need to locate genuinely serial processes inside the PDP brain in order for it to behave as Dennett says it does. As mentioned above in my summary of Churchland’s position, he instead believes that recurrent PDP networks in themselves, with their “continuously unfolding temporal dimension” (267), can account for what Dennett describes.
This is not such a big disagreement as it seems; Dennett’s serial processes might be emulated by way of the temporal dimension of recurrent network feedback that Churchland describes. This is particularly feasible given that a PDP network in emulation, because of its incredible initial speed advantage in calculation (see Table 1), would not suffer the speed hit that an emulation in the reverse direction would. Recurrent PDP networks’ temporal unfolding might well resemble a serial machine. We do not need to find genuinely serial processes for this to be a valid claim. I therefore grant Dennett his virtual serial machine. That machine, however, need not be skewed, as he would have it, toward language-like operations.
§ 2. Acceptance of Churchland’s theory of animal consciousness
Where Churchland is most compelling is in his assertion that language is not necessarily king when it comes to consciousness. The general human idea, or meme, that consciousness exists comes from the correlations among the linguistic communications of innumerable human self-observers—the similarities between a great number of individuals’ descriptions of watching themselves do things such as make decisions, use language, and contemplate themselves and the mysteries of their own existence. So consciousness, at its core, is a subjective phenomenon and exists only in correlations among its imperfect translations into human language: There is no direct and inherently conclusive external evidence to any one self-contemplator that any other being around him, whether of his species (human, in this case) or not, has that same kind of consciousness—there is only evidence that the output behaviors of describing that consciousness are correlated.
Linguistic knowledge, then, does radically change the description of consciousness any being is capable of giving to a human—that is, from positive to negative, from ability to inability. A binary switch is flicked—any animal incapable of describing his consciousness to a linguistically-capable observer scores a “0” in the mind of the observer on the scale upon which the observer’s own idea of consciousness has been shaped. Since one’s entire conception of consciousness comes from the correlation between others’ output—their descriptions of consciousness—and one’s own subjective experience, it would be natural to assume that animals incapable of relating that subjective consciousness in words similar to our own are incapable of enjoying a similar consciousness (or, if Dennett would rather, similar contents of consciousness).
But the fact that it is a natural first reaction does not make it right. It does not necessarily follow that higher consciousness—difference in a being’s hierarchical level or category of consciousness rather than just communication capacity—can be judged from the capacity for language and animals’ consciousness thus unnecessarily dubbed “truncated”: Broad distinctions of categories of consciousness cannot be drawn from the mere information of whether an animal has linguistic capacity. We must get past our natural anthropocentric inclinations and allow science’s discovery of the similarity of recurrent PDP networks in humans and higher primates to guide us in our conception of consciousness. We must at least allow for the possibility that theirs is more like ours than it may seem. Dennett may be closer to this opinion than his language reveals; he may see the “truncating” of higher-primate linguistic-type serial-emulation consciousness as a mere feature of the “contents” of consciousness and an unimportant aspect of conscious processes as a whole. But his choice of words speaks otherwise, particularly to Churchland: whether or not he intends to, Dennett implies that deaf-mutes and other primates “suffer” in their non-Joycean-ness—an inflammatory idea.
§ 3. What we cannot know
Dennett and Churchland ultimately disagree over a question that is extremely hard to resolve. Even as technology is developed to monitor activity in different brain areas, it appears to many observers that we are no closer to discovering whether or not higher animals experience consciousness at all as we do—much of the time, the elusive knowledge of others’ consciousness still seems to lie in a category of things we cannot know.
Dennett’s theory casts some light on the similarities between consciousness and the operation of a serial Turing machine. Churchland’s response, while unnecessarily rejecting all of that theory, does illuminate the mistake of generalizing too much on one’s own linguistic experience of consciousness. What we cannot yet know is whether consciousness will ever be isolated as a brain process, and whether these questions will be resolved in a more scientific way.
Dennett, Daniel C. (1991). Consciousness Explained. (Boston: Little, Brown and Company).
Churchland, Paul M. (1995). The Engine of Reason, the Seat of the Soul. (Cambridge, MA: MIT Press).
Chan, Sophia and Jenkins, David. “The Turing Machine.” From Brunel University Artificial Intelligence Web Pages, World-Wide Web: http://http1.brunel.ac.uk:8080/research/AI/alife/al-turin.htm.
1. 680x0 processors, such as those found in the Macintosh II and Quadra series, and Intel 80x86 processors, are based on a Complex Instruction Set Computing architecture, with more specialized instructions and a larger number of possible instructions; the faster PowerPC processors, used in the Power Macintosh machines and most UNIX-based machines, run on Reduced Instruction Set Computing, with fewer possible instructions, all of them more basic. The slower 680x0 and 80x86 instruction sets are often emulated on PowerPC computers for backward compatibility.
2. Churchland casts Dennett as completely disregarding all non-linguistic forms of consciousness: “Human consciousness, however, also contains visual sequences, musical sequences, tactile sequences, motor sequences, visceral sequences, social sequences, and so on. A virtual serial machine has no especially promising explanatory resources for any of these things” (Churchland 269).
3. For more rhetoric one need look no further than Churchland’s comparing Dennett to a spiritualist: “One might as well propose the discredited ‘vital spirit’ as a substantive explanation for the phenomenon of Life, and then cite … the ability of DNA molecules to simulate a ‘virtual vital spirit.’”
Robin Goldstein '98 (firstname.lastname@example.org) of Dunster House, hopes to become a Special Concentrator in Cognitive Neuroscience and Philosophy. In his free time, he enjoys reviewing movies for his Web page, playing ping pong, and traveling around Latin America writing for Let's Go.
of Orientation Selectivity in Self-Organizing Neural Networks
The past three decades have witnessed a host of impressive findings concerning the functional architecture of the brain. One trend that much recent research suggests is that there is a high degree of specialization within the brain: different parts of the brain do different, highly specific tasks. Furthermore, most estimates place the number of cortical neurons alone at 10^10, with roughly 10^14 total synapses connecting them. Given this enormous number of neurons, and considering the extreme structural complexity embodied by a healthy and fully-developed mammalian brain, the project of elucidating this structure in some ways pales in comparison to that of explaining how it came to be. It is clear that the mammalian genome “cannot, in any naive sense, contain the full information necessary to describe the brain” (von der Malsburg, 1990). Yet somehow, in the process of development, the brain acquires its structure. The compelling nature of this quandary makes the notion of self-organization very appealing.
The term “self-organization” is generally used to describe the evolution of complex behavior in systems that consist solely of many very simple parts. The brain is an excellent example of such a system, for while neurons are by no means simple, many of their principal functional characteristics are believed to be relatively easy to approximate and model. In this review I will focus on research that suggests that the early stages of the mammalian visual system can be implemented solely through the self-organizing properties of neural networks.
Central to all such research is the assumption that synaptic modification occurs via some form of Hebbian learning, whereby the strength of a synapse between two neurons is increased if the activities of the neurons are correlated in time, that is, if one tends to be active at the same time that the other is. This learning rule was first suggested by Hebb in 1949 without an accompanying brain mechanism or body of evidence for its existence. Since then, the discovery of long-term potentiation (LTP) in the hippocampus has provided evidence that a form of Hebbian learning does occur in the brain, although how it is accomplished is still not clear. While skepticism about LTP remains, it is generally agreed that some form of local learning is bound to control connectivity in many cases, if only because the idea of completely central control seems inconceivable. Most efforts to show that self-organization can occur in structures resembling the brain thus make use of Hebbian learning.
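To make the Hebbian principle just described concrete, here is a minimal sketch (my own illustration, not drawn from any of the papers under review) in which a synapse between a correlated pair of units grows while one between an anticorrelated pair shrinks; the learning rate, noise level, and number of steps are all arbitrary choices for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.01  # learning rate (illustrative value)

def train(correlation_sign, steps=1000):
    """Apply the plain Hebbian update dw = eta * a_i * a_j to one synapse.

    The presynaptic activity a_j is zero-mean noise; the postsynaptic
    activity a_i tracks it (positively or negatively) plus some noise.
    """
    w = 0.0
    for _ in range(steps):
        a_j = rng.standard_normal()                       # presynaptic
        a_i = correlation_sign * a_j + 0.5 * rng.standard_normal()
        w += eta * a_i * a_j                              # Hebbian update
    return w

w_corr = train(+1)   # correlated pair: weight strengthens
w_anti = train(-1)   # anticorrelated pair: weight weakens
print(w_corr > 0, w_anti < 0)
```

The point of the sketch is only that the sign of the weight change follows the sign of the temporal correlation, which is the property the self-organization results below depend on.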
Vision research has been among the most fruitful areas of neuroscience, and the basic structure of the early levels of the mammalian visual system is for the most part established. The research that I will be discussing concerns the visual system up to and including the primary visual cortex (V1). The visual pathway begins at the retina, where there are several layers of cells which project via the optic nerve to the lateral geniculate nucleus (LGN) in the thalamus, which in turn projects to layer 4C of V1. Cells in the inner layers of the retina and in the LGN are characterized by spatial-opponent receptive fields: Stimuli placed in a central, circular region of the receptive field of a cell in these regions tend to excite the cell, while stimuli placed outside the excitatory region tend to inhibit it. In addition, the retina and LGN are topographic: the spatial layout of their cells' receptive fields is topologically similar to the physical layout of the cells themselves. V1 cells differ from those in the LGN or retina in that many of them have orientation selective receptive fields; a cell in V1 will tend to respond most strongly to a line of a particular orientation placed in its receptive field. Furthermore, as the Nobel Prize-winning research of David Hubel and Torsten Wiesel revealed, orientation selective cells are laid out in an orderly fashion. In addition to being retinotopically organized, a group of cells whose receptive fields correspond to a certain point in space will be arranged in a line such that their orientation preferences vary continuously along that dimension in cortex. Many have speculated that this layout has the favorable characteristic that each small portion of V1 has all the machinery necessary to perform the first steps in extracting information from the portion of the visual field which it represents.
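The spatial-opponent receptive fields described above are commonly modeled as a difference of Gaussians: an excitatory center minus a broader inhibitory surround. The sketch below is my own illustration of that standard model (the kernel size and the two widths are arbitrary choices, not values from the literature); it shows that such a unit responds more to a small centered spot than to uniform illumination, whose excitatory and inhibitory contributions largely cancel:

```python
import numpy as np

def dog_filter(size=21, sigma_c=1.5, sigma_s=4.0):
    """Difference-of-Gaussians kernel: excitatory center, inhibitory surround."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround

rf = dog_filter()

# A small bright spot on the receptive-field center excites the unit...
spot = np.zeros_like(rf)
spot[10, 10] = 1.0
resp_spot = np.sum(rf * spot)

# ...while uniform illumination largely cancels (center against surround).
uniform = np.ones_like(rf)
resp_uniform = np.sum(rf * uniform)

print(resp_spot > resp_uniform)
```

This is exactly the kind of spatial opponency that, as discussed below, Linsker's networks develop on their own.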
The research reviewed here is exciting because it suggests that many of the characteristics I have just described can be accomplished through unsupervised self-organization.
A host of experiments have shown the influence of the environment on the development of the nervous system. Although many results in this literature are controversial, it is generally agreed that kittens raised in visually-altered environments develop abnormal patterns of orientation selective cells in V1 (see Blakemore and Cooper, 1970, and Hirsch and Spinelli, 1970, for the first such experiments, and Movshon and Van Sluyters, 1981, for a review). Such results suggested to some that orientation selectivity was the result of learning processes in the visual system during postnatal development. This sparked several papers that described the development of orientation selectivity with Hebbian learning in neural networks that were exposed to structured “visual” input (see von der Malsburg, 1970, for one early attempt). However, with a few exceptions, animals in most of the classic visual plasticity studies do seem to develop some sort of orientation selectivity regardless of the visual environment (see Hirsch and Spinelli for a controversial exception). The environment seems only to be able to skew the distribution of orientation selectivity, rather than completely determine it, suggesting that the visual system is intrinsically biased towards developing cells with this property. Furthermore, Hubel and Wiesel originally found that some degree of orientation selectivity exists in kitten primary visual cortex immediately after birth, before exposure to any structured visual input (Hubel and Wiesel, 1963; Movshon and Van Sluyters, 1981). More recently it has been found that orientation selectivity is fully developed at birth in monkeys and sheep (Wiesel and Hubel, 1974; Ramachandran, Clarke, and Whitteridge, 1977). Based on this it was generally accepted that although the environment may be capable of affecting the properties of cells in the early levels of the visual system, it is not their source.
This left the problem of how the visual system achieved orientation selectivity unsolved.
Self Organizing Neural Nets: Ralph Linsker’s Work
A series of three papers by Ralph Linsker and the work by several others that followed them provide a possible explanation for the existence of prenatal orientation selectivity. In these papers Linsker demonstrated that spatial opponency and orientation selectivity could arise in an unsupervised feed-forward network trained with Hebbian learning on completely unstructured input. Linsker further showed that with the addition of lateral excitatory connections within the layer that developed orientation selective cells, the cells would develop their orientation preferences in a smoothly varying fashion in orientation columns similar to those in V1. These are surprisingly sophisticated properties for such a simple system to develop in the absence of structured input, and they will be the focus of this article.
Linsker’s network is a crude approximation of the visual system, with a two-dimensional input layer feeding to successive two-dimensional layers that can be interpreted as corresponding to different levels in the visual pathway. Each layer is composed of linear units and receives input from the previous layer. The inputs for a unit come from a Gaussian distribution over a local region of the previous layer. In the simulations he published, Linsker allowed the weights on connections from a given unit to take both positive and negative values. While this is biologically unrealistic, in that neurons are believed to be either excitatory or inhibitory, Linsker has reported that his results hold when the units are divided into classes which are constrained to have either only positive or only negative weights on their outgoing connections. Weights were also constrained to remain within certain positive and negative limits, which, based on physiological limitations, is biologically reasonable.
Linsker uses a version of the Hebbian learning rule:
 Δw_ij = b + c(a_i - d)(a_j - e)    (1)
where w_ij is the strength of the connection from neuron j to neuron i, a_i and a_j are the outputs of neurons i and j, respectively, and b, c, d, and e are constants. The importance of this equation is that the weight change is proportional to the product of a_i and a_j. Thus the weight change is most positive if a_i and a_j are correlated over time, and is most negative if they are anticorrelated. In order to prevent his simulations from being prohibitively long, Linsker averaged Equation 1 over a number of presentations to the input layer, resulting in an equation for the time-rate-of-change of a given synaptic weight which could be solved for the mature weight values of a particular layer. His averaged equation can be understood as follows: Δw_ij is directly related to the correlation between the activities of the postsynaptic (labeled i) and the presynaptic (labeled j) neurons, and the activity of the postsynaptic neuron is a linear function of all its inputs. Thus the time-rate-of-change of w_ij is proportional to the degree to which the activity of neuron j is correlated with the other neurons that give input to neuron i. Explicitly,
 ∂w_ij/∂t ∝ p + m Σ_k w_ik + Σ_k Q_jk w_ik    (2)
where p and m are constants, k indexes neurons that provide input to neuron i, and Q_jk is proportional to the correlation function of the activities at neurons j and k. This function is well-defined because each layer of cells receives input only from the preceding layer, and the layers are developed one at a time. Formulating the weight change this way allowed Linsker to run his simulations more efficiently. Importantly, though, it explicitly reveals what turns out to be a crucial consequence of Hebbian learning in feedforward neural networks: the change of the weights into units of a given layer is determined by the form of the correlation in the activities of the units of the previous layer.
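The point-wise rule of Equation 1 is simple enough to state in a few lines of code. The sketch below is illustrative only; the constants b, c, d, and e are arbitrary demonstration values, not values Linsker used:

```python
# Illustrative sketch of the Hebbian rule (Equation 1).
# The constants b, c, d, e are arbitrary demonstration values.

def hebb_update(a_i, a_j, b=0.0, c=0.1, d=0.5, e=0.5):
    """Weight change for the connection from presynaptic j to postsynaptic i."""
    return b + c * (a_i - d) * (a_j - e)

# Activities on the same side of their baselines strengthen the synapse;
# activities on opposite sides weaken it.
print(hebb_update(1.0, 1.0))   # both above baseline: positive change
print(hebb_update(1.0, 0.0))   # anticorrelated: negative change
```

Averaging this quantity over many input presentations, with the postsynaptic activity written as a linear function of the inputs, is exactly what produces the correlation term in Equation 2.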
The weights to layer B are developed based on the activity in the input layer A, and once the weights in layer B reach stable values, the activity in layer B, propagated from layer A, is used to develop the weights in layer C, and so on. Because the network is developed in this manner, each layer’s weights are a function only of the pattern of activity in the previous layer. Also important is the property of Equation 2 that either all or all but one of the inputs to any given cell will saturate to their limiting values. This property falls directly out of the equation; see Linsker 1986a for the proof.
When the activity in layer A is uncorrelated, the correlation function is close to zero for most pairs of neurons. Thus ∂w_ij/∂t is close to constant for each connection from layer A to a cell in layer B. For appropriately chosen constants, this results in the saturation of all the weights from layer A to layer B at the upper positive limit. Because each cell in layer B receives input from a Gaussian distribution of cells in layer A, a given cell in layer B will, at maturity, function to compute a spatial average of a local region of activity in layer A. And because the receptive fields of nearby cells in layer B overlap, cells that are close together will include many of the same layer A neurons in their average, and will thus have correlated activities. Linsker shows that this correlation function is a Gaussian (whose peak is at the cell in question - obviously, the highest correlation is between a cell and itself - and falls off for cells whose receptive fields are far away).
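This step is easy to verify numerically. In the one-dimensional sketch below (the sizes and widths are my own illustrative choices), each "layer B" unit computes a Gaussian-weighted average of uncorrelated "layer A" noise, and the resulting layer B activities are correlated in a way that falls off smoothly with distance:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, trials = 200, 5.0, 2000   # illustrative sizes

x = np.arange(n)
def profile(c):
    """Gaussian input weighting for a layer-B unit centered at position c."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

a = rng.standard_normal((trials, n))                            # uncorrelated layer A
b = np.stack([a @ profile(c) for c in range(40, 160)], axis=1)  # layer B outputs

# Correlation between layer-B cells decays with their separation:
corr = np.corrcoef(b.T)
print(corr[0, 0], corr[0, 5], corr[0, 20])
```

For Gaussian weighting of width sigma, the correlation is itself Gaussian in the separation s (proportional to exp(-s**2 / (4 * sigma**2))), which is the form Linsker derives for layer B.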
Once the connections from layer A to layer B were mature, Linsker developed layer C using the Gaussian correlation function of layer B. He found that the morphology of layer C cells fell into a series of regimes depending on the values of p and m. Cells developed to have either i) all excitatory inputs, ii) all inhibitory inputs, iii) “ON-center” circularly symmetric opponent connections: a core of excitatory connections surrounded by a ring of inhibitory connections, iv) “OFF-center” circularly symmetric opponent connections: a core of inhibitory connections surrounded by a ring of excitatory connections, and v) spatially divided inputs such that approximately one half of the receptive field was composed of excitatory and the other half of inhibitory connections. I will focus on case (iii), where layer C develops center-surround receptive fields strikingly similar to those found in the retina and lateral geniculate nucleus, the two stages in the visual pathway that lead to primary visual cortex.
The spatial opponent receptive fields develop because of the presence of the Gaussian correlation function of layer B in Equation 2. For negative m and positive p, the following occurs during the maturation process: positive p values cause all weights to initially increase such that the w_ik contribution to ∂w_ij/∂t is positive. Then, since the sum in Equation 2 is over the synapses to the postsynaptic neuron i in layer C, which have a Gaussian distribution in layer B, ∂w_ij/∂t has a greater contribution from the correlations that the activity of neuron j has with neurons in the central region of neuron i’s receptive field, and a smaller contribution from the correlations that it has with peripheral neurons, simply because there are fewer of them. But since the correlation function Q is a Gaussian for layer B, a given layer B neuron is most correlated with neurons nearby, and less with neurons farther away. Thus if neuron j is in the central region of neuron i’s receptive field, its connection to neuron i will be increased more than will a peripheral neuron’s. Although the correlation function between a given neuron and the neurons surrounding it is identical for all neurons, the Gaussian distribution of afferent inputs to postsynaptic cells causes the high portion of a peripheral neuron’s correlation function to be sampled less frequently than is the high portion of a central neuron’s.
As a comparison, consider what would happen if Q were a constant function. Then each neuron in layer B would be equally correlated with all the other neurons in layer B. Were this the case, the Gaussian distribution of afferent inputs to the neuron i in layer C would not have the effect that it does, since the Q contribution to the sum in Equation 2 would be independent of where it was sampled.
Thus ∂w_ij/∂t is larger for connections from neurons in the center of neuron i’s input distribution than for those from neurons in the periphery. Furthermore, the absolute magnitude of ∂w_ij/∂t can be shifted by varying the constant m. For sufficiently large negative m, ∂w_ij/∂t is negative for peripheral neurons, and positive for central ones. Since Equation 2 causes all of the weights to a given neuron to saturate, this causes the connections from layer B neurons in the central region of a layer C neuron’s receptive field to mature to the maximum excitatory value, while those in the periphery saturate to the inhibitory limit.
To summarize, layer B computes the local spatial average of the random activity in layer A, because of the Gaussian distribution of inputs. This makes the correlation function for layer B Gaussian, and combined with the Gaussian synaptic distribution, for appropriate values of the constants in Equation 2, spatial opponency results. Thus there are two crucial dependencies: the proper values of p and m, and the Gaussian synaptic distribution. These will be discussed later.
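A one-dimensional caricature of this maturation process can be simulated directly. In the sketch below, the constants p and m, the learning rate, and the two Gaussian widths are my own illustrative choices (not Linsker's published parameters), and the sums in Equation 2 are weighted by the Gaussian density of afferents:

```python
import numpy as np

x = np.arange(-10.0, 10.5, 0.5)      # positions of layer-B inputs to one layer-C cell
g = np.exp(-x**2 / 32.0)             # Gaussian density of afferent connections
G = g.sum()
Q = np.exp(-(x[:, None] - x[None, :])**2 / 8.0)   # Gaussian correlation of layer B

p, m, eta = 0.1, -0.3, 0.1           # illustrative constants and learning rate
w = np.full_like(x, 0.01)            # small initial weights

for _ in range(500):
    # Equation 2, with inputs weighted by their density g and weights clipped:
    dw = p + m * (g * w).sum() / G + (Q @ (g * w)) / G
    w = np.clip(w + eta * dw, -1.0, 1.0)

# Central connections saturate at the excitatory limit, peripheral ones
# at the inhibitory limit: an "ON-center" spatial-opponent receptive field.
print(w[len(x) // 2], w[0], w[-1])
```

With these constants the central weights all reach +1 and the peripheral weights reach -1, reproducing regime (iii) in miniature; raising or lowering m moves the system into the all-excitatory or all-inhibitory regimes instead.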
Development of orientation selective cells proceeds in similar fashion. Parameters are chosen such that layer C develops into “ON-center” cells. Then the correlation function is computed for layer C. Since layer C is, up to small deviations, uniform, the correlation function (which in general form is a function of two variables, corresponding to any two cells in a layer) is effectively a function of one variable. That is, because of the layer’s uniformity, the correlation of a given cell Z with another cell Y depends only on the distance between cell Z and cell Y, and not on the actual position of the two cells in the layer. Linsker thus idealizes the correlation function Q_ij to the function Q(s), where s is the distance between two cells.
For a layer of spatial-opponent cells, Q(s) has a “Mexican-hat” form: Q is positive for small s, when cells are close to one another and their excitatory center cores overlap, negative for intermediate s, when the inhibitory surround of one cell overlaps the excitatory core of another, and zero for large s, when the cells are far apart and completely uncorrelated. This correlation function is then used to develop layer D. Linsker reported that there was a range of morphological options for layer D, including the formation of orientation selective cells, but that the regime of orientation selectivity was not very stable. One of the other regimes for layer D was spatial opponency, with the cells in layer D differing from those in layer C in that the “Mexican-hat” correlation function Q_D(s) for layer D had deeper minima than did Q_C(s). Linsker showed that “Mexican-hat” correlation functions with deeper minima produced in the following layer a more stable regime of orientation selectivity. He thus set the parameters of layer D such that it developed spatial opponent cells. He did this for layers D through F, with each successive layer having a Mexican-hat correlation function with deeper minima than the previous layer’s function.
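The Mexican-hat shape can be checked with a short calculation. For linear cells driven by uncorrelated noise, the correlation of two identical difference-of-Gaussians (center-surround) cells at separation s is the autocorrelation of their receptive-field profile; the widths and surround amplitude below are illustrative choices:

```python
import numpy as np

x = np.arange(-30.0, 30.5, 0.5)
# Center-surround profile: narrow excitatory core minus broad inhibitory
# surround (amplitudes chosen so the profile integrates to roughly zero).
dog = np.exp(-x**2 / 2.0) - 0.5 * np.exp(-x**2 / 8.0)

Q = np.correlate(dog, dog, mode="same")   # correlation function Q(s)
s0 = len(x) // 2                          # index of s = 0
Q = Q / Q[s0]                             # normalize so Q(0) = 1

# Positive at small s, negative at intermediate s, near zero at large s:
print(Q[s0], Q[s0 + 6], Q[s0 + 40])      # s = 0, 3, 20 (grid step 0.5)
```

The dip goes deeper as the surround's strength approaches that of the core, which is the trend Linsker exploits across layers D through F.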
Linsker simply presents the results of his simulations, and does not discuss why the correlation functions develop deeper minima. However, some of the data he presents suggest that this deepening occurs because the cells develop receptive fields with larger inhibitory surrounds. Linsker gives the minimal values and the location of the zero-crossing for the Mexican-hat correlation functions of successive layers, and in addition to the minimal values decreasing with successive layers, the zero-crossing appears to decrease. He also mentions that the minimal values of the correlation function vary inversely with the average sum Σ_j w_ij of all the weights on the afferent connections to a unit i. In other words, there is a direct relationship between the relative proportion of inhibitory connections and the depth of the Mexican-hat minima. This is consistent with my observation that the zero-crossing decreases as the minima deepen, for as the zero-crossing decreases, the distance that two cells can be apart and still have positively correlated activities decreases. But this happens for center-surround cells only if the excitatory core shrinks and the inhibitory surround grows. Furthermore, layers of cells with core and surround of approximately the same thickness will, logically, produce the deepest minima for their correlation function, because the core and surround can then completely overlap, causing the greatest anticorrelation. Since layer C cells have excitatory cores that are large compared to their surrounds, increasing the size of the inhibitory region in successive layers moves the two regions towards being of equal size, and acts to increase the maximum anticorrelation. Thus although Linsker doesn’t actually discuss why the minima of the Mexican-hat function deepen with successive layers, it seems likely that they do so because the inhibitory surround of the cells’ receptive fields grows in size.
Why do the inhibitory surrounds increase in area for progressive layers of cells? Again, Linsker doesn’t discuss this, but the answer can be found by considering how the formation of center-surround cells from a Mexican-hat correlation function differs from their formation from the Gaussian correlation function of layer B. The main difference between the two functions is that the Mexican hat has a smaller average value than does a Gaussian. Recall from my discussion of the center-surround cell formation process that the inhibitory regions form because the sum Σ_k Q_jk w_ik is smaller for peripheral connections than it is for central ones, and thus at some point in a cell’s receptive field, ∂w_ij/∂t drops below zero, which sends w_ij to the inhibitory limit. If the average value of Q decreases, Σ_k Q_jk w_ik will decrease for all connections w_ik, and the threshold dividing excitatory and inhibitory connections will decrease, creating cells with smaller excitatory cores and larger inhibitory surrounds. Thus because layer C has a Mexican-hat correlation function, the cells in layer D develop smaller excitatory cores and larger inhibitory surrounds. But as discussed in the previous paragraph, increasing the size of the inhibitory surround in layer D causes Q_D to have deeper minima than Q_C. This in turn causes layer E to have larger inhibitory surrounds (assuming that the parameters for layer E are set such that it develops center-surround cells), which causes layer E’s Mexican-hat correlation function to be deeper than that of layer D, and so on.
Linsker reported that with a sufficiently deep correlation function for a layer of cells, the succeeding layer will develop orientation selective cells that are stable with respect to random changes in the initial weights. In the network he describes, he developed four layers of center-surround cells to obtain a suitably deep function.1 He describes the results obtained by varying two parameters: the average sum Σ_j w_ij, which we will call g, and R_G, where R_G is the radius of the Gaussian distribution of afferent connections to a cell in layer G. Varying g is equivalent to varying the ratio of m and p, the two constants in Equation 2. Linsker found that g and R_G were the main parameters governing the development of orientation.
There were several broad classes of development characteristics, each corresponding to a range of R_G with respect to s_min, the location of the minimum value of the correlation function for layer F. I will discuss two of these classes. First, for R_G much less than s_min, layer G expressed morphology that was quite similar to that of layer C, which was discussed earlier.2 Though Linsker does not discuss this, this class probably exists because when the radius inside which a cell draws its input is much smaller than the minimum of the Mexican-hat function, the cell can only “see” the central portion of the function. Equation 2, which is where the influence of the correlation function comes into play, sums over the correlation function only for s-values within the radius of the Gaussian input distribution, simply because the sum is over the cell’s inputs. So if the radius of the Gaussian is much smaller than s_min, Equation 2 includes values of the correlation function for small s only, and over that restricted range, the Mexican-hat function resembles the Gaussian correlation function of layer B.
For RG close to smin, decreasing g results in a more interesting range of morphologies. For high g, the cells are obviously all-excitatory. As g is lowered, isolated regions of inhibitory connections appear. As g is decreased further, the G cells become bilobed: they have a central strip of excitatory connections, with parallel inhibitory bands of connections on either side. Such cells, clearly, are orientation selective. The orientation that a cell develops is random. As g is further decreased, the inhibitory side bands extend to enclose the excitatory region, thus forming a center-surround cell.
What causes orientation selectivity? Linsker answered this by referring to an energy function he created which has the property (common to other energy functions) of decreasing with every weight change. The weight development process is thus a process of gradient descent, and this sheds some light on the orientation formation. In order to avoid introducing new equations and variables, and in the interest of explaining the phenomenon in as biologically-grounded terms as is possible, I will attempt an explanation based on the weight-change equation instead of referring to Linsker’s energy function. Recall the time-rate-of-change of a weight that was described by Equation 2:
 ∂w_ij/∂t ∝ p + m Σ_k w_ik + Σ_k Q_jk w_ik.
The first two terms on the right hand side of Equation 2 are the same for all connections. The term Σ_k Q_jk w_ik is not, however. It will be similar for connections from cells that are nearby in layer F, because the correlation function will tend to multiply each weight by a similar number. For connections from cells that are further apart in layer F, Σ_k Q_jk w_ik will tend to be different, because the correlation function for one connection will have a positive value where the correlation function of a second connection has a negative value. Thus there is an overall pressure within the development process for the connection from a cell in layer F to have a value equal to that of the connections from nearby cells, and different from those of connections from cells that are further away. However, from topological considerations this obviously cannot be realized in full. What happens instead is that contiguous regions of excitation or inhibition are formed which satisfy the pressure imposed by the correlation function as best as can be done. The stripes that produce orientation selectivity are one possible arrangement, and, as Linsker comments, they probably minimize the number of pairs of nearby cells which have different weight values, although no proof of this is given.
Admittedly, this explanation of how orientation selectivity emerges is far from rigorous. However, Linsker’s original papers sparked a great deal of interest, and several papers have been written that analyze his results. Linsker’s results are nicely formalized by MacKay and Miller, who show that if the correlation function Q is turned into a matrix (a minor change in definition), the principal eigenvectors of this matrix resemble the weight ensembles in the various regimes of cell morphology that Linsker observed (MacKay and Miller, 1990). The weight-change equation (Equation 2) can be viewed as multiplying a vector consisting of the weights of the connections to a cell by the correlation matrix of the preceding layer. The eigenvectors of a matrix M are the vectors that are changed only by scalar multiplication when multiplied by M, and the principal eigenvector is the eigenvector that grows in length the most under multiplication by M. Thus it makes sense that the final weight configurations are the eigenvectors of the correlation matrix, because such configurations are the only ones that will maintain their form over development. MacKay and Miller find that the principal eigenvectors of Gaussian covariance matrices tend to be of center-surround form, and that Mexican-hat covariance matrices can have principal eigenvectors that are either center-surround or bilobed (having a positive stripe next to a negative stripe - i.e., morphologically similar to orientation selective receptive fields). Although MacKay and Miller do not state this explicitly, Miller’s comments in another article (Miller, 1990) indicate that the bilobed morphology (which would produce orientation selectivity) is the principal eigenvector when the Mexican-hat covariance matrix has a narrow positive center (lower zero-crossings).
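The eigenvector picture can be illustrated in a stripped-down setting. The sketch below ignores the Gaussian arbor and the constraint terms of MacKay and Miller's full analysis and treats the layer as a ring, so the correlation matrix is circulant; with those simplifications a Gaussian correlation matrix's principal eigenvector has uniform sign (a structureless, all-one-sign weight pattern), while a Mexican-hat matrix's principal eigenvector oscillates in sign, the one-dimensional analogue of a bilobed receptive field. All widths are illustrative:

```python
import numpy as np

n = 64
i = np.arange(n)
# Circular distance between cells on a ring of n positions:
d = np.abs(i[:, None] - i[None, :])
d = np.minimum(d, n - d)

gauss_Q = np.exp(-d**2 / 4.0)                                 # Gaussian correlations
mexhat_Q = np.exp(-d**2 / 4.0) - 0.5 * np.exp(-d**2 / 16.0)   # Mexican-hat correlations

def principal_eigvec(M):
    """Eigenvector of symmetric M belonging to the largest eigenvalue."""
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, np.argmax(vals)]

def sign_changes(v):
    return int(np.sum(np.sign(v[:-1]) != np.sign(v[1:])))

# Gaussian: uniform-sign principal eigenvector; Mexican hat: oscillating one.
print(sign_changes(principal_eigvec(gauss_Q)),
      sign_changes(principal_eigvec(mexhat_Q)))
```

Repeated application of the weight-change operator amplifies the principal eigenvector fastest, which is why the mature weight pattern comes to resemble it.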
This is in accord with Linsker’s results, in that Linsker found that oriented bilobed cells were stable only in the higher levels of his simulations, when, as discussed above, the Mexican hat covariance function is of just this form.
The work of MacKay and Miller is additionally important because it reveals that cell morphologies in a layer of a feed-forward Hebbian network are determined by the correlation matrix of activities in the previous layer. This provides an avenue for exploring self-organization, because once the activity patterns of a layer of cells are known, the eigenvectors of the correlation matrix can be computed and compared to the receptive fields of cells in the next layer. If simple self-organization occurs, one might expect some match between the computed eigenvectors and the measured receptive fields.3
In the network described in the previous section, orientation preferences arose randomly from cell to cell, with no correlation between neighboring cells. It is a well-known feature of mammalian primary visual cortex that the orientation preference of cells varies in a continuous fashion over the cortical surface while remaining constant over displacements perpendicular to the surface (Hubel and Wiesel, 1963, 1974). The regularity of orientation distribution prompted Hubel and Wiesel to suggest that the basic unit of organization in V1 is the “hypercolumn,” a group of columns of cells whose receptive fields represent the same point in visual space, with each column of cells having a different orientation and ocularity preference, thus providing the neural machinery to analyze a single point of the visual field (Mason and Kandel, 1991).
Linsker found that by adding excitatory lateral connections between the cells in layer G before the connections from layer F were developed, the orientation that a cell developed was no longer random; instead, cells of similar orientation organized into band-like regions. Due to computational constraints, he did not explore the full parameter space to see how closely the bands could come to resembling the patterns that are found in various species. Fortunately, his initial result has been pursued by other researchers, who have confirmed its robustness (Miller, 1992). Furthermore, others have shown that another primary feature of V1, ocular dominance columns, can also arise from self-organization as a result of competition between the activity patterns of the two eyes (Miller, Keller, and Stryker, 1989). These results suggest that the principal features of V1 could result from very simple self-organization.
There are several issues pertaining to Linsker’s simulations that deserve mention. First, consider the assumptions implicit in Linsker’s network. His model depends crucially on the afferent connections to each cell in a layer of his network having a Gaussian distribution about a point in the previous layer. The retinotopy of the early levels of the visual system is well established; the important question with regard to Linsker’s model is whether retinotopy precedes the development of cell properties. When and how retinotopy arises in biological systems is not yet known, but there are several methods of developing retinotopy with unsupervised neural networks (see von der Malsburg 1990, and Hertz, Krogh, and Palmer 1991 for reviews). In any case, this assumption is by far the most reasonable of those implicit in Linsker’s model.
A second element of the model that is less biologically reasonable is the uniformity of the cells - each cell can have both excitatory and inhibitory connections. In particular, each connection is treated identically by the development equations: any given connection can assume the full range of excitatory and inhibitory values. Since it is usually agreed that neurons are either glutamatergic (excitatory) or GABAergic (inhibitory), this feature of the model is troubling. Linsker did this simply to speed the simulations. Since the development equation sends all the connections in an inhibitory region to the inhibitory limit and all the connections in an excitatory region to the excitatory limit, were there an even distribution of excitatory and inhibitory connections throughout a cell’s receptive field, the excitatory connections that ended up lying in the inhibitory region would be sent to zero, as would the inhibitory connections lying in the excitatory region. This basically wastes half the connections, which is why Linsker simplified the model by making each connection nonspecific.
More significant is the fact that retino-geniculate and geniculate-cortical projections are believed to be purely excitatory, a fact not taken into account by Linsker’s model. However, Ken Miller has conducted simulations with a far more biologically plausible model, where the layer corresponding to the LGN consists of two populations of center-surround cells, ON and OFF, each of which makes only excitatory connections to cells in the layer representing V1 (Miller, 1989). Miller achieves results strikingly similar to those of Linsker. Miller’s model achieves orientation selectivity via a mechanism inspired by his models of ocular dominance formation (Miller, 1992). Experiments depriving animals of the input from one eye suggest that in addition to there being a Hebbian component to synapse formation, there are also competitive influences: given two groups of correlated inputs, the most strongly activated group may “win out” and develop connections of the greatest strength (Guillery, 1972). Making this assumption, Miller shows that when there are separate populations of ON- and OFF-center cells, and each cell in the next layer (representing V1) initially gets input from the same local retinotopic area within each population, bilobed receptive fields develop. This happens in his model because at small retinotopic distances, cells of the same type (ON or OFF) are correlated, while at larger distances, cells of opposite type are correlated. Combined with realistic interactions among his “cortical” cells, Miller finds that this model develops orientation selective cells whose arrangement within the layer is quite similar to cortical orientation maps. While this model seems vastly different from Linsker’s, the importance of correlation functions remains the crucial factor, and Miller’s work thus suggests that in spite of the implausibility of Linsker’s model, his results may reflect a property of neural networks that could also occur in the brain.
Of key importance, then, to the biological relevance of these models are the covariance functions of layer activity in the brain. Limited work has been done on this, but there is some indication that the appropriate correlations are present in the visual system (Miller, 1991).
A final feature of the model that is not explicitly grounded in biology is the layer-by-layer development process. Not enough is known about how multilayer neuronal systems develop in the brain to evaluate this facet of the model. Furthermore, similar weight patterns may result even if all layers are developed simultaneously.
Ever since the discovery of orientation selectivity in V1, much research has been devoted to uncovering the mechanisms which underlie it. While there is not complete agreement as to these mechanisms, a host of studies are relevant to evaluating the extent to which Linsker’s results actually explain orientation selectivity in V1. Here I briefly describe a few of them. Some of the most well-known studies have been conducted by Sillito, who showed that orientation selectivity is abolished when inhibition is blocked with the GABA antagonist bicuculline, suggesting that orientation selective cells receive both excitatory and inhibitory inputs, both of which are important for their function (Sillito, 1979). Because LGN cells are known to make only excitatory synapses with cells in V1, this was interpreted as showing that orientation selectivity is solely a product of cortical interactions. However, an alternative interpretation is that the application of bicuculline to an area of cortex leaves only excitatory connections between cortical cells (thanks are due to Ken Miller for pointing this out to me). This could account for Sillito’s result, in that the cells excited by lines of a certain orientation would then excite other nearby cortical cells, causing them to be active in spite of having orientation-tuned inputs from the LGN. The results of Nelson et al. support this view. They observed that orientation selectivity remains fully present in cat V1 cells when only the neuron being recorded from has its inhibitory inputs intracellularly blocked (Nelson et al., 1994). This suggests that excitatory inputs (which could presumably be coming from the LGN, in agreement with the self-organizing models) are sufficient to generate orientation selectivity, and that intracortical inhibitory inputs mediate, if anything, an indirect effect through other neurons.
David Ferster’s research provides additional evidence that patterns of activity in LGN afferents do indeed play the crucial role in orientation selectivity that was originally proposed by Hubel and Wiesel. Ferster recorded intracellularly from cells in V1 while flashing oriented stimuli within a cell’s receptive field, and by hyperpolarizing the cortical cell he was recording from to the IPSP reversal potential (done by injecting current through the recording electrode), Ferster enhanced the cell’s EPSPs while suppressing its IPSPs. He found that the EPSPs of cortical cells are orientation tuned, in that rotating the stimulus away from the optimal orientation produced a decrease in the cell’s EPSP (Ferster, 1986). While this demonstrates that excitatory connections are sufficient to produce orientation selectivity, the excitation could conceivably be cortical in origin, since there was no way for Ferster to determine its source. However, given the substantial excitatory input that V1 receives from the LGN, it seems likely that Ferster was observing LGN-produced EPSPs, in which case his observations would be entirely consistent with the self-organizing models. While excitation is thus sufficient for orientation selectivity, a role for inhibition is suggested by Hata et al., who find through a cross-correlation analysis evidence for inhibitory interactions between two simultaneously recorded neurons with slightly different orientation preferences, perhaps implicating inhibitory interneurons in the formation of orientation columns or in the fine-tuning of responses (Hata et al., 1988). In summary, while there is evidence that both inhibitory and excitatory inputs to cortex play a role in orientation selectivity, the existing evidence for the mechanisms of orientation selectivity is consistent with the assumptions that are, broadly speaking, made by the self-organizing models.
Implications of the Self-Organization Research
Linsker’s work is intriguing because cells with spatial opponency and
orientation selectivity, properties that have been thought to be of tremendous
computational significance, seem to arise completely automatically in a
situation where their functional utility has no bearing on their development.
Given this observation, several points immediately come to mind. First,
Linsker’s work seems to suggest that orientation selective cells could
arise in any network that has sufficiently deep correlation functions in
a layer of spatial-opponent cells. Because he trained his network on completely
unstructured input, there is nothing inherently “visual” about his result.
Based on his results, one might expect to find cells with oriented receptive fields
in all sensory systems, because sensory systems have roughly similar initial
structure to the networks Linsker used: they have a topographic input layer,
project to the thalamus, and then project to a primary sensory area. Interestingly,
orientation selective neurons analogous to those in V1 have been reported
in the somatosensory cortex, supporting this (Hyvarinen and Poranen, 1978,
cited in Sur,Garraghty, and Roe, 1988).
Relevant to this is a remarkable study by Sur, Garraghty, and Roe, who “rerouted” the retinal projections of newborn ferrets to the medial geniculate nucleus (MGN), the auditory portion of the thalamus (Sur, Garraghty, and Roe, 1988). This was accomplished by ablating V1, V2, and the superior colliculus, as well as the inferior colliculus (the area from which the MGN receives most of its projections). Doing this causes the retina to project to the MGN and hence to auditory cortex. They found that the MGN developed visually responsive cells that were retinotopically organized, some of which had spatially opponent receptive fields. The primary auditory cortex of these ferrets (whose connections from the MGN were not altered by the operation) also developed visually responsive cells, about twenty percent of which were orientation selective. It may be that only those retinal cells that had not already established stable connections with the LGN were available to project postoperatively to the MGN; the resulting scarcity of retinal projections to the MGN might account for the low percentage of orientation-selective cells. In any case, this study corroborates one point that Linsker’s work suggests: namely, that orientation selectivity can arise in any layer of cells that receives input from a layer of spatially opponent cells.
A final point worth mentioning is that Linsker’s network suggests that mammalian visual systems may have evolved several stages of center-surround cells in order to deepen the minima of the correlation functions of the layer preceding V1, so that V1 could develop orientation-selective cells. The presence of center-surround cells in the LGN, when there are already spatially opponent cells in the retina, currently has no accepted explanation. Although the four layers of the network in his paper were contrived to be directly analogous to the levels of the early visual system, Linsker’s analysis gives a reason why this apparently redundant morphology may actually serve a functional role.
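The claim that cascaded center-surround stages deepen the correlation-function minima can be checked numerically. The sketch below is my own construction with arbitrary kernel widths, not Linsker’s simulation: white noise is passed through one and then two difference-of-Gaussians filters, and the normalized spatial autocorrelations are compared. The negative “Mexican-hat” lobe grows deeper with each stage.

```python
import numpy as np

rng = np.random.default_rng(1)

def dog(x, sc=1.0, ss=3.0):
    """Difference-of-Gaussians (center-surround) kernel; widths are arbitrary."""
    g = lambda s: np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
    return g(sc) - g(ss)

def autocorr(sig, max_lag):
    """Empirical autocorrelation, normalized to 1 at zero lag."""
    sig = sig - sig.mean()
    c = np.array([np.mean(sig[:len(sig) - k] * sig[k:])
                  for k in range(max_lag + 1)])
    return c / c[0]

x = np.arange(-10, 11)
k = dog(x)

noise = rng.standard_normal(500_000)          # completely unstructured input
stage1 = np.convolve(noise, k, mode="same")   # one center-surround stage
stage2 = np.convolve(stage1, k, mode="same")  # a second stage

c1 = autocorr(stage1, 15)
c2 = autocorr(stage2, 15)

# Each center-surround stage deepens the negative lobe (the Mexican-hat
# minimum) of the correlation function of the resulting layer.
print(c1.min(), c2.min())
```

The deeper minimum after the second stage is exactly the precondition that, in Linsker’s analysis, lets the next layer develop orientation-selective cells.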
In conclusion, there are a variety of reasons to think that self-organization is one way the brain’s structure is achieved. We have considered the phenomenon of orientation selectivity in mammalian visual cortex. Linsker’s self-organizing neural network develops orientation-selective cells automatically and without any environmental cues. Although there are some inconsistencies between his results and experimental work on the brain, it is remarkable that his network develops a computationally useful property entirely on its own. More biologically realistic models that achieve similar results, such as those of Ken Miller, make a strong case for the presence of similar principles of organization in the brain, and there is reason to believe that these principles may underlie other properties of neurons in the visual system. Regardless of whether orientation selectivity arises in the brain through mechanisms analogous to those described here, one of the most important insights to come out of the self-organization models is the importance of the covariance function of activities across a layer of cells whenever any type of Hebbian learning occurs in a system. This principle alone has great potential for helping to unearth the origins of organization in the brain. Indeed, if self-organization is a viable means of development, as it seems to be, then it is probably used frequently in the brain, and self-organizing models will be of great use in identifying where.
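The covariance principle stressed above has a compact numerical illustration. In the sketch below (a generic linear Hebbian unit with an arbitrary toy covariance, not any particular model from the text), the plain Hebbian update averaged over an ensemble of zero-mean inputs reduces to the product of the input covariance matrix and the weight vector.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy layer of 5 input cells with a chosen correlation structure.
# (Illustrative covariance; not taken from any of the cited models.)
n_in, n_samples, eta = 5, 200_000, 0.01
A = rng.standard_normal((n_in, n_in))
C = A @ A.T / n_in                    # a valid covariance matrix
L = np.linalg.cholesky(C)

w = rng.standard_normal(n_in)         # afferent weights of one unit

# Plain Hebbian rule: dw = eta * (post activity) * (pre activity),
# with a linear postsynaptic unit y = w . x.
x = L @ rng.standard_normal((n_in, n_samples))  # zero-mean inputs, cov C
y = w @ x
dw = eta * (x * y).mean(axis=1)       # update averaged over the ensemble

# The averaged Hebbian update is governed by the input covariance:
# <dw> = eta * C @ w, the principle stressed in the text.
print(np.allclose(dw, eta * C @ w, atol=1e-3))
```

Because the averaged dynamics are dw/dt ∝ C w, the weights grow toward the leading eigenvectors of C, which is why the shape of the correlation function across a layer determines what receptive fields emerge.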
References
Blakemore, C. and Cooper, G.F. (1970). Development of the brain depends on the visual environment. Nature 228: 477-478.
Brown, T.H. and Chattarji, S. (1995). Hebbian synaptic plasticity. In The Handbook of Brain Theory and Neural Networks. Arbib, ed. (Cambridge: MIT Press).
Creutzfeldt, O.D., Kuhnt, U. and Benevento, L.A. (1974). An intracellular analysis of visual cortical neurons to moving stimuli. Experimental Brain Research 21: 251-274.
Ferster, D. (1986). Orientation selectivity of synaptic potentials in neurons of cat primary visual cortex. Journal of Neuroscience 6: 1284-1301.
Ferster, D. (1987). Origin of orientation-selective EPSPs in simple cells of cat visual cortex. Journal of Neuroscience 7: 1780-1791.
Ferster, D. (1988). Spatially opponent excitation and inhibition in simple cells of the cat visual cortex. Journal of Neuroscience 8: 1172-1180.
Ganz, L. and Felder, R. (1984). Mechanism of directional selectivity in simple neurons of the cat’s visual cortex analyzed with stationary flash sequences. Journal of Neurophysiology 51: 294-324.
Guillery, R.W. (1972). Binocular competition in the control of geniculate cell growth. Journal of Comparative Neurology 146:407-420.
Hata, Y. et al. (1988). Inhibition contributes to orientation selectivity in visual cortex of cat. Nature 335: 815-817.
Hertz, J., Krogh, A., and Palmer, R.G. (1991). Introduction to the Theory of Neural Computation. (Redwood City: Addison-Wesley Publishing Company).
Hirsch, H.V.B. and Spinelli, D.N. (1970). Visual experience modifies distribution of horizontally and vertically oriented receptive fields in cats. Science 168: 869-871.
Hubel, D.H. and Wiesel, T.N. (1963). Shape and arrangement of columns in cat’s striate cortex. Journal of Physiology. 165: 559-568.
Hubel, D.H. and Wiesel, T.N. (1974). Sequence regularity and geometry of orientation columns in the monkey striate cortex. Journal of Comparative Neurology 195: 267-294.
Kammen, D.M. and Yuille, A.L. (1988). Spontaneous symmetry-breaking energy functions and the emergence of orientation selective cortical cells. Biological Cybernetics 59: 23-31.
Koch, C. and Poggio, T. (1985). The synaptic veto mechanism: does it underlie direction and orientation selectivity in the visual cortex? In Models of the Visual Cortex. Rose and Dobson, eds. (New York: John Wiley and Sons Ltd).
Linsker, Ralph. (1986). From basic network principles to neural architecture: emergence of spatial-opponent cells. Proceedings of the National Academy of Sciences USA 83: 7508-7512.
Linsker, Ralph. (1986). From basic network principles to neural architecture: emergence of orientation-selective cells. Proceedings of the National Academy of Sciences USA 83: 8390-8394.
Linsker, Ralph. (1986). From basic network principles to neural architecture: emergence of orientation columns. Proceedings of the National Academy of Sciences USA 83: 8779-8783.
MacKay, D.J.C. and Miller, K.D. (1990). Analysis of Linsker’s simulations of Hebbian rules. Neural Computation 2: 173-187.
MacKay, D.J.C. and Miller, K.D. (1990). Analysis of Linsker’s application of Hebbian rules to linear networks. Network 1: 257-297.
von der Malsburg, Christoph. (1973). Self-organization of orientation sensitive cells in the striate cortex. Biological Cybernetics 14: 85-100.
von der Malsburg, Christoph. (1979). Development of ocularity domains and growth behavior of axon terminals. Biological Cybernetics 32: 85-100.
von der Malsburg, C. and Cowan, J.D. (1982). Outline of a theory for the ontogenesis of iso-orientation domains in visual cortex. Biological Cybernetics 45: 49-56.
von der Malsburg, Christoph. (1990). Network self-organization. In An Introduction to Neural and Electronic Networks. Zornetzer, Davis, and Lau, eds. (San Diego: Academic Press Inc.), pp. 421-432.
Mason, C. and Kandel, E.R. (1991). Central visual pathways. In Principles of Neural Science. Kandel, Schwartz, and Jessell, eds. (Norwalk: Appleton and Lange), 420-439.
Miller, K.D. (1989). Orientation-selective cells can emerge from a Hebbian mechanism through interactions between on- and off-center inputs. Society for Neuroscience Abstracts 15: 794.
Miller, K.D. (1990). Correlation-Based Models of Neural Development. In Neuroscience and Connectionist Theory. Gluck and Rumelhart, eds. (Hillsdale, NJ: Lawrence Erlbaum Associates), pp. 267-353.
Miller, K.D. (1992). Models of Activity-Dependent Neural Development. The Neurosciences 4: 61-73.
Miller, K.D. (1992). Development of Orientation Columns Via Competition Between ON- and OFF-center inputs. NeuroReport 3: 73-76.
Miller, K.D. (1994). A model for the development of simple cell receptive fields and the ordered arrangement of orientation columns through activity-dependent competition between ON- and OFF-center inputs. Journal of Neuroscience 14: 409-441.
Movshon, J.A. and Van Sluyters, R.C. (1981). Visual neural development. Annual Review of Psychology. 32: 477-522.
Nelson, S., Toth, L., Sheth, B., and Sur, M. (1994). Orientation selectivity of cortical neurons during intracellular blockade of inhibition. Science 265: 774-777.
Roe, A.W., Pallas, S.L., Hahm, J., and Sur, M. (1990). A map of visual space induced in primary auditory cortex. Science 250: 818-820.
Roe, A.W., Pallas, S.L., Kwon, Y.H., and Sur, M. (1992). Visual projections routed to the auditory pathway in ferrets: receptive fields of visual neurons in primary auditory cortex. The Journal of Neuroscience 12: 3651-3664.
Rose, D. (1995). A portrait of the brain. In The Artful Eye. Gregory, Harris, Heard, and Rose, eds. (New York: Oxford University Press), pp.28-51.
Sillito, A.M. (1979). Inhibitory mechanisms influencing complex cell orientation selectivity and their modification at high resting discharge levels. Journal of Physiology 289: 33-53.
Sillito, A.M. (1980). A re-evaluation of the mechanism underlying simple cell orientation selectivity. Brain Research 194: 517-520.
Srinivasan, M.V., Zhang, S.W., and Rolfe, B. (1993). Is pattern vision in insects mediated by “cortical” processing? Nature 362: 539-540.
Sur, M., Garraghty, P.E., and Roe, A. (1988). Experimentally induced visual projections into auditory thalamus and cortex. Science 242: 1437-1441.
Worgotter, F., Niebur, E., and Koch, C. (1992). Generation of direction selectivity by isotropic intracortical connections. Neural Computation 4: 332-340.
Yuille, A.L., Kammen, D.M., and Cohen, D.S. (1989). Quadrature and the Development of Orientation Selective Cortical Cells by Hebb rules. Biological Cybernetics 61: 183-194.
1 As I have mentioned and explained, Linsker notes that the Mexican-hat minima depth can also be increased by lowering the average sum Σj wij of all the weights of the afferent connections to a unit i.
2 Recall that the development of layer C depended primarily on the constants m and p. These are contained in the parameter g which was varied in these simulations.
3 Clearly, strict feedforward networks do not exist in the brain. (Indeed, only 10 percent of the inputs to the LGN come from the retina!) Linsker’s work suggests, however, that many of the receptive field properties of cells may arise from feedforward interactions.
Josh McDermott '98 (email@example.com) is a Special Concentrator in Cognitive Science living in Leverett House. He has broad interests in vision, computational and cognitive neuroscience, and the neural basis of and philosophical issues surrounding consciousness.
Harvard Undergraduate Society for Neuroscience