
Twenty Years Before I Discovered My Aphasia, I Researched an Essay on Global Aphasia
On December 8, 2008, I was unconscious. I lay there in the ICU in a medical coma. The brain surgeon was completely exhausted. He wept to my parents, “He had a fist-sized blood clot in the left hemisphere of his brain. What we’re seeing is a catastrophic, massive stroke.” He was carefully pointing out that my parents’ son was a vegetable.
Plus, my brain surgeon didn’t know that I had aphasia. That would come out ten days later, when my coma finally started to ebb. He must have known that a “fist-sized blood clot” would erase my spoken and written sentences and my ability to use the right side of my body.
Years later, a host of doctors said, “…and with no neurons left to link up your primary motor cortex and primary somatosensory cortex, the right side of your body is paralyzed…”
I denied that. Completely.
Years ago, in 1988, I went to the University of Pennsylvania. The woman in the registration office said, “The Annenberg School for Communication has fantastic programs on the 16mm film camera that you must take!”
The Annenberg School for Communication’s administrators cancelled this 16mm film-camera seminar.
Still, I took courses there. One of them was “Comm 164: Sounds”.
I handed in my essay, Putting the ‘Sound’ Back into Sound Communication: An Overview of Non-lexical Comprehension of Speech as Displayed by Patients with Global Aphasia, to my teaching assistant on… December 8th, 1988.
Twenty years had passed. I can still remember the library, the texture of the dark wooden bookshelves, and my trembling emotion upon seeing the “aphasia” note-card.
My essay about aphasia.
Chuang Tzu wrote that “fishing baskets are used to catch fish; but when the fish are got, the men forget the baskets…words are employed to convey ideas; but when the ideas are grasped, men forget the words” (Rheingold, 1988: pp.147). If you have grown up with English as a first language, and have read rather than heard Chuang Tzu’s thought, you have unfortunately fallen prey to one of the pitfalls of the English language. Although Chuang Tzu neglected to explain what makes up the fibers of language that weave these nets, as someone whose vision of language had been shaped by the Chinese language he would certainly have included tone, juncture and stress in his idea of language. Would a speaker of English agree with my contention, or would he maintain that the “net” is derived purely from lexical comprehension? Since, in literate societies, speech can be translated into symbols which signify phonemes, I believe that there is a tendency in our culture to assume that the written word is a complete representation of speech. Furthermore, given the preponderance of lex and syntax in writing, due to the inherent arbitrariness of the written word, I feel that English speakers, through cultural bias, belittle the importance of tonal and associative meaning in the comprehension of speech.
The attitude that the “word” (referring to the lexical value of arbitrarily grouped phonemes) is the meaning of speech runs counter to experience. Given our ability to discern lies, to be taken in by the sincerely intoned jabberwocky of politicians, and to nod in agreement with the stand-up comedian’s adage that “it’s the way you tell it,” there seems to be an innate, though perhaps unconscious, belief among English speakers that tone and its kin are important, although secondary to “real language.”
By studying the nature of human speech and the means by which it is processed by individuals suffering from language comprehension dysfunction, this paper will demonstrate the importance of tonal information in the understanding of loose, or natural, speech. Until now, all the components of speech (intonation, juncture and stress) which are neither lexical nor syntactic have been dismissed under the heading of “para-language.” This paper will show that the implication that “para-language” is somehow a redundant appendage to “real” language is in fact misleading and incorrect.
The manner in which I intend to demonstrate the important role that non-lexical elements play in speech comprehension is first to establish how English-speaking people group sound. By giving examples of communication problems in patients who have had damage to specific, speech-related areas of their brains, it will become clear not only that sounds are compartmentalized according to a rigid hierarchy (Brown, 1972: pp.140), but that lexical/syntactical and tonal information are processed in a parallel, rather than serial, manner (Zurif, 1978: pp.166). Part of the purpose of this paper is to sensitize the reader to the realization that the brain is not simply a recording device for sound. Rather, the brain is a machine capable of forming associations between specific groupings of perceived sound and higher cognitive processes taking place in the “concept centers” of the brain, in a process that is known as the transcortical language function (Eggert, 1977: pp.33).
In order to neutralize the term “para-language” so that the subject is not discussed in a prejudiced manner, I will adopt the term that Henry Head used to refer to the tonal information encoded in speech: “feeling-tone” (Sacks, 1970: pp.78). Although Head used the term to include facial grimaces, posture and hand movements, this paper will focus, for the most part, on the tonal elements of feeling-tone.
When Marler wrote about teaching baby birds to sing, he noted that a bird cannot be taught to sing a song that does not belong to its species. When describing this phenomenon in human terms, Marler used the example of a human baby who is innately able to discern human sounds from background noise (Marler, 1973). Conversely, Benton noted that developmental aphasic children sometimes “turned off” human sounds but responded to animal or environmental sounds. In the absence of any physiological damage, these children were also prone to respond to either soft or loud sounds to the exclusion of the other (Goodglass, 1983: pp.28). Although it might seem obvious that speech is grouped purely on the basis of associative ties, this is not always true. Whereas, on a lexical level, Goldstein (1943) noted that schizophrenics often replace words such as “mouth” with “kiss” or “bird” with “the song,” there is not always as strong a tie between semantic association and speech sounds (Brown, 1972: pp.178). For example, Spreen et al. (1965) described individuals with receptive amusia who were able to identify onomatopoeic sounds such as “bow-wow” without being able to identify the actual animal noises (Brown, 1972: pp.140). Clearly, as this case demonstrates, there was no ability to associate the semantic value of “bow-wow” (a weakly iconic representation of the sound that dogs produce) with the sound that had acquired the meaning of “the sound that a dog makes.” Undamaged brains form associative ties linking both “bow-wow” and the sound of a dog barking to the response, “that is the sound of a dog barking” (assuming that only a marginal percentage of the patients with receptive amusia had not heard a dog actually barking prior to their illness – a point which the study fails to note). Although these examples seem incongruous and even contradictory, they can be explained by understanding that the auditory receptive areas of the brain compartmentalize and interconnect sound along a scale of increasingly complex informational content.
The scale that I am referring to works along three principles: first, sounds are delineated “according to the information content of the message”; second, the “informational `bits'” in speech sounds occur in greater diversity than in non-speech sounds, and non-speech sounds are acoustically closer to noise than they are to speech sounds; and third, each grouping does not function solely in tandem with the others or with other cognitive processes, such as semantic comprehension (Brown, 1972: pp.140). In order to clarify the last principle, I must add that although perfect speech comprehension relies on the interworking of many distinct functions, it is wrong to believe that extensive damage to one part of Wernicke’s area would destroy all levels of sound comprehension.
As for the actual scale, it progresses along these lines (going from groupings with the least informational content to those with the most): similar meaningless sounds, dissimilar meaningless sounds (e.g. noises), similar meaningful (non-verbal) sounds (e.g. tones), dissimilar meaningful (non-verbal) sounds (e.g. familiar melodies), similar meaningful (verbal) sounds (e.g. phonemic pairs), and dissimilar meaningful (verbal) sounds (e.g. speech sounds) (Brown, 1972: pp.140). What this scale helps to explain is why patients with sound agnosia perform better with speech sounds than with non-speech sounds: given “the more highly differentiated nature of speech sounds,” these patients are able to make use of “a greater abundance of cues” (Brown, 1972: pp.140).
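For readers who think of such hierarchies in computational terms, Brown’s scale can be pictured as nothing more than an ordered list, with position standing in for informational content. The short Python sketch below is only an illustration of the ranking described above; the variable names and the comparison at the end are my own, not Brown’s.

```python
# A rough sketch of Brown's (1972) scale of sound groupings, ordered from
# least to most informational content. The list and the helper function are
# illustrative only; Brown supplies the ordering, not this code.

SOUND_SCALE = [
    "similar meaningless sounds",
    "dissimilar meaningless sounds (noises)",
    "similar meaningful non-verbal sounds (tones)",
    "dissimilar meaningful non-verbal sounds (familiar melodies)",
    "similar meaningful verbal sounds (phonemic pairs)",
    "dissimilar meaningful verbal sounds (speech sounds)",
]

def informational_content(grouping: str) -> int:
    """Return the grouping's rank on the scale (higher = more cues available)."""
    return SOUND_SCALE.index(grouping)

# Speech sounds outrank noises, which is why patients with sound agnosia,
# who lean on whatever cues remain, do better with speech than with noise.
assert informational_content("dissimilar meaningful verbal sounds (speech sounds)") > \
       informational_content("dissimilar meaningless sounds (noises)")
```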
At this point, I must make a very delicate transition from the interrelation of sounds with each other to their association with semantic processes. For all intents and purposes, the various levels of sound interact within the sphere of the auditory associative centers, whereas lexical analysis encompasses more and higher levels of the brain. However, what we have noted in terms of sound analysis can be extrapolated into the study of speech comprehension, as I will show in the next part of the paper.
A key function of my review of aphasic patients is to give evidence that the brain does not process speech solely by means of “access[ing]… semantics… by a phonological code [and then accessing] output… by semantics” (Petersen et al., 1988). Rather, what aphasic patients demonstrate is that humans rely on parallel analysis of speech, which Scholes (1978: pp.164) proposes is decoded along the twin paths of lexical and “feeling-tone” analysis. According to Scholes (1978: pp.166), once the auditory centers of the brain receive sound information which is perceived to consist of meaningful verbal sounds, these sounds are temporarily stored. Thereafter, a “preliminary segmentation and classification” takes place (Scholes, 1978: pp.166). Once this is achieved, the brain takes two separate, independent paths. The first, the inference rules, decodes the speech using information that is yielded “on the basis of morphological (e.g. inflections) and acoustic information” (Scholes, 1978: pp.166). The second path, the projection rules, forms semantic meaning from speech which has been processed through the extrastriate cortex and has thus acquired a lexical base through visual associations (Petersen et al., 1988). The inference rules path can “hear” the “feeling-tone” of the word, whereas the projection rules path can “visualize” the word as if it had been read. As soon as each path has distilled the speech into separate deep structures, these are compared. If the meanings that the two paths have derived do not match, the deep structure is inverted and retried; however, if a match is made, speech comprehension has been attained (Scholes, 1978: pp.174).
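One loose way to restate Scholes’ two-path scheme is as a compare-and-retry loop. The Python sketch below is purely illustrative: the two decoder functions are empty placeholders standing in for the processes Scholes describes in prose, and the retry limit is my own addition, not his notation.

```python
# Illustrative only: a toy restatement of Scholes' (1978) two-path scheme.
# decode_inference / decode_projection are placeholders, not models of the brain.

def decode_inference(utterance: str) -> str:
    """Sketch of the inference-rules path: morphological and acoustic cues."""
    return utterance.lower().strip()          # placeholder "deep structure"

def decode_projection(utterance: str) -> str:
    """Sketch of the projection-rules path: lexical/visual associations."""
    return utterance.lower().strip()          # placeholder "deep structure"

def comprehend(utterance: str, max_retries: int = 3) -> bool:
    """Both paths work on the same stored segment; comprehension is declared
    only when their deep structures match, otherwise the parse is retried."""
    for _ in range(max_retries):
        if decode_inference(utterance) == decode_projection(utterance):
            return True                       # the two readings agree
        # In Scholes' account the deep structure is "inverted and retried";
        # here we simply loop again as a stand-in for that step.
    return False

print(comprehend("The dog barked"))           # -> True once both readings agree
```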
Whether and in what manner this inference path exists is still a matter of debate, but studies of aphasics provide a clearer answer. So far, this paper has shown how sounds are stored within the auditory associative centers of the brain, and that the rigid segmentation of the various types of sounds is the result of an innate ability of the human brain. Furthermore, it has been shown that each level of sound differentiation works, to a great extent, independently of its subordinate levels. Also, a theory that forms the kernel of this thesis has been put forth: that “feeling-tone” and syntax/lex are decoded independently of each other, or at least, that a “single measure of [speech] `comprehension' is inadequate [for] any serious treatment of linguistic behavior” (Scholes, 1978: pp.174).
Up until this point, however, little mention has been made as to how people come to have “feeling-tone.” The answer, simply put, is that we acquire it, as surely as we acquire any other element of comprehension. As children, we are very “feeling-tone” deficient. As Scholes’ (1978) scheme points out, feature analysis is dependent on memory; therefore, children are at a distinct disadvantage when it comes to inference rules analysis (Cermak, 1978: pp.277). In the same way that sound loss in sound aphasics is directly related to the amount of differentiation in each sound group, speech comprehension is directly related to the number of features that an individual “remembers” how to pick out. Therefore, while victims of childhood aphasia must draw upon a poor body of feeling-tone memory, adults who acquire aphasia can rely on a lifetime of memory to match up against incoming speech. This is quite similar to the difficulties that deaf children experience in learning language (Goodglass, 1983: pp.25). Even in adult aphasia, the type of speech which is easiest to decode involves personal matters such as discussions of family members, recent medical problems, recent personal events or, alternately, familiar geography (Goodglass, 1983: pp.97). In other words, aphasics achieve the greatest success with often-repeated material (higher and stronger differentiation of features), personal matters (more attention paid to features) or recent events (features are fresher). This, of course, is not to say that aphasics cannot understand “natural” speech (that is to say, speech that does not involve structured phrasing or familiar topics). Although aphasics do poorly on tests in which they are asked to identify lone words, their understanding of full sentences is remarkable.
When Hughlings Jackson compared aphasics to dogs, he was thinking in terms of their linguistic incompetence, not of their remarkable sensitivity to feeling-tone (Sacks, 1970: pp.77). The abilities of aphasics have long been disdained even by researchers within the field, but the study of non-lexical speech comprehension has immediate and rewarding applications. For instance, in order to aid research into artificial intelligence, programmers might do well to compare aphasics to computers. Although both agnosics and computers are slaves to rigid syntax, and are incapable of adding to their vocabulary by forming a definition of an unknown word through the context of other accompanying information, this is not true of aphasics. In my opinion, since the acceptance of natural speech is one of the major goals in the pursuit of an “intelligent” computer, it would be better to study aphasia as an excess of auditory decoding instead of as a loss of linguistic comprehension. By teaching a computer to pick up on the same “informational `bits'” or “cues” as an aphasic, researchers could create systems that “saw,” not the deep syntactic structure of the projection rules, but the visual or auditory “shape” of the word. The process of identifying “word-shape” can be explained visually by the way the brain reads the word “yellow.” “Yellow” has a distinctive “look” to it: it begins with a descender and has a matching pair of ascending letters in the middle, which allows the brain to match it against its semantic value (Howard, 1987: pp.30). It is not difficult to imagine that such a system of decoding exists in auditory terms, analyzing the pattern of the individual phonemes that make up a spoken word.
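As a purely hypothetical illustration of the “word-shape” idea, the Python sketch below reduces a written word to a pattern of ascenders, descenders and x-height letters; the letter groupings and the signature format are my own simplification, not Howard’s (1987) account.

```python
# A deliberately crude illustration of "word-shape": classify each letter as
# an ascender, descender, or x-height letter and use the pattern as a shape
# signature. The letter sets and matcher are illustrative assumptions.

ASCENDERS = set("bdfhklt")
DESCENDERS = set("gjpqy")

def word_shape(word: str) -> str:
    """Map each letter to A (ascender), D (descender) or x (x-height)."""
    shape = []
    for ch in word.lower():
        if ch in ASCENDERS:
            shape.append("A")
        elif ch in DESCENDERS:
            shape.append("D")
        else:
            shape.append("x")
    return "".join(shape)

# "yellow" begins with a descender and carries a pair of ascenders in the
# middle, which is the distinctive "look" described above.
print(word_shape("yellow"))   # -> "DxAAxx"
```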
The creation of a feeling-tone sensitive computer, capable of truly interacting with humans, would be a revolutionary tool in the expansion of computers into the home and business. Furthermore, by understanding the “cues” which people pick up in speech, people in the fields of mass media, politics and teaching would be able to tailor their messages in order to create stronger reactions to, and greater retention of, the subject matter.
Although human speech communication possesses the added scope of lexical comprehension, feeling-tone provides not only the flavor and nuance that words alone cannot carry, but also the diversity of informational content needed in order to successfully decode information. In this way, we all rely on the same visual and auditory cues that aphasics employ. It would not be correct to assume that feeling-tone is some sort of back-up program that springs into action when the main system is down. Not only is feeling-tone continually decoded along the inference rules path, but, compared with projection rules decoding, “reaction times [are] usually within the… [same] range and reasonably stable” (Tyler, 1987: pp.160). Furthermore, the use of feeling-tone occurs in all conversation, and it assumes a greater worth not only through physiological limitations such as deafness or aphasia. Occasion also dictates the degree to which we realize that we are employing feeling-tone. As anyone who has held a conversation with someone from across the room, with a dentist, or in a crowded bar has noted, no words need be distinctly heard in order to grasp what is being communicated. To an even greater degree, no sound communication would be possible without a well-developed sense of feeling-tone, as Dr. Oliver Sacks’ unfortunate patient Emily D., who suffered from tonal agnosia, discovered. Unable to discern expression in voices, and suffering from a malignant glaucoma which prevented her from seeing facial expressions, posture or gestures, she was left with only lexical and syntactic comprehension. What she discovered was that everyday speech was almost impossible to understand due to its slang, bad sentence construction, allusive and emotional speech, and improper word use (Sacks, 1970: pp.80). It could be said that Emily D. found herself in a world where people were incapable, by the definition of a literate society, of using language.
Bibliography:
Brown, Jason W., M.D. 1972. Aphasia, Apraxia and Agnosia: Clinical and Theoretical Aspects. Springfield: Charles C. Thomas.
Cermak, Laird S. 1978. The Development and Demise of Verbal Memory. In Caramazza/Zurif, ed., Language Acquisition And Language Breakdown, 277-289. Baltimore: Johns Hopkins University Press.
Eggert, Gertrude H. Ph.D. 1977. Wernicke’s Works on Aphasia. Paris, The Hague, New York: Mouton Publishers.
Goodglass, Howard. 1983.
Howard, David. 1987. Reading Without Letters? In Coltheart/Sartori/Job, ed., The Cognitive Neuropsychology of Language, 27-58. London: Lawrence Erlbaum Associates.
Lyons, John. 1981. Language, Meaning and Context. Suffolk: Fontana Paperbacks.
Marler, Peter. 1973. Speech Development and Bird Song: Are There Any Parallels? In George A. Miller, ed., Communication, Language, and Meaning: Psychological Perspectives, 73-83. New York: Basic Books.
Petersen et al. 1988. Positron Emission Tomographic Studies of the Cortical Anatomy of Single-Word Processing. Nature, February 18th, Vol. 331, 586-589.
Rheingold, Howard. 1988. They Have A Word For It. New York: St. Martin’s Press.
Sacks, Oliver. 1970. The Man Who Mistook His Wife For A Hat, 76-80. New York: Summit Books.
Scholes, Robert J. 1978. Syntactic And Lexical Components Of Sentence Comprehension. In Caramazza/Zurif, ed., Language Acquisition And Language Breakdown, 163-194. Baltimore: Johns Hopkins University Press.
Tyler, Lorraine K. 1987. Spoken Language Comprehension in Aphasia: A Real-Time Processing Perspective. In Coltheart/Sartori/Job, ed., The Cognitive Neuropsychology of Language, 27-58. London: Lawrence Erlbaum Associates.