Ambiguity as a Species Defect

Ambiguity in natural languages is commonly regarded as a lapse from perfection. A perfect language would not contain ambiguity. Why is this? Because language is used for communication and ambiguity impedes communication. If an utterance is ambiguous, it is harder for the hearer to figure out the intended meaning; and in many cases it is not possible to do this without further questioning. Language is then failing in its purpose (or one of them), which is to convey information quickly and effectively. Ambiguity is the enemy of understanding. If we had invented language from scratch, or were constantly reinventing it, we would be thought guilty of poor craftsmanship—creating a defective product. Ambiguity is clearly not a necessary and unavoidable feature of language, since invented languages are often designed to be free of it. We can construct languages that contain no lexical ambiguity or syntactic ambiguity, as with standard formalized languages. So the defect of ambiguity is a contingent feature of natural human languages, not a necessary feature of languages as such.[1]

Nor is the problem local or confined; a typical human language such as English is rife with ambiguity. Often we don’t notice it because the intended reading is so salient, but the formal structure of the language generates ambiguity all the time. The classic “I shot an elephant in my pajamas” is ambiguous in a characteristic way, i.e. it is not clear whether the modifier “in my pajamas” applies to the speaker or the elephant. The sentence “Old friends and acquaintances remembered Pat’s last visit to California” is said to have 32 different readings. The Chomsky favorite “Flying planes can be dangerous” has infinitely many counterparts (e.g. “Dating women can be dangerous”). Syntactic ambiguity is pervasive and prodigal. Thus we must be constantly on our guard against it for fear of failing to express our meaning. Language is an ambiguity trap that easily lures us into error. We are always in danger of failing to communicate given the formal nature of the vehicle. It could have been worse—our every utterance could have been dogged by ambiguity—but things are bad enough as it is. And yet ambiguity is not integral to the very nature of language. Apparently we have been sold a shoddy product, one expressly constructed to get in the way of communicating.[2]
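
To make the mechanism vivid rather than merely asserted, here is a minimal sketch in Python (assuming the NLTK library is installed; the toy grammar is my own invention for illustration, not anything drawn from the text) showing how a handful of phrase-structure rules already assigns the pajamas sentence two parse trees: one attaching "in my pajamas" to the verb phrase (the shooter wore them), the other to the noun phrase (the elephant did).

```python
import nltk

# Toy phrase-structure grammar, invented purely for illustration.
pp_grammar = nltk.CFG.fromstring("""
    S    -> NP VP
    NP   -> Pron | Det N | NP PP
    VP   -> V NP | VP PP
    PP   -> P NP
    Pron -> 'I'
    Det  -> 'an' | 'my'
    N    -> 'elephant' | 'pajamas'
    V    -> 'shot'
    P    -> 'in'
""")

parser = nltk.ChartParser(pp_grammar)
sentence = "I shot an elephant in my pajamas".split()

# The chart parser returns every analysis the grammar licenses: two trees here,
# one with the PP attached to the VP, one with it attached to the NP.
for tree in parser.parse(sentence):
    print(tree)
```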

Why is this? Why is human language so defective? The question acquires bite when we acknowledge that language is a biological phenomenon: it is an adaptation shaped by natural selection, encoded in the genes, part of our birthright as a species. It is as if we have all been born with a defective heart or liver that does its job only fitfully and inefficiently. True, our bodily organs are not perfect—they can become diseased and break down—but they are not like our language faculty, which has the defect of ambiguity built right into its architecture. So the question must arise as to how such a defective biological trait originated and why it has not been improved upon over time. One would think there was some selection pressure against rampant ambiguity—that it would have been remedied over time. Yet there is no reason to believe that human language is moving towards less ambiguity, lexical or syntactic. It seems content to remain stuck in its current lamentably ambiguous condition. This is a puzzle: why does ambiguity exist, especially on such a large scale, and why does it persist? It looks like a design flaw of major proportions, so why is it biologically so entrenched? Why don’t we speak unambiguous languages? Why do constructions like “flying planes” exist at all? It is doubtful that comparable ambiguities afflict the languages (communication systems) of other species such as bees, birds, whales, and dolphins; and it would be bad if that were the case given that such languages are crucial to survival. So why does our species settle for anything so rickety and unreliable?

First we must recognize that this is a genuine puzzle—it really is strange that human language is so riddled with the defect in question. Why isn’t there a simple one-one pairing between sign and meaning? Why is the connection between sound and sense so loose? Let me compare linguistic ambiguity with what are called ambiguous figures, the kind found in psychology textbooks (e.g. the Necker cube or the duck-rabbit). These are aptly described as cases in which a given physical stimulus can be interpreted in two different ways—hence “ambiguous”. So isn’t the problem of ambiguity found outside the case of language, and isn’t it really not that much of a problem? But these cases are relatively rare and confined: they are generated by psychologists drawing sketchy pictures on pieces of paper. Seldom do we find anything comparable in nature: it is not as if vision by itself is constantly generating such ambiguities.[3]We might wonder whether a patch of shade yonder is a black cat or a shadow, but such cases are not common and don’t generally disrupt the purpose of vision. It is not that vision is biologically constructed so as to lead to such uncertainties of interpretation. But in the case of language the problem is endemic and structural: ambiguity is both common and practically consequential. If vision were as prone to ambiguity as language, we would find ourselves in trouble (imagine 32 ways to see a snake, most of them not as of a snake). Ambiguity in vision is sometimes a problem, but it is not ubiquitous enough to thwart the purpose of vision (i.e. gathering accurate information about the environment); if it were, we would expect natural selection to do its winnowing work. But ambiguity in language really is a practical problem, as well as an inherent design flaw: it cuts at the very heart of communication. The question, “What did she mean?” can be pressing and momentous. And the reason for ambiguity in vision is obvious enough: vision is an interpretative, hypothesis-generating process, proceeding from an often-exiguous basis in the stimulus environment, so it must sometimes boldly venture alternative hypotheses. But language has ambiguity built into its syntax, its rules of sentence formation. It is constitutionally ambiguous.

Sometimes a biological trait has a defect as an inevitable side effect of an adaptive characteristic. Thus it is with the human bipedal gait and large brain, or the giraffe’s elongated neck—there is a price to pay for the benefits conferred (in fact, this is true for all traits given that they all require nutritional upkeep). We can see this principle in operation in the case of those ambiguous figures—ambiguity as the price of inference. So could it be that the ambiguity of natural language results inexorably from some super-advantageous design feature? Suppose it resulted from the property of infinite productivity: you can only have that brilliant property if you also have some concomitant ambiguity. The trouble with this suggestion is that there is no obvious move from productivity to ambiguity—why should the former entail the latter? Mere combinatorial structure also doesn’t lead to ambiguity. Artificial languages are productive and combinatorial, but they don’t contain ambiguity. Nor can hypothesis generation be the explanation: true, we have to infer what someone means from the words he utters (along with context), but it is the words themselves that bear an ambiguous relation to meaning. The question is why language permits constructions like “Flying planes can be dangerous” to begin with. Why not just have the sentences, “Flying in planes can be dangerous” and “Planes in flight can be dangerous” (though the former sentence admits the reading, “Flying around inside of planes can be dangerous”)? The fact that the hearer is engaged in an inferential task doesn’t explain why ambiguity of this kind is so rampant and inbuilt. So it is hard to see how it could be a by-product of some desirable design feature; it looks like the product itself. If we just consider quantifier scope ambiguities, we see how inherent to natural languages ambiguity is—and that it is easily removable by some device equivalent to bracketing. So why does our language faculty tolerate it? Why not just clean up the mess?[4]
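
To see how little it would take to remove such ambiguity, consider a sentence like “Every critic admires some poet” (my example, not the author’s). Its two readings differ only in quantifier scope, and the explicit bracketing of elementary logic separates them completely:

```latex
% Reading (i): each critic admires some poet or other.
\forall x\,\bigl(\mathrm{Critic}(x) \rightarrow \exists y\,(\mathrm{Poet}(y) \wedge \mathrm{Admires}(x,y))\bigr)
% Reading (ii): there is one poet whom every critic admires.
\exists y\,\bigl(\mathrm{Poet}(y) \wedge \forall x\,(\mathrm{Critic}(x) \rightarrow \mathrm{Admires}(x,y))\bigr)
```

English provides no such brackets, so the single surface form has to serve for both readings.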

One possible explanation is that human language is so spectacular an adaptation that it can afford many rough edges and design failures (compare early wheels). It is so good that it can afford to harbor some vices—it’s still better than having no language at all. This may be backed up by the observation that human language is a recently evolved trait still going through its awkward adolescent phase—eventually it will mature into something more streamlined and fit for unqualified celebration. There may be something to this point, but it doesn’t remove all aspects of the puzzle, because it doesn’t tell us why language evolved with this defect to begin with when better options were in principle available, and there seems to be no evidence of any movement away from ambiguity heretofore. It might be suggested that ambiguity is like vagueness: it’s not a good thing, to be sure, but tolerable when the alternative is no language at all. I won’t consider this kind of answer further here, because it is difficult to evaluate without further evidence; but perhaps mentioning it serves to highlight the lengths we would need to go to in order to find an answer to our question. I don’t think we would find it plausible that the reason for intermittent blindness is that it is better to have occasionally blind eyes than none at all, but that is essentially what is being proposed by the explanation suggested—communicating by language is such a marvelous gift that serious defects in it can be lazily overlooked by the evolutionary process. Ambiguity is really not like the retinal blind spot. What if our language faculty enabled us to parse and understand only half of what is said? That would be rightly regarded as a grave defect, for which there is no obvious explanation. But ambiguity is rather like that—it really does impede successful communication. And even when it doesn’t, there have to be mechanisms and strategies that enable us to avoid its snares—it’s always less effort to understand an unambiguous sentence than an ambiguous one. Processing speech is certainly not aided by ambiguity. It’s not a blessing in disguise.

It is an interesting question where human language will be in the distant future. Will its present level of ambiguity survive or will it become more perspicuous? Are we now placed on a linguistic path that cannot be altered? What would it take to impose selective pressures on the ambiguity-producing structure of our grammar? At present we have an evolved capacity that tolerates rampant ambiguity, yet functions well enough to get by in normal conditions; but the architecture is fundamentally unsound, allowing for forms of words that could have many meanings apart from the one intended. Language should make things easier for the speaker and the hearer than it now does.[5]

 

Colin McGinn

[1]I have found only one paper dealing with the question addressed here (though I am by no means expert in the linguistics literature): “The Puzzle of Ambiguity”, by Wasow, Perfors, and Beaver. There is no date on the paper or place of publication (I found it on the internet), and to judge from its content the problem it discusses is not generally recognized. I welcome any information about other published work on the subject.

[2]I won’t discuss whether ambiguity exists in the underlying innate language prior to its expression in a particular sensory-motor format. It may be that all ambiguity exists at the level of spoken speech and results from the demands imposed by this medium; there may be no ambiguity at the more abstract level of universal grammar. I remain agnostic on this question.

[3]“Is it a bird? Is it a plane? No, it’s Superman!”

[4]It might be thought that ambiguity is useful as a means of concealment: say something that some people will take in one way while conveying a different message to others. But this is not a good explanation for why human languages are ambiguous in just the ways they are. It is grammatical rules themselves that allow sentences to be both grammatical and ambiguous, not pragmatic considerations of the kind just mentioned. Ambiguity surely didn’t evolve as a means of selective deception.

[5]Philosophers are apt to speak of natural language as logically imperfect, as judged by the standards of some ideal language; but ambiguity makes natural language biologically imperfect, because the biological function of speech is communication and ambiguity gets in the way of that. Imagine if a given monkey cry could mean either “Predator nearby” or “Food in the offing”! Compare “Get thee to the bank!” said by someone in the vicinity of both a river and a lending institution.


New Company: Philosophical Applications

I would like to announce the formation of a new company by myself and partners. It’s called Philosophical Applications and the website can be accessed at applyphil.com. I won’t explain it here because it is better explained there. You can also reach it by clicking on the Consulting button on this website. Any comments welcome.


Reading Jane

I’ve just finished reading all six of Jane Austen’s novels, beginning with Persuasion and ending with Northanger Abbey. These two (along with Mansfield Park) are often regarded as inferior to the big three of Emma, Pride and Prejudice, and Sense and Sensibility. But I must say I didn’t have that reaction: I loved them all equally, each in their different way—because the author shone through so luminously. I won’t even attempt to enumerate their virtues for fear of litotes. Indispensable reading. Enough said.


Grammatical Life


Colin McGinn: The Meaning of Life — Grammatical Life

Excellence Reporter: Prof. McGinn, what is the meaning of life?

Colin McGinn: I prefer to say that life is meaningful rather than that it has a meaning. It has a grammar, but no semantic interpretation. There is coherent form, but there is nothing outside human life that confers meaning on it (God, the Universe, Truth, the Good).

What are the components of this grammar?

It has an intellectual component, an aesthetic component, and an athletic component. For me these are mainly philosophy, music, and tennis (though other intellectual, aesthetic, and athletic components play a part). These are essentially activities—rule-governed creative activities. Not observing but doing. Philosophy follows the rules of logic, music is based on repeating patterns, games and sports have their constitutive rules. Yet all are creative, requiring effort and dedication (as well as talent): they are freedom disciplined by form. Nothing outside of them gives them significance; they are meaningful in themselves.

As you acquire mastery of language during childhood and later, so you work to acquire mastery of these categories of life-grammar. You try to speak and write well: similarly, you strive to have intelligent thoughts, good musical technique, and a beautiful backhand. This involves learning from others as well as constant practice. It requires internalization and externalization, competence and performance, inner knowledge and outer act. Living well is a skill.

It is important to combine these components, not to focus on one and neglect the others. Each thrives in combination with the others. A day is like a paragraph that brings these elements together—or a long, well-punctuated sentence (think Jane Austen). The intellect must be productively engaged, music played, a sport honed. Then the day will be grammatical and not ill-formed—meaningful, not nonsensical or incomplete. The human language faculty combines semantics, syntax, and phonetics; the human life faculty combines thinking, art, and sport. There is nothing outside language that gives it meaning (nothing “transcendent”), and yet it is meaningful; and there is nothing outside these human activities that gives them meaning, and yet they are meaningful. Meaningfulness is immanent, in the thing itself, not hovering over it.

And just as we are born to speak meaningfully, so we are born to live meaningful lives—we are miserable otherwise. (Indeed, I would say that speaking meaningfully is part of what makes life meaningful: for language is implicated in thought and communication, and is itself a wonderful thing.) These strivings for meaning are part of our innate nature, so that the potential for meaningful lives is in us from the start. To what degree we can achieve these aims, as the world currently exists, is another question.

***

~Colin McGinn was educated at Manchester University (Psychology, BA and MA, 1972) and Oxford University (Philosophy, B Phil, 1974). He went on to teach philosophy at University College London, Oxford University, and Rutgers University, with visiting positions at UCLA and Princeton. He has published 25 books and many articles and reviews, including Moral Literacy: Or How to Do the Right Thing, Ethics, Evil and Fiction, Philosophical Provocations, and Sport: a Philosopher’s Manual. He has written for many publications including The New York Times, The Wall Street Journal, The Washington Post, The LA Times, New York Review of Books, London Review of Books, Nature, and others. He has been interviewed many times (e.g. by the Times of London) and appeared on several TV shows (e.g. Bill Moyers). He has worked as a philosophical advisor to George Soros. He is an internationally acclaimed philosopher and teacher. Presently, he is Chief Philosophical Executive of the new consulting company Philosophical Applications.
www.ColinMcGinn.net

Copyright © 2019 Excellence Reporter


Speechless Language

Normally when a human being learns a language he or she learns to speak and be spoken to. Sounds are produced and understood. An acoustic ability is acquired. But this is not always so: some people learn language (e.g. the English language) without the aid of sound. They neither hear sound nor produce it. Instead they rely upon vision and gesture (or writing). Their language ability is not notably inferior to that of those who speak and hear. What does this tell us about human language? What, in particular, does it show about the initial state of the human language faculty?

Presumably there is no analogous phenomenon in the case of other species that use language (or a communicative symbol system). A deaf and mute bird doesn’t cleverly exploit its eyes to substitute for hearing sounds, resorting to a sign language or the written word. Similarly for whales, dolphins, and bees. For these species if you can’t speak you don’t have a usable language (dance in the case of bees). The innate language faculty is specifically geared to speech—to a particular sensory-motor system. There is no flexibility in mode of expression and reception, unlike with humans. Does this mean that human language ability is intrinsically purely cognitive? Is speech just a learned add-on to innate linguistic competence? We learn to speak in a particular accent in a particular language, but this is not a matter of innate endowment—is it the same for the sense modality we adopt? Each of us could have learned to communicate by sight and gesture, and without much difficulty, so is the human language faculty neutral with respect to sense modality? Is it just a convention or accident that we end up speaking language? Could it even be that our language faculty initially evolved as a visual-gestural system and only later became connected to our ears and vocal organs?[1]What if most people used a non-auditory medium for language—wouldn’t we then suppose that this is the “natural” way to communicate? We have chosen the acoustic route, but we could have gone visual without loss or inconvenience. Is the language faculty inherently indifferent to its mode of externalization? It certainly isn’t indifferent to syntax and semantics, but phonetics seems like one option among others.

It seems true to say that human language (unlike the language of other species) is more of a cognitive phenomenon than a sensory-motor one. For one thing, we use language in inner monologue not just in communication with others (I doubt this is true of bees and whales). The structure of language is a cognitive structure that can be present in a variety of sensory-motor contexts. But it would surely be wrong to suggest that we are not genetically disposed to speak: speech is biologically programmed in humans and it follows a fixed maturational schedule. Human speech organs are designed to aid speech; they are not just accidentally coopted for this purpose.[2]We don’t need to use these organs in order to master language, but it is surely natural that we do—it is certainly not a conscious choice! So is the human language faculty inherently acoustic or not? Neither alternative looks very plausible: it is possible (easy) to learn language without sounds and we are built to favor sounds. One might suppose that the case is somewhat like walking on the hands when born without functioning legs—an option of last resort. Do the deaf feel an inclination to speak and listen as infants, but find they cannot, and so resort to sign language? That doesn’t appear to be the case—they take quite naturally to the visual medium. There is certainly something modality-neutral about human language. On the other hand, we are clearly designed to speak—as we are not designed to play cricket.

Here is a possible theory: humans have two language faculties, one cognitive, and the other sensory-motor. Call this the dual capacity theory. Both are innate and genetically coded, but they can be dissociated, as they are in the deaf. We are familiar with the idea of distinct components in language mastery—the semantic component, the syntactic component, and the phonetic component—well, there are actually two linguistic faculties coded into our genes. This idea will not surprise those who favor the notion of a language of thought: this language might exist separately from our language of communication in our mental economy. They might not even have the same grammar. What the dual capacity theory suggests is that the faculty we use when we speak is itself divided into two—and the deaf use one of them but not the other. They use the same innate grammar as the rest of us, but they don’t use the same sensory-motor system (though there is no reason to deny that it is programmed into their genes). The eliciting or triggering stimulus for normal language development isn’t operative in their case, but they use exactly the same internal schematism. This explains why their language skills are comparable to the sound-dependent, while not denying that speech is the natural human condition (in a non-evaluative sense). That is, we are born to speak, but we don’t have to in order to master communicative language. There are two separate psychological modules. It would be possible in principle to retain the sensory-motor module while lacking the cognitive module, so that articulate speech is possible but there is no real understanding of the principles of grammar (this would be like those “talking” parrots).[3]Thus there can be double dissociation. Quite possibly the two modules evolved separately: maybe the cognitive module initially evolved as an intrapersonal aid to thought, to be followed later by a communicative faculty that recruited the older faculty. We tend to speak of the language faculty, as if we are dealing with a unitary structure, but in fact there are two of them—there is more structure here than we thought. The cognitive faculty has nothing intrinsically to do with speech, though it obviously gets hooked up to speech during ontogenesis, while the sensory-motor faculty has everything to do with speech. No such duality obtains in the case of other linguistic species, as is evidenced by the fact that deafness spells an end to language ability for them. At its core, we might say, human language is not a sensory-motor capacity—though there is nothing wrong with saying that speech embodies linguistic competence. We really have two kinds of competence (and two kinds of performance): competence in the universal principles of grammar, possessed by the hearing and the non-hearing alike; and competence in the production and perception of speech. The former has nothing intrinsically to do with the ears and vocal organs, while the latter is dedicated to that sensory-motor system. When it is said that a language is a pairing of sound and meaning, that is, strictly speaking, inaccurate (witness sign language), but it is true enough that the understanding of speech is such a pairing. Clarity is served by firmly distinguishing language and speech, but there is no need to deny that speech is the operation of a language faculty. To put it crudely, “language” is ambiguous.

The case might be compared to memory. We speak loosely of “the faculty of memory” but enquiry reveals that different things might be meant—there is not a single faculty of memory. There is long-term memory and short-term memory (and maybe others): these memory systems operate differently, permit of double dissociation, and no doubt have different genetic bases. Both are rightly designated “memory” and they have clear connections, with neither deserving the name more than the other, but they are distinct psychological faculties. Similarly, “language” applies to two psychological faculties, which can be dissociated, and which recruit different kinds of apparatus. When someone makes a general statement about “language”, we do well to ask him what human faculty he is referring to—speech or the more general capacity possessed by the deaf. Indeed, even that is too parochial, since we can conceive of language users who don’t have sight either but communicate by means of touch: they too have mastery of the grammar of human language (both universal and particular), but they don’t hear or see the words of language—they feel words (and cause others to feel them too). Their underlying linguistic competence is more “abstract” than any particular sense modality: but so is ours, despite our saturation in the acoustic. What is truly universal in human language is this abstract faculty that exists in people with different modes of expressing it—universals of speech are relatively confined.

Once we have made this distinction we can distinguish different domains of study: are we studying the universal abstract language faculty or are we studying its expression in specific peripheral sensory-motor systems? What is called “psycholinguistics” could be about either of these. Which properties of language belong to which faculty? No doubt the type of externalization will impose specific conditions on the form of what is expressed, but there will probably be universal patterns found across all modes of externalization (subject-predicate structure, say).[4]The temporal dimension of speech will affect its structure, along with the memory limits that accompany this, while the recursive property is likely to stem from the internal universal language. Combining phonemes is not the same as combining the lexical elements that constitute the common human language. Particularly intriguing is the question of maturation: do the two language faculties develop in the same way and at the same time? It could be that the internal language develops more rapidly and serves as the foundation for the development of speech (or sign language). It is not constrained by motor maturation and may be more “adult” than its external counterpart. If we think of language development as a process of differentiation, it may differentiate at a different rate from external speech—and proceed from a different basis. It may permit inner speech before the onset of outer speech. We certainly can’t infer its maturational schedule just by observing the growth of outer speech. With respect to evolution, it may be that the cognitive language faculty evolved much earlier than the vocal language faculty, which is thought to be relatively recent (about 200,000 years ago). We might have been using language for much longer than we have been speaking it. The larynx is a late accretion to language use, and a dispensable one.[5]

 

Colin McGinn

[1]I consider this hypothesis in Prehension (2015).

[2]Caution: not originally so designed—vocalization long preceded speech in humans—but refined in the direction of speech since speech began (compare hands and tools).

[3]It is a question how language-like the sensory-motor system would be without the backing of the cognitive system. Subtracting speech from the human subject leaves language intact, as shown by the deaf, but what if we subtract the internal language faculty from the activity of speech?  Would we still have full productivity? Would grammar really exist for the sounds that emanate? This is an empirical question and not an easy one to answer. My suspicion is that we would get substantial degradation, but it may be that humans have evolved a good deal of autonomy in the speech centers of the brain, so that speech might exhibit many of the properties of the internal modality-neutral language faculty. Just as language ability is largely independent of general intelligence, so speech ability might be largely independent of cognitive-language ability. Certainly it is logically possible for there to be an autonomous faculty of productive grammatical speech in addition to a similar faculty for the inward employment of language—that is, one faculty for speaking and another for thinking in words.  The question is like the question of how much of perception would survive without cognition.

[4]Chomsky makes this point. The internal language could be a lot simpler, structurally, than external speech, because of the constraints imposed by the sensory-motor system. There might be no gap between deep and surface structure in the internal language, with no transformations linking them.

[5]To simplify somewhat, there are three possible positions: language is only speech (traditional linguistics); language isn’t speech at all (Chomsky today); language is both speech and something else (an internal cognitive structure) (me). These questions remain murky and it is helpful to open up the theoretical options, though the speech-centric position is surely indefensible. (I’m grateful to Noam Chomsky for helpful comments.)


Differentiation and Integration

According to standard embryology, the process of ontogenesis is characterized by organic differentiation. From an initially homogeneous collection of cells, tissues of various kinds are formed. This is no doubt powered by genetic instructions from within the original uniform cells. Maturation is thus a transformation from sameness to diversity as tissues develop in the body according to a fixed schedule. The end result is an organism composed of many organs, each equipped with a characteristic type of cell and associated physiology—heart, lungs, bones, kidneys, etc. As differentiation occurs there is a need for coordination between the new types of tissue and the organs that tissue serves, and the adult organism clearly contains components that need to be integrated into a functioning whole. Thus there is no differentiation without integration (and vice versa). These are the twin pillars of biological development: genetically driven diversification and concomitant integration of the elements thereby generated.

This basic picture can be applied to language development. Initially there are just undifferentiated cooing sounds existing alongside the inarticulate sounds of crying, but these soon come to be replaced by language-like sounds (consonants and vowels) without structural complexity. These in turn are replaced by one-word sentences (“dada”, “milk”) that display the rudiments of language. Only later do these come to be combined into two-word strings, and subsequently into the full range of syntactic and semantic categories. The details don’t matter for present purposes: what concerns us is that language development follows a pattern of differentiation analogous to that undergone by the body.[1]A relatively formless initial state is gradually transformed into a highly structured system of elements that combine together. Anatomy develops by differentiation, but so does grammar. When the initial unstructured sentences are transformed into noun phrases and verb phrases there has been a process of differentiation comparable to the formation of heart and lungs (or the internal anatomy of each). This is not surprising once we accept that language is itself a biological phenomenon—an aspect of the human organism. And it makes logical sense: you derive an intricately structured organism from unstructured beginnings by a process of differentiation (it would be difficult to implement such differentiation in the sperm and egg). Language doesn’t emerge fully formed in the child but matures in the brain by a process of increasing structure and complexity. It grows by splitting into different functional units—that must nevertheless be integrated if they are to achieve their purpose. We are accustomed to the idea that language is a system built for integration—producing sentences by combinations of words—but we must also recognize that it is a system that arises by differentiation from something more primitive.[2]It was once a growing thing in the child and only reaches stasis after a lengthy sequence of differentiating stages—just like other human organs. Language is a product of gradual ontogenesis, which only later achieves its full combinatorial power. The adult language faculty enables integration ad infinitum, but at one time it was without much in the way of internal structure (relatively speaking). So let us add to the productive power of language its origins in a simpler form of living tissue. Grammar is the form that linguistic differentiation takes in human ontogenesis—just like anatomy and physiology. Linguistic differentiation is biologically at one with histological differentiation. Language thus follows the same basic pattern of organic growth. How else could the architecture of language arise?

But if this is true of language, isn’t it also true of the mind more generally? The various components (“organs”) of the mind must be integrated in order to function as a unity, but first they had to develop by some sort of maturational process. Perception, cognition, emotion, will—all need to emerge as distinct systems during ontogenesis, to be integrated later (or pari passu). But what was the initial state? Some may speak darkly of a blank slate, but that would need to be supplemented by an account of how such a state could be progressively differentiated. Something has to turn into the various faculties of mind—some state of the pre-natal brain. About this process we know little to nothing, yet it must be so in some way. What is the analogue to the one-word sentence or those even more primitive cooing sounds? And how did concepts emerge by differentiation? They were not present fully formed in the fetus’s brain but arose by a maturational process to become the vast combinatorial system we now deploy with such consummate ease. Presumably they arose by a process of differentiation: from William James’ “blooming, buzzing confusion” (whatever that means) to an articulated array of combinable elements. This maturational differentiation must be genetically driven, like other biological growth, but it results in a psychological faculty far removed from the initial state of the organism. Again, we know little to nothing about how this works, but we have good grounds for supposing a distinct path of differentiation and integration. One can certainly imagine that unstructured thoughts might with time transform into thoughts with a subject-predicate structure—and thence into more complex forms. First there were feature-placing thoughts (the mental counterpart of “It’s raining”) and later they turned into structured thoughts (along the lines of “There is heavy rain in London now”). Conceptual differentiation created the panoply of concepts we now take for granted.

Coordinating conceptual differentiation and integration is a highly non-trivial task, as it is in the case of the body and language. Once you have the plurality you need to keep it under control. The brain is the ringmaster here. Grammar is encoded in the brain and it sets the rules for combining words; in the case of concepts something analogous must be true—rules that combine concepts in certain acceptable ways (not just arbitrary lists or jumbles). So differentiation combined with integration requires rules or principles of coordination. The heart must be coordinated with the lungs to produce satisfactory aerobic performance; similarly concepts must be coordinated in the right way to produce intelligible thoughts. Integration doesn’t happen by magic. The more differentiation there is, the greater the demands of integration. An organism with very simple thoughts doesn’t need much apparatus to keep its thoughts on track, but an organism like us relies on mechanisms that prevent thoughts from forming defectively or randomly. In aphasia these mechanisms can fail, leaving words unable to link up correctly; in principle the same thing could happen to thoughts—concepts fail to link up to form coherent thoughts. It is surprising that more breakdowns of this kind don’t occur.[3]If there is a language of thought, there ought to be aphasia in that language if the brain is suitably damaged, producing aphasic thought. The differentiation and integration of concepts will be tied to linguistic differentiation and integration in the language of thought.

The innate language faculty thus has two basic properties: (a) it permits unbounded productivity in its mature form, and (b) it enables a stupendous feat of differentiation as it guides the maturation of language in the child’s brain. It is as creative in the latter respect as in the former (though it doesn’t get as much credit for the latter). The adult lexicon is the product of maturational differentiation (how, we don’t know); sentence production is the outcome of integration rules. Both are built into the genetic blueprint for language. Nor does differentiation cease at normal linguistic maturity, since we continue to make linguistic and conceptual distinctions. The differentiation machine doesn’t go completely offline, its job done; it allows us to make ever-finer distinctions that aid thought. So it isn’t that one kind of creativity completely ends to be replaced by another; we are still able creatively to generate distinctions (though it doesn’t come as naturally as during childhood). Distinction making is as crucial to language development as the growth of the ability to combine existing elements. So I propose conceiving of the language faculty (and the conceptual faculty) as a union of differentiation and integration: it allows the combination of pre-formed elements, to be sure, but it also generates those elements by a (mysterious) process of differentiation. When abnormalities arrest language development we see in sharper outline the maturational stages speakers go through—we see how the differentiation process can be blocked (the same is true of human bodily growth). As adults we tend to forget this early history, but it is as essential to our mastery of language as the growth of the heart is to our survival. The language faculty is as much a creative product as it is a creative producer.

I said that differentiation and integration are the basic laws of biology (so far as concerns ontogenesis), but they are also relevant to evolutionary change. For what is species evolution but biological differentiation? Natural selection causes species differentiation (along with other factors), though mechanically not by pre-set program. There is no predetermined evolutionary schedule like the maturational schedule. Thus simpler forms evolve into more complex forms, splitting off to make a new kind of biological entity. Phylogeny recapitulates ontogeny. However, there is no real analogue to integration, since the different species don’t operate together to form a larger whole. True, it used to be thought that this might be so, as if each species had its functional role in the super-entity called Nature (“the biosphere”); but these days we tend to think that the entities that have arisen by evolutionary differentiation are independent entities subject to no coordinating principles. They are not like organs in a body or words in a language. There is no “grammar of nature”. The evolution of species is differentiation without integration.[4]

 

Colin McGinn

[1]This point of view is defended by Eric Lenneberg in Biological Foundations of Language (1967), esp. Chapter Seven.

[2]This is in no way incompatible with the nativist account of language acquisition: the genetic instructions for generating full-fledged language are present at birth, but that is consistent with the existence of a maturational schedule that involves cellular and cognitive differentiation. Similarly, instructions for building a heart at a certain maturational stage are present at birth, but it takes time for the actual organ to be constructed by a process of differentiation. Nouns and verbs emerge at a certain maturational stage (the second year of life), but the program for making them was written in the genes.

[3]Drunkenness can cause breakdowns of motor coordination analogous to earlier stages of motor development (a kind of regression), including speech difficulties; but it doesn’t appear that drunkenness can derail the performance of the language of thought—we keep thinking coherently as we stagger and trip, slur and mumble.

[4]Suppose there was a symbiotic parasite that entered the brains of host species and conferred language on the host—the parasite contains grammar. The parasite might instigate a series of maturational stages of language development in the host just like those of humans. It is functioning as an organ of the host’s body/mind—much as symbiosis in general has this character. This would provide a sense in which different species might operate as a unity.


Phenomenological Ignorance

We can’t know what it’s like to be a bat. This is an instance of a more general truth: no one can grasp the nature of experiences that are radically different from their own. We can grasp the nature of experiences similar to our own, but we can’t grasp experiences that are qualitatively different from ours. We are ignorant of phenomenological facts that diverge from our own. Bats can know what it’s like to be a bat, and so presumably can dolphins, which employ a similar echolocation sense; but beings that have no such sense are in the dark about the experiences involved. It is the same story for the congenitally blind: they can’t know what it’s like to see—as the deaf can’t know what it’s like to hear, or the pain-free to understand what pain is, or the nasally challenged to appreciate smells, or the emotionless to know what anger is. In the realm of the phenomenological there are sharp constraints on what is knowable and by whom. You can’t even know what it’s like to experience red if you have only experienced blue. This is an epistemic limitation—a limitation on what can be known, understood, or grasped. It is not an absolute limitation—a reflection of the intrinsic nature of the fact in question—since it can be overcome by creatures that happen to participate in that fact; it is a relative limitation—X can’t be known by Y (though it can be known by Z). It isn’t universal ignorance but creature-relative ignorance.

The question I am concerned with is why such ignorance exists: what is its explanation? We have a kind of extrapolation problem: how do I move from knowledge of my own phenomenology to knowledge of the phenomenology of others? It appears that I can do this when there is similarity, but not when there is (radical) difference. My ability to extrapolate is blocked by dissimilarity. The question is why such extrapolation limitations exist. To see the problem let us review some cases in which there are no such extrapolation restrictions. Consider geometry: are we limited only to knowledge of shapes we have encountered? Are alien geometries incomprehensible to us? We have certainly not experienced all possible polygons, so what about those that lie beyond our geometrical experience? Here the answer is obvious: we are not so limited. To simplify, suppose a person never to have experienced rectilinear figures but only curvilinear ones, so that he has never seen a triangle (say). Does that mean he can’t understand what a triangle is? No, it can be explained to him perfectly well and he will thereby understand the word “triangle”. So while we can’t grasp a type of experience we have never encountered in ourselves, we can grasp a type of geometrical figure we have never encountered in the perceptible world. We can extrapolate in the latter case but not in the former. We don’t have acquaintance-restricted knowledge in geometry, but we do in phenomenology. There are gaps in our understanding where experience is concerned, but not where shapes are concerned. It is the same for animal species: you don’t need to have seen an elephant to know what an elephant is (or a bat). Elephants can be described to you, pictured, and imagined; and they don’t need to be similar to animals you have seen with your own eyes. You know what an animal is and you understand what kind of animal an elephant is by description. But you don’t know what kind of experience a bat has even though it has been described to you (based on echoes, having such and such brain correlates, etc.). Also: suppose you had never heard of odd numbers, having been brought up only to deal with even numbers. That would not prevent you grasping the concept of an odd number once someone explained it to you. There are no irremediable gaps in our grasp of numbers analogous to the gaps in our grasp of phenomenology. Likewise, our knowledge of astronomy is not limited by the extent of our acquaintance: we grasp the concept of remote and alien galaxies without ever experiencing them. But our general concept of experience doesn’t enable us to fill in the gaps in our acquaintance with experience: we can’t say, “Oh, bat experience is simply this” and feel that we know what we are talking about. Our knowledge of phenomenology is thus gappy in a way our knowledge of other things is not. We can’t use a form of induction to extrapolate to types of experience that we have not ourselves directly (introspectively) encountered. The question is why. And the question should seem pressing, because the epistemic limitation is so anomalous and local—in general, there are no such limitations on knowledge.[1]It is surprising that we don’t know what it’s like to be a bat.

We must canvass some putative explanations. One possible explanation is that experiences concern the mind, while the other cases I mentioned concern the non-mental world. But this explanation is inadequate because (a) some facts about the mind are not so limited and (b) there are facts about physical objects that are subject to the same limitation. I can understand what beliefs you have even though they are quite alien to me—their odd content isn’t an obstacle to my knowledge; and I can’t grasp a color that I have not seen if it is different from any I have seen. Color blindness will result in color ignorance, even though colors are perceptible properties of physical objects; but my unfamiliarity with crazy conspiracy theories isn’t an impediment to my knowing what weird belief is in question. So the epistemic limitation we are interested in isn’t just a reflection of a general truth about knowledge of the mind versus knowledge of non-mental things. But even if it were such an instance that would not answer our question, because that question would now shift to the more general question: how come we can extrapolate about things outside the mind but not things inside the mind? What is the source of that difference?

A more promising suggestion is that the realm of experience is simply less homogeneous than the realm of the physical (to speak loosely), so that it would involve greater cognitive leaps to extrapolate across this realm. Geometry is about essentially similar things while phenomenology includes very diverse things. But by what criterion is bat experience so different from (say) visual experience while circles and squares are deemed essentially similar? The concept of similarity will not bear this kind of weight. Some people have urged that bat experience is not really all that different from ours: it is a type of auditory experience for one thing, and for another it has many of the properties of visual experience (a distance sense used to navigate and locate objects in space). These points may be conceded while still insisting on the alien character of such experience: but then how are triangles and circles to be supposed more similar? One loses one’s grip on what notion of similarity is at issue here. There is really no objective basis for distinguishing the cases; the difference arises rather from our mode of knowledge in the two cases. The phenomenological realm is not objectively more diverse than the geometrical realm (or the mathematical realm or the zoological realm); it is rather that our method of knowing somehow differs—we find it easier to extrapolate in the one case than the other. But why is that?

Along the same lines it might be said that we are actually just as limited in geometry as we are in phenomenology, because geometry also includes extreme knowledge-blocking diversity. Thus non-Euclidean geometry might be said to differ dramatically from Euclidean geometry—as radically as bat experience differs from human experience—so that it is impossible to extrapolate from one to the other. Accordingly, we don’t really grasp non-Euclidean geometry, just as we don’t grasp non-human phenomenology. Since there is then no epistemological asymmetry between the cases, there is nothing to explain—no epistemological anomaly to account for. Alien geometry is as incomprehensible as alien phenomenology (and the same might be said for such things as irrational numbers or alien types of animal). The weakness of this position is that it is by no means clear that there is any epistemic limitation attending the allegedly alien types of fact. We do grasp non-Euclidean geometry (and irrational numbers and the platypus). So the epistemic asymmetry still exists in undiluted form. The puzzle thus persists as to what the basis of the asymmetry might be: why is it harder to know one thing than the other? What makes alien phenomenology peculiarly recalcitrant to understanding?

Here is a completely different approach: alien phenomenology is like alien language. Humans are born with a specifically structured language capacity that prepares them for the particular languages they will encounter, but it is not suitable for the acquisition of languages with a different kind of structure. The human language faculty will not work to produce knowledge of alien grammars—as it might be, non-discrete elements that combine according to quite different grammatical principles from those of natural human languages (no recursion, for example); or don’t combine at all. It is dedicated and differentially structured, not an all-purpose learning device. If you place a human infant in a linguistic environment that is radically alien, she will not end up with knowledge of the language in question. Suppose bats were to speak such a language: the human child would not come to know its grammar and speak it like a native upon exposure to that language. Linguistic knowledge is thus subject to epistemic limitation as part of its innate character. At best a person might laboriously decipher the grammar of a radically alien language and speak it awkwardly and unnaturally—rather as someone might develop an abstract and unintuitive conception of bat experience. So the suggestion is that the reason we don’t grasp bat phenomenology is that our innate phenomenology module isn’t designed to extend to types of phenomenology that are alien to our own. That is, our innate knowledge of phenomenology is restricted to types of mind whose phenomenological “grammar” matches our own. It is not that alien grammars are objectively more difficult or complex than human grammar; it is just that there is a bias built into the human language module that favors one type of grammar over others. From an evolutionary point of view, it is important for us to have a solid grasp of our own minds (“theory of mind”), so we are genetically equipped with such knowledge; but there is no biological reason to have a solid grasp of bat minds, so lacunae there are acceptable. It is not as if there has been natural selection operating on humans to improve their grasp of bat psychology! Human phenomenological knowledge is domain-specific and geared to our environmental niche, so it is simply not designed to cover bats and their ilk.

This theory of phenomenological ignorance has the look of what we are seeking, but it might be wondered whether it is strong enough to deliver the epistemic limitation that apparently exists. In the language case, as noted, it is possible in principle for us to overcome our innate bias and acquire knowledge of the grammar of an alien language, albeit laboriously; but is it possible in principle to come to know what it’s like to be a bat? Isn’t that limitation a lot harder to overcome? I don’t know the answer to this: I don’t know whether intensive training, especially during the sensitive periods of child learning, could yield intuitive knowledge of bat phenomenology. Certainly, given that the experiential modality is auditory, the building blocks are there, and maybe training in echo-navigation in the first few years of life could produce a sense of the structure and operations of bat experience (hearing aids would help). So the obstacle may not be insurmountable. Also, we can surely imagine beings that can’t overcome their linguistic bias and so can’t learn an alien language even in principle. So the cases might not be as all-or-nothing as they seem at first sight. The idea of an innate phenomenology module certainly seems intelligible enough, and it delivers an explanation of the puzzling asymmetry I have noted. Just as we have an innate module for our belief-desire theory of mind, so we have an innate module for our phenomenological theory of mind. We could have been born knowing what bat experience is like, as we are born knowing what human experience is like; but actually we aren’t and that produces the epistemic gaps in question. Our knowledge of geometry, arithmetic, and zoology is different, not being based on a selective module like the language faculty (or not as selective); but our knowledge of phenomenology is sharply constrained and not easily overcome (if at all). We are not, as they say, plastic when it comes to phenomenology.

Notice that according to this theory it is not really correct to suppose that our knowledge of phenomenology is based on our acquaintance with our own experience. That is an empiricist theory of such knowledge analogous to empiricist theories of knowledge of the external world (we possess concepts by abstraction of properties from perceived particulars). But the nativist theory of phenomenological knowledge holds instead that we have such knowledge independently of acquaintance with our experiences. We know what experiential types are without any such operation of acquaintance and associated abstraction, but innately. So the problem isn’t that what we abstract from our own experience doesn’t fit the experience of bats, but rather that we are not innately equipped with a faculty of knowledge that includes the knowledge in question. In short, we don’t know bat psychology innately. No doubt acquaintance plays a triggering role in the production of phenomenological knowledge, but it doesn’t play an originative role (as with other innate knowledge systems). Maybe even producing actual bat experience in us wouldn’t by itself be sufficient for knowledge of the nature of bat experience, because that requires the cooperation of an innate faculty of phenomenological knowledge geared to bat experiences, which is absent in humans. In any case, an empiricist theory of phenomenological knowledge is by no means to be assumed (if it’s false generally, why should it be true here?).[2]

I want to emphasize not so much the proposed solution as the problem it is designed to solve. We know there are phenomenological facts in the world with a nature we can’t grasp, though other beings can; this poses a problem of explanation. The case looks very different from other types of knowledge. It is a non-trivial question why this is so, raising some deep issues. It challenges conceptions of knowledge that have long been entrenched, concerning how we know the nature of our own experiences. How exactly do I know what pain is, or experiences of red, or anger?

 

Colin

[1]I don’t mean there are no other limits to knowledge, just that there are no limits of the kind we find with respect to phenomenology, i.e. extrapolation problems across a single domain.

[2]It is an interesting fact that an empiricist theory of phenomenological knowledge is attractive even to people who are skeptical of an empiricist theory of knowledge of physical objects. I’m not sure why this is—is it because it is easy to conflate experience with knowledge of experience? In fact, actual bats may not know what it’s like to be a bat simply because they lack this kind of introspective knowledge, despite possessing the corresponding experiences.


Biological Philosophy of Language

Linguistics has grown accustomed to viewing human language as a biological phenomenon. This view stands opposed to two other views: supernaturalism and cultural determination. Ancient thought conceived of language as a gift from God, closely adjoined to the immaterial soul: this accounted for its origin, its seemingly miraculous nature, and its uniqueness to the human species (we are God’s chosen ones). Recent thought instead insisted that language is a cultural product, a human invention, an artifact: this too accounts for its origin, nature, and uniqueness to humans (only humans have this kind of creative power). Both views deny that language is a species-specific adaptation driven by natural selection and arising in the individual by a process of organic maturation—rather like other natural organs. The “biological turn” in linguistics maintains that language is not supernatural or cultural but genetically based, largely innate, founded in physiology, modular, a product of blind evolution, organically structured, developmentally involuntary, invariant across the human species, and part of our natural history. Biological naturalism is the right way to think about language.[1] No one would doubt this in the case of the “languages” (systems of communication) of other species like bees, birds, whales, and dolphins; human language is also part of our biological heritage and our phenotype (as well as our genotype). But this perspective, though now standard in linguistics, is not shared by contemporary philosophy of language: we don’t see these questions framed as questions of biology. Not that existing philosophy of language overtly adopts a supernatural or cultural conception of the nature of language in preference to a biological conception; rather, it is studiously neutral on the issue. The question I want to address is whether the received debates in philosophy of language can be recast as questions of biology, in line with the prevailing biological perspective in linguistics. And I shall suggest that they can, illuminatingly so. I thus propose that philosophy of language take a biological turn and recognize that it is dealing with questions of natural biology (if the pleonasm may be excused). This will require no excision of questions but merely a reformulation of them. Philosophy of language is already steeped in biology.

Let’s start with something relatively innocuous: the productivity of language. Instead of seeing this as a reflection of God’s infinite nature or the creative power of human invention, we see it as a natural fact about the structure of a certain biological trait, analogous to the structure of the eye or the musculature. Finitely many lexical units combine to generate a potential infinity of possible sentences—that is just a genetically encoded fact about the human brain. It arose by some sort of mutation and it develops during the course of individual maturation according to a predetermined schedule. It is humanly universal and invariant just like human anatomy and physiology. It should not be viewed as a purely formal or mathematical structure but as an organic part of the human animal. So when the philosopher of language remarks on the ability of speakers to construct infinitely many sentences from a finite set of words by recursive procedures, he or she is recording a biological fact about the human species—just like bipedal posture or locomotion or copulation or digestion. Nothing prevents us from saying that the human phenotype includes an organ capable of unbounded productivity—the language faculty. It isn’t supernatural and it isn’t cultural (whatever exactly this means). It is, we might say, animal.
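
Purely by way of illustration (the toy grammar below is my own invention, not anything drawn from the linguistics literature), a few lines of code make the point about finite means and unbounded output vivid: a three-item lexicon plus a single recursive combination rule already generates an ever-growing stock of distinct sentences.

```python
# Toy sketch: a finite lexicon plus one recursive rule (join two sentences
# with "and") yields unboundedly many distinct sentences as depth increases.

lexicon = ["planes fly", "birds sing", "whales dive"]  # finite stock of base sentences

def sentences(depth):
    """Yield every sentence derivable using the recursive rule at most `depth` levels deep."""
    if depth == 0:
        yield from lexicon
        return
    yield from sentences(depth - 1)
    for left in sentences(depth - 1):
        for right in sentences(depth - 1):
            yield f"{left} and {right}"

for d in range(3):
    print(d, len(set(sentences(d))))  # prints 3, 12, 120 — growth without bound
```

Nothing hangs on the particular rule chosen; any single recursive rule operating over a finite base has the same effect, which is the structural fact the productivity claim records.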

But what about theories of meaning—are they also biological theories in disguise? The biological naturalist says yes: truth conditions, for example, are a biological trait of certain biological entities. The entities are sentences (strings of mental representations—“words”) and their having truth conditions is a biological fact about them. Truth conditions evolved in the not too distant past, they mature in the individual’s brain, and they perform a biological function. Truth conditions constitute meaning (according to the theory), and having meaning is a trait of certain external actions and internal symbols. Meanings are as organic as eyeballs. So a theory of meaning is a theory of a certain biological phenomenon—a biological theory. It says that the trait of meaning is the trait of having truth conditions. Suppose we base the theory on Tarski’s theory of truth: then Tarski’s definition of truth for formalized languages is really a recursive theory of an organic structure. It is mathematical biology. Sentences are part of biology and their having truth conditions is too; so a theory of truth is tacitly an exercise in biological description. No one would doubt this for a theory of bee language or whale language, because there is no resistance to the idea that these are biological traits—a theory of truth conditions here would naturally be interpreted as a theory of a biological phenomenon. Bee dances don’t have their truth conditions in virtue of the bee god or bee culture, but in virtue of genetically based, hardwired facts of bee physiology. It isn’t that bees collectively decide to award their dances with meanings—and neither do human infants decide such things. Sentences have truth conditions in virtue of biological facts about their users, whether bee or human. Semantics is biology.
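
For readers who want the formal machinery in view, here are the Tarski-style clauses for a simple first-order fragment, in standard textbook form (nothing here is specific to the biological reading; on that reading, this recursion on syntactic structure would be taken as a recursive description of an organic structure):

```latex
% Schematic Tarski-style satisfaction clauses for a first-order fragment
% (requires amsmath); truth is defined by recursion on syntactic structure.
\begin{align*}
  s \models P(t_1,\dots,t_n) &\iff \langle \mathrm{den}_s(t_1),\dots,\mathrm{den}_s(t_n)\rangle \in \mathrm{ext}(P) \\
  s \models \neg\varphi &\iff s \not\models \varphi \\
  s \models (\varphi \wedge \psi) &\iff s \models \varphi \text{ and } s \models \psi \\
  s \models \exists x\,\varphi &\iff \text{some $x$-variant $s'$ of $s$ satisfies } \varphi
\end{align*}
% A sentence is true iff it is satisfied by every (equivalently, some) assignment s.
```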

Consider Davidson’s project of translating sentences of natural language into sentences of predicate calculus and then applying Tarski’s theory to them. Suppose that, contrary to fact, there existed a species that spoke only a language with the structure of predicate calculus; and suppose too that we evolved from this species. It would then be plausible to suppose that our language faculty descended from theirs with certain enrichments and ornamentations. Then Davidson could claim that their language gives the logical form of our language and that it can in principle translate the entirety of our language. This would be a straightforward biological theory, claiming that one evolved trait is equivalent (more or less) to another evolved trait. The “deep structure” of one trait is manifest in another trait. Likewise, if we view a formalized language as really a fragment of our natural language, then a claim like Davidson’s is just the claim that one trait of ours is semantically equivalent to another trait—that is, its semantic character is exhausted by the formalized fragment, the rest being merely stylistic flourish. For example, the biological adaptation of adverbs is nothing more than the surface appearance of the underlying trait of predicates combining with quantification over events. Thus we convert the Davidsonian program into a biological enterprise—to describe one trait in terms of other traits. This is the analogue of claiming that the anatomy of the hand is really the anatomy of the foot, because hands evolved from feet—just as our language evolved from the more “primitive” language of our predicate-calculus-speaking ancestors in my imaginary example. Our language organ is both meaningful and combinatorial, and Davidson has a theory about what these traits consist in: he is a kind of anatomist of the language faculty.
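
The adverb example can be made concrete with the standard Davidsonian treatment, familiar from “The Logical Form of Action Sentences,” in which each adverbial modifier becomes a predicate of an existentially quantified event variable:

```latex
% Davidson-style event analysis: modifiers are conjuncts predicated of the
% event, so dropping a modifier is just conjunction elimination.
\[
\text{``Jones buttered the toast slowly in the bathroom''}
\;\Longrightarrow\;
\exists e\,[\mathrm{Buttered}(\mathrm{Jones},\,\mathrm{the\ toast},\,e)
  \wedge \mathrm{Slowly}(e) \wedge \mathrm{In}(e,\,\mathrm{the\ bathroom})]
\]
```

On the biological reading proposed here, the right-hand side is a hypothesis about the underlying trait of which the adverbial surface form is the outward appearance.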

Then what is Dummett up to? He is contending that the trait of meaning is not actually the trait of having truth conditions but rather the trait of having verification conditions.[2] We don’t have the former trait because it has no functional utility so far as communication is concerned (it can’t be “manifested”). So Dummett is claiming that a better biological theory is provided by verification conditions. This is a bit like claiming that the function of the eye is not to register distal conditions but to respond to more proximate facts about the perceiver, these being of greater concern to the organism (cf. sense-datum theory and phenomenalism); or that the function of feathers is not flight but thermal regulation (as apparently it was for dinosaurs). Dummett is a kind of skeptic about orthodox descriptions of biological traits. He might be compared to someone who claims that there are no traits for aiding species or group survival but only traits for aiding individual or gene survival (“the selfish meaning”). Quine is in much the same camp: he claims that nothing has the trait of determinate meaning, whether truth conditions or verification conditions. The alleged trait of meaning is like the ill-starred entelechy—a piece of outdated mythology. A proper science of organisms will dispense with such airy-fairy nonsense and stick to physical inputs and outputs. For Quine, meaning is bad biology. Nor would Quine be very sanguine about the notion of biological function: for what is to stop us from saying that the function of the wolves’ jaws is to catch undetached rabbit parts? Our usual assignments of function are far too specific to be justified by the physical facts, so we should dispense with them altogether. We need desert landscape biology: no vital spirits, no meanings, and no functions, just bodies being stimulated and responding to stimulation—Pavlovian (Skinnerian) biology. Quine is really a biological eliminativist.

Where does Wittgenstein fit in? He emerges as a biological pluralist and expansionist. He denies that morphology is everything; he prefers to emphasize the biological deed. He forthrightly asserts that language is part of our “natural history” (not much discussion of genetics though).[3] The Tractatus employed an austere biology of pictures and propositions, while the Investigations plumps for a great variety of sentences and words as making up human linguistic life. Wittgenstein is like a zoologist who once thought there were only mammals in the world and now discovers that there are many types of species very different from each other. He also decides it is better to describe them accurately than try to force them into predetermined forms. His landscape is profuse and open-ended, like a Brazilian jungle. He is resolutely naturalistic in the sense of rejecting all supernatural (“sublime”) conceptions of language. What he would have made of Chomsky I don’t know, but he would surely have applauded Chomsky’s focus on the natural facts and phases of a child’s use of language. His anti-intellectualism about meaning (and the mind generally) is certainly congenial to the biological point of view.

What about Frege? Frege is the D’Arcy Thompson of philosophical linguistics, seeking the mathematical laws of the anatomy of thought. He discerns very general structures of a binary nature (sense and reference, object and concept, function and argument) and finds them repeated everywhere, like the recurrent body-plans of the anatomical biologist. The human skeleton resembles the skeletons of other mammals and indeed of fish (from which all are derived), and Frege finds the same abstract structure in the most diverse of sentences (function and argument is everywhere, like the spinal column or cells). But these abstract structures are not antithetical to biology, just its most general features. When a laryngeal event occurs it carries with it a cargo of semantic apparatus that confers meaning on it, intricate and layered. The speech organs are impregnated with sense and reference as a matter of their very biology, not bestowed by God or human stipulation (the underlying thoughts are certainly not imbued with sense and reference as a matter of culture). Thus it is easy to transpose Frege’s logical system into a biological key—whether Frege himself would approve or not. Again, we should think of the developing infant acquiring a spoken language: his words have sense and reference as a matter of course, not as a matter of cultural instigation—this is why language precedes culture for the child. Acquiring language is no more cultural than puberty is cultural (and I have never heard of an ancient theory to the effect that puberty is a gift from God). Meaning comes with the territory, and the territory is thoroughly biological.

Ordinary language philosophy? Why, it’s just ecologically realistic biological theorizing, instead of rigid attachment to over-simple paradigms. It’s rich linguistic ethology instead of desiccated linguistic anatomy. It’s looking at how the human animal actually behaves in the wild instead of clinically dissecting it on the laboratory table. Austin, Grice, Strawson—all theorists of in situ linguistic behavior. Nothing in their work negates the idea of an innate language faculty expressed in acts of speech and subject to biological constraints. When Austin analyzes a speech act into its locutionary meaning and its illocutionary force, he is dissecting an act with a biological substructure, because the language faculty that permits the act is structured in that way. Words are strung together according to biologically determined rules, and the same is true of different types of illocutionary force. Zoology took an ethological turn when scientists stopped examining rats and pigeons in the laboratory and turned their attention to animal behavior in its natural setting; ordinary language philosophy did much the same thing (at much the same time). This led to considerable theoretical enrichment in both cases as the biological perspective widened. One can imagine aliens visiting earth and making an ethological study of human linguistic behavior, combining it with organic studies of speech physiology. They would add this to their other investigations of bee and whale linguistic behavior. All of it would come under the heading of earth biology.

Of course, biologically based language activity interacts with cultural formations, as with speech acts performed within socially constructed institutions (e.g. the marriage ceremony). But the same thing is true of other biological organs—say, the hands: that doesn’t undermine the thesis that basic biological adaptations are in play. It is not being claimed that everything about language and its use is biologically based. But the traits of language of interest to philosophers of language tend to be of such generality that they are bound to be biological in nature. For example, the role of intention in creating speaker meaning, as described by Grice, introduces a clearly biological trait of the organism—purposive, goal-directed action. We don’t have intentions as a result of divine intervention or cultural invention; intention is in the genes. Intention grows in the infant along with motor skills and doesn’t depend upon active teaching from adults. Intentions will play a role in cultural activities, but they are not themselves products of culture. The same is true of consciousness, perception, memory, and so on—all biological phenomena.

According to Chomsky, a grammar for a natural language simply is a description of the biologically given human language faculty. Following that model, philosophical theories of meaning have the same status: they are attempted descriptions of a specific biological trait. Semantic properties are as much biological properties as respiration and reproduction. Philosophy of language is thus a branch of biology. The standard theories are easily construed this way. Semantics follows syntax and phonetics in making the biological turn. Fortunately, existing philosophy of language can incorporate this insight.[4]

 

Colin

[1]For an authoritative study see Eric Lenneberg, Biological Foundations of Language (1967) and the many works of Noam Chomsky. If we ask who is the Darwin of language studies, the consensus seems to be Wilhelm von Humboldt (1767-1835).

[2]The positivists may be construed as claiming that no sentence can have the trait of meaning without having the trait of verifiability. One trait is necessary for the other. This is like claiming that no organ can circulate the blood without being a pump or that no organ can be the organ of speech without expelling air. Thus a metaphysical sentence can’t be meaningful because it lacks the necessary trait of verifiability. No evolutionary process could produce a language faculty that included sentences that mean without being verifiable. Put that way, it looks like a pretty implausible doctrine—why couldn’t there be a mutation that produced meaningful sentences that exceed our powers of verification? Meaning is one thing, our powers of verification another.

[3]“Commanding, questioning, recounting, chatting, are as much a part of our natural history as walking, eating, drinking, playing.” Philosophical Investigations, section 25.

[4]It would be different if existing philosophy of language tacitly presupposed some sort of divine dispensation theory, or a brand of extreme cultural determination; but as things stand we can preserve it by recasting its questions as biological in nature. There is nothing reductionist about this; it is simply taxonomically correct.
