Buying a Language
I want to learn a language, say Spanish, but I can’t be bothered to do it the old-fashioned way, so I decide to buy it (expensive but labor-saving). I go to the language store (Lang-Mart, right next to the Chinese supermarket). I am told that I can buy either a grammar chip or a dictionary chip or both. I figure I will buy the grammar chip and use an English-Spanish dictionary for the word meanings—cheaper that way. I go home and insert the new chip into my brain: when I press a button, a whole lot of Spanish words crowd my consciousness; lo and behold, strings of them form in response to the environment and the utterances of Spanish speakers. I proceed to look up these words in my dictionary. But this is slow work, and it quickly becomes apparent that it is impracticable; as expected, I don’t know the meanings of Spanish words. This is no use, I think, and I return to the store. I swap the grammar chip for a dictionary chip that will install knowledge of what Spanish words mean in my memory, hoping that I can figure out the grammar myself. I go home and install the new chip: miraculously it gives me complete lexical knowledge of the Spanish language. But the grammar defeats me; I don’t know how to combine the words into grammatical strings (I have never been good with grammar). I trudge back to the store and reluctantly buy the grammar chip as well (pricey!). Now I am in business: I can form grammatical strings that I understand. A language consists of a grammar and a lexicon, so both things have to be known in order to speak the language like a native: atomic knowledge of word meaning and combinatory knowledge of grammar working together.
If a machine (call it a computer) can only do the combinatory part, it won’t know the language properly, just a component of it. If another machine (call it a mental dictionary) can only do the atomic part, it also won’t know the language properly. Thus, the existence of either type of machine will not show that a machine could speak in the normal way: symbol manipulation is not enough, and lexical information (knowledge of word meaning) is not enough. Language mastery needs both. Accordingly, there can be a Chinese room argument against computers as competent speakers, but there can also be a “Spanish room” argument against machines that have lexical data banks (word meaning memories) but no grammatical competence. We would need a machine that combines both to have a genuine speaker-hearer. We actually have machines that do the former (albeit without real grammatical knowledge), but we don’t (yet) have machines that contain the latter (knowledge of word meaning). In order to know what “red” means, say, a machine would have to be consciously acquainted with redness: but no machine yet built is (as far as anyone knows).[1]
Would a machine that has both types of knowledge know a language like a normal human? I’m afraid not, because in addition to syntactic and semantic knowledge a competent speaker needs pragmatic knowledge. Suppose I want to learn to speak Japanese like a native: I go to the language store to pick up the necessary chips only to be told that I will need a pragmatics chip as well as the other two. Unfortunately, this one is really quite expensive—$2,000 a pop! But if I don’t have it, I won’t be able to use the language in ordinary conversation, because I won’t know the rules of appropriateness and politeness that govern social language use. I could get arrested if I say the wrong thing! I fork over the dough. Thus, there could be a “Japanese room” argument to the effect that a machine could never acquire pragmatic knowledge of a human language, even if it could have syntactic and semantic knowledge. You might be fed syntactic and semantic information and be fully competent in those aspects of language, but this information omits the pragmatic aspects of speech, which require knowledge of culture, human psychology, etiquette, and historical context. None of this is entailed by syntactic and semantic knowledge, which is machine-like in comparison to pragmatic competence. Pragmatics can’t be reduced to syntax and semantics, just as semantics can’t be reduced to syntax; so a machine that can do only syntax and semantics will not be a fully competent speaker. A machine that can only do phonetics will likewise not add up to a speaking machine, since syntax goes beyond phonetics; just because a machine can make the sounds of English, it doesn’t follow that it can speak according to English syntax. The so-called Chinese room argument is one in a series of parallel arguments directed against claims that specifiable types of machine really have language mastery in the normal sense.
It says, essentially, that there is more to semantics than syntax (grammar)[2]; but we could also argue that there is more to syntax than phonetics and more to pragmatics than semantics (and more to poetics than pragmatics). All true, but one wonders why any elaborate argument was needed in the first place. Speaking a language is a complex skill consisting of several components that can be independently possessed.
[1] It is clearly not enough to have a machine containing a dictionary, since it would need to know what the defining words mean. At a minimum this requires knowing what those words refer to. Translation knowledge is not the same as semantic knowledge.
[2] The only way to resist the argument is by claiming that computers do more than manipulate symbols; they also refer to things with those symbols. They do so by being embedded in the world in such a way that reference naturally follows, as the causal theory of reference maintains. Then the question will be whether this theory is true. I myself think that computers are nowhere near mastering a language as humans do, but there is no reason of principle why a machine can’t speak like a human (just build one that exactly resembles a human down to the last detail and then rely on supervenience).
