Chinese Rooms and Linguistic Knowledge

John Searle’s Chinese room argument shows that it is possible to form meaningful sentences in a foreign language without knowing that language. That is, you can know the grammatical rules of a language without knowing the lexicon: what the individual words mean and refer to. Suppose you want to learn Chinese and go to an instructor who offers to teach you that language, but all he does is tell you how to put words together into meaningful strings, never telling you what the individual words mean; you would not learn the language; at best you would learn its grammar. Your teacher might assign each word to a grammatical category, so that you would know which are the nouns and which the verbs, etc.; still, you would not know what the words mean. Surely this is trivially true: knowledge of grammatical categories and rules of combination does not suffice for full knowledge of the language. Nor will having consciousness help, or even conscious thought; you still don’t know what those alien sounds and shapes individually mean. If you did, you would know the language, understand it; but you don’t. Knowledge of language consists of two parts, knowledge of grammar (syntax) and knowledge of word meaning (lexicon), and the former does not entail the latter.

Does this show that a computer doesn’t or can’t understand a language? That depends on what is true of the computer. It doesn’t understand if it has no knowledge of what the words it manipulates mean; but if it contains that knowledge, it does know the language. It’s the same for a conscious human being: if he lacks lexical knowledge, he lacks knowledge of the language, no matter what else is true of him. The question then becomes, “What does it take to know the meaning of words?” This is where the problems begin: what does it take for me to know what words mean? What does it take for anyone to know anything? If we knew that, we could decide whether a computer (what kind of computer?) can know and understand a language. There are many theories of linguistic knowledge: imagistic theories, use theories, intention-based theories, causal theories, neural network theories, description theories, indeterminacy theories, community practice theories, etc. None of these commands universal assent. It’s a philosophical problem (a scientific one too). In order to get a machine to understand a language, you would need to install whatever conditions your favorite theory prescribes; and whether that can be done is a substantive question. Can a machine be programmed to have mental images or intentions or agency or “forms of life”? The Chinese room argument doesn’t say; all it says is that grammar is not enough. And it may be that a computer can’t know or understand grammar either; perhaps it merely simulates such knowledge. Adding consciousness will make no difference, since many animals are conscious without being able to understand a human language. The Chinese room argument is silent on such questions, and irrelevant to them. What it tells us is true enough, but it doesn’t settle the question of machine understanding. I would say that we don’t know what gives us lexical knowledge, just as we don’t know what gives us consciousness (and other mental capacities), so we don’t know whether a machine (whatever that is) could have knowledge of language. What makes a machine linguistically competent may or may not be the same as what makes a biological organism linguistically competent; we just don’t know.

To put it more polemically, the Chinese room argument is toothless; it simply shows that grammatical rules don’t entail lexical information, which is trivially true. I suppose we could say that it shows that our current computers don’t understand language because all they can do is manipulate symbols whose meaning they don’t know. But the same might well be said of some conscious and intelligent biological organism that happens to be good at grammar but bad at words; all it can do is string words together grammatically without having any knowledge of what those words mean (some sort of idiot savant might be like that). I don’t think any computing machine we have invented so far really knows a language as humans do, but that doesn’t rule out future machines that do better. At a minimum they would need conscious perception so as to ground object-directed thought, but we know so little about perception and cognition that we can’t say whether we will ever build such machines. All Searle’s argument shows is that (mechanical) syntactic engines fall short of semantic understanding, but that was surely obvious all along and doesn’t require the creation of an elaborate thought experiment. The argument might be re-labeled the Syntactic Impotence argument: it says that you can arrange words into patterns (like jigsaw pieces) without knowing what they mean. Yep, so what?[1]
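To see how thin purely syntactic competence is, here is a minimal sketch in Python (my illustration, not Searle’s; the toy grammar and lexicon are invented for the example) of a program that strings words into grammatical patterns while representing nothing at all about what any word means:

```python
import random

# Toy phrase-structure grammar: rules for combining word categories.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"], ["Det", "Adj", "N"]],
    "VP": [["Vt", "NP"], ["Vi"]],
}

# Words sorted into grammatical categories -- and nothing more.
# No meanings or referents are represented anywhere in the program.
LEXICON = {
    "Det": ["the", "a"],
    "Adj": ["red", "invisible"],
    "N":   ["philosopher", "room", "symbol"],
    "Vt":  ["manipulates", "sees"],   # transitive verbs
    "Vi":  ["sleeps"],                # intransitive verbs
}

def generate(category="S"):
    """Expand a category into a grammatical string of words."""
    if category in LEXICON:
        # Terminal: any word of the right category will do, since the
        # program has no idea what the individual words mean.
        return [random.choice(LEXICON[category])]
    rule = random.choice(GRAMMAR[category])
    return [word for part in rule for word in generate(part)]

if __name__ == "__main__":
    for _ in range(3):
        print(" ".join(generate()))  # e.g. "the invisible philosopher sees a symbol"
```

Every output is a well-formed English sentence, yet the program stands to its sentences exactly as Searle’s rule-follower stands to Chinese: all syntax, no semantics.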

[1] It might be said of computational enthusiasts that they can’t see this point, obvious as it is, so it’s worth arguing with them. Maybe, but the argument is very limited on the substantive question at issue, viz. whether a machine could think and understand language. My own view is that we are machines, so the answer is yes. As to knowledge of word meaning, I am a mysterian (like Chomsky).

12 replies
  1. Étienne Berrier says:

    Don’t you think that the meaning of words (say nouns: table, stone…) is closely linked with the intentional relation we have with the objects designated by these nouns?

  2. The Nietzschean says:

    But isn’t the Chinese Room argument aimed at showing that not only are knowledge of grammar, knowledge of grammatical categories, and the ability to assign symbols to the proper categories insufficient for language understanding, but that even if you add the capacity to engage in what appears from the outside to be meaningful conversation, you still don’t have it? Indeed, even if you add the ability to act in the world (I am thinking of the robot reply in the original essay)?

    • Colin McGinn says:

      That is true, but I took that as given; the argumentative work is done by the syntax-semantics point. You can certainly have what looks like a conversation and just be shuffling symbols according to syntactic rules without knowing what those symbols mean. In principle, you could be a semantic zombie and a syntactic whiz kid. My computer can answer my questions, but it doesn’t know what its answers mean.

      • The Nietzschean says:

        I guess what I am trying to say is that I don’t think the argumentative work is done, or is solely done, by the syntax-semantics point. The point is not just that merely knowing syntax is not knowing a language, but that even conversing intelligibly and intelligently, which goes significantly beyond syntax, is not enough. The computer is more than a syntactic whiz kid, and answering questions requires more than mere syntax. I think this saves the argument from utter triviality. Sorry for belaboring the point.

        • Colin McGinn says:

          No, it’s not that, as Searle himself makes clear. A syntactic computer could pass the Turing test but not have any understanding of language or any semantics, so far as the Chinese room argument is concerned. That’s why he never just asserted that computers can’t respond appropriately to questions. Remember that there can be very simple speakers who don’t rise above sophisticated machines in their verbal behavior.

  3. Étienne Berrier says:

    Thank you for your linguistic indulgence.
    About my question: do you find it evident or absurd?
    (Maybe it is not understandable.)

    • Colin McGinn says:

      I don’t understand why you asked the question; it is surely obvious that meaning and intentionality are closely connected. But what does this have to do with Searle’s argument?

      • Étienne Berrier says:

        What I mean: if meaning depends on intentionality, it is vain to try to build a real speaking machine by imitating our language in a computational way (as AI does); we should build a machine able to have intentional relations with the objects of the world. And since we don’t really know what intentionality is, that is certainly not for tomorrow.

