Interviews

I have been conducting a series of interviews over the last few weeks with my Turkish associate (now friend) Ugur Polat. We have done four so far, each over three hours long (using Zoom). He plans ten in total. The actual interviewing is done by his neurologist wife Burcu, because her spoken English is better (very good in fact). We have been covering my intellectual life in detail. So far, we have dealt with my years in Manchester, London, and Oxford. The questions are probing and well-informed; I get to talk about my old friends and colleagues, many now deceased. He is planning to make a book of it to be published in Turkish and English. He is also interested in writing a book about the hand, starting with Sir Charles Bell and ending with me—an excellent project. In addition, I treated them to a mini-concert of songs in which a good time was had by all.[1] I urge all rabid feminist cancellers (bless their vicious little hearts) to contact Ugur and try to prevent him and his wife from carrying out their intentions—good luck with that! It’s nice to be in contact with decent people.

[1] I performed Black is Black, You Better Move On, Everything I Own, The First Cut is the Deepest, Leave my Kitten Alone, Stand by Me, Love Hurts, and Alone.

Kayak Philosophy

The other day I was talking to Ian Macleod, world champion kayak surfer (a South African living in the US). He is building me a new waveski (surf kayak) and we were finalizing details. I asked him how the recent world championships had gone and he replied laconically “I won”. He added that he expected to win but was pleased by the wide margin by which he had won (he gave me the scores). I said that was good to hear, thinking of my own recent declarations of rank. There followed a fairly long discussion of arrogance, fact, truthfulness, demonstrable superiority, narcissism, etc. We both protested our innocence in this regard. I applauded his adherence to reality, even when it concerned his dominance in the sport. It was a heartening meeting of minds. I’m looking forward to getting the boat.

Buying a Language

I want to learn a language, say Spanish, but I can’t be bothered to do it the old-fashioned way, so I decide to buy it (expensive but labor-saving). I go to the language store (Lang-Mart, right next to the Chinese supermarket). I am told that I can buy either a grammar chip or a dictionary chip or both. I figure I will buy the grammar chip and use an English-Spanish dictionary for the word meanings—cheaper that way. I go home and insert the new chip into my brain: by pressing a button a whole lot of Spanish words crowd my consciousness; lo and behold, strings of them form in response to the environment and the utterances of Spanish speakers. I proceed to look up these words in my dictionary. But this is slow work and it quickly becomes apparent that it is impracticable; as expected, I don’t know the meanings of Spanish words. This is no use, I think, and return to the store. I swap the grammar chip for a dictionary chip that will install knowledge of what Spanish words mean in my memory, hoping that I can figure out the grammar myself. I go home and install the new chip: miraculously it gives me complete lexical knowledge of the Spanish language. But the grammar defeats me; I don’t know how to combine the words into grammatical strings (I have never been good with grammar). I trudge back to the store and reluctantly buy the grammar chip as well (pricey!). Now I am in business: I can now form grammatical strings that I understand. A language consists of a grammar and a lexicon, so both things have to be known in order to speak the language like a native: atomic knowledge of word meaning and combinatory knowledge of grammar working together.

If a machine (call it a computer) can only do the combinatory part, it won’t know the language properly, just a component of it. If another machine (call it a mental dictionary) can only do the atomic part, it also won’t know the language properly. Thus, the existence of either type of machine will not show that a machine could speak in the normal way: symbol manipulation is not enough, and lexical information (knowledge of word meaning) is not enough. Language mastery needs both. Accordingly, there can be a Chinese room argument against computers as competent speakers, but there can also be a “Spanish room” argument against machines that have lexical data banks (word meaning memories) but no grammatical competence. We would need a machine that combines both to have a genuine speaker-hearer. We actually have machines that do the combinatory part (albeit without real grammatical knowledge), but we don’t (yet) have machines that contain the other component (knowledge of word meaning). In order to know what “red” means, say, a machine would have to be consciously acquainted with redness: but no machine yet built is (as far as anyone knows).[1]
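A toy sketch may make the separability vivid (my own illustration, not anything in the post; the class names, the two-sentence “grammar”, and the six-word “lexicon” are invented): a grammar chip that can only combine words into well-formed strings, a dictionary chip that can only map words onto meanings, and a speaker that needs both before its output comes paired with anything it understands.

```python
# Hypothetical sketch: a "grammar chip" with combinatory knowledge only,
# a "dictionary chip" with atomic (word-meaning) knowledge only, and a
# speaker that requires both. The miniature grammar and lexicon are invented.

GRAMMAR_PATTERNS = [("el", "gato", "duerme"), ("la", "casa", "brilla")]  # canned article-noun-verb strings
LEXICON = {
    "el": "the", "la": "the",
    "gato": "cat", "casa": "house",
    "duerme": "sleeps", "brilla": "shines",
}

class GrammarChip:
    """Combinatory knowledge only: produces grammatical strings, attaches no meanings."""
    def produce(self):
        return [" ".join(pattern) for pattern in GRAMMAR_PATTERNS]

class DictionaryChip:
    """Atomic knowledge only: supplies word meanings, cannot combine them."""
    def meaning(self, word):
        return LEXICON.get(word, "?")

class Speaker:
    """Only with both chips installed does each sentence come with a gloss the speaker grasps."""
    def __init__(self):
        self.grammar = GrammarChip()
        self.dictionary = DictionaryChip()

    def speak_with_understanding(self):
        for sentence in self.grammar.produce():
            gloss = " ".join(self.dictionary.meaning(word) for word in sentence.split())
            yield sentence, gloss

if __name__ == "__main__":
    # The grammar-only machine emits well-formed strings it does not understand.
    print(GrammarChip().produce())
    # The combined machine pairs each string with its gloss.
    for spanish, gloss in Speaker().speak_with_understanding():
        print(f"{spanish} -> {gloss}")
```

On this picture, the grammar-only machine is the Chinese-room case, the dictionary-only machine is the “Spanish room” case, and only the combination yields a (still very crude) speaker-hearer.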

Would a machine that has both types of knowledge know a language like a normal human? I’m afraid not, because in addition to syntactic and semantic knowledge a competent speaker needs pragmatic knowledge. Suppose I want to learn to speak Japanese like a native: I go to the language store to pick up the necessary chips only to be told that I will need a pragmatics chip as well as the other two. Unfortunately, this one is really quite expensive—$2,000 a pop! But if I don’t have it, I won’t be able to use the language in ordinary conversation, because I won’t know the rules of appropriateness and politeness that govern social language use. I could get arrested if I say the wrong thing! I fork over the dough. Thus, there could be a “Japanese room” argument to the effect that a machine could never acquire pragmatic knowledge of a human language, even if it could have syntactic and semantic knowledge. You might be fed syntactic and semantic information and be fully competent in those aspects of language, but this information omits the pragmatic aspects of speech, which require knowledge of culture, human psychology, etiquette, and historical context. None of this is entailed by syntactic and semantic knowledge, which is machine-like in comparison to pragmatic competence. Pragmatics can’t be reduced to syntax and semantics, just as semantics can’t be reduced to syntax; so, a machine that can do only syntax and semantics will not be a fully competent speaker. A machine that can only do phonetics will likewise not add up to a speaking machine, since syntax goes beyond phonetics; just because a machine can make the sounds of English, it doesn’t follow that it can speak according to English syntax. The so-called Chinese room argument is one in a series of parallel arguments directed against claims that specifiable types of machine really have language mastery in the normal sense. It says, essentially, that there is more to semantics than syntax (grammar)[2]; but we could also argue that there is more to syntax than phonetics and more to pragmatics than semantics (and more to poetics than pragmatics). All true, but one wonders why any elaborate argument was needed in the first place. Speaking a language is a complex skill consisting of several components that can be independently possessed.

[1] It is clearly not enough to have a machine containing a dictionary, since it would need to know what the defining words mean. At a minimum this requires knowing what those words refer to. Translation knowledge is not the same as semantic knowledge.

[2] The only way to resist the argument is by claiming that computers do more than manipulate symbols; they also refer to things with those symbols. They do so by being embedded in the world in such a way that reference naturally follows, as the causal theory of reference maintains. Then the question will be whether this theory is true. I myself think that computers are nowhere near mastering a language as humans do, but there is no reason of principle why a machine can’t speak like a human (just build one that exactly resembles a human down to the last detail and then rely on supervenience).

Drummers

In a typical four-piece band you have two guitarists, a bassist, and a drummer; the lead vocalist usually has one or two back-up singers. It isn’t easy to manage with one guitarist; you need a dedicated rhythm guitarist. There are never two bassists and very rarely two drummers. There is certainly no need for two bassists, but a case can be made for two drummers. Why? For the same reason you need two guitarists: one to provide a steady full rhythm sound (chords, continuous strumming) and one to provide a melody line on single strings for solos. Lead and rhythm guitar complement each other (rather like lead and backing vocals). So, why not have one drummer handling the steady rhythm part and another playing the fancy fills and accents? This would really fill out the percussion section, giving it greater prominence. The fact is that the solitary drummer has a lot to deal with and is being asked to perform two jobs at once: you physically can’t play a roll while also keeping up the beat on the snare; you can’t do anything imaginative while being fully occupied with the backbeat. It’s bad enough that you have to play with all four limbs in complicated percussive movements! Things would be a lot easier, and musically richer, if you could break the percussion section into two parts. I would also like to see and hear drum solo duets and complex two-person rhythms (especially on the bass drum). The solitary drummer has too much on his plate to engage in genuine artistry. Imagine a band with Ringo Starr on rhythm drums and Keith Moon on lead drums. This could transform popular music.

Chinese Rooms and Linguistic Knowledge

John Searle’s Chinese room argument shows that it’s possible to form meaningful sentences in a foreign language without knowing that language. That is, you can know the grammatical rules of a language without knowing the lexicon—what the individual words mean and refer to. Suppose you want to learn Chinese and go to an instructor offering to teach you that language, but all he does is tell you how to put words together into meaningful strings, omitting to tell you what the individual words mean; you would not learn the language—at best you would learn its grammar. Your teacher might assign each word to a grammatical category, so that you would know which are the nouns and which the verbs, etc.; still, you would not know what the words mean. Surely, this is trivially true: knowledge of grammatical category and rules of combination does not suffice for full knowledge of the language. Nor will having consciousness help, or even conscious thought; you still don’t know what those alien sounds and shapes individually mean. If you did, you would know the language—understand it—but you don’t. Knowledge of language consists of two parts—knowledge of grammar (syntax) and knowledge of word meaning (lexicon)—and the former does not entail the latter.

Does this show that a computer doesn’t or can’t understand a language? That depends on what is true of the computer. It doesn’t if it has no knowledge of what the words it manipulates mean; but if it contains that knowledge, it does know the language. It’s the same for a conscious human being: if he lacks lexical knowledge, he lacks knowledge of the language, no matter what else is true of him. The question then becomes, “What does it take to know the meaning of words?” This is where the problems begin: for what does it take for me to know what words mean? What does it take for anyone to know anything? If we knew that, we could decide whether a computer (what kind of computer?) can know and understand a language. There are many theories of linguistic knowledge: imagistic theories, use theories, intention-based theories, causal theories, neural network theories, description theories, indeterminacy theories, community practice theories, etc. None of these commands universal assent. It’s a philosophical problem (also scientific). In order to get a machine to understand a language, you would need to install whatever conditions your favorite theory prescribes; and that will be a substantive question. Can a machine be programmed to have mental images or intentions or agency or “forms of life”? The Chinese room argument doesn’t say; all it says is that grammar is not enough. And it may be that a computer can’t know or understand grammar either—it merely simulates such knowledge. Adding consciousness will make no difference, since many animals are conscious without being able to understand a human language. The Chinese room argument is silent on such questions, and irrelevant to them. What it tells us is true enough, but it doesn’t settle the question of machine understanding. I would say that we don’t know what gives us lexical knowledge, as we don’t know what gives us consciousness (and other mental capacities), so we don’t know whether a machine (whatever that is) could have knowledge of language. It may or may not be the same as what makes a biological organism linguistically competent—we just don’t know. To put it more polemically, the Chinese room argument is toothless; it simply shows that grammatical rules don’t entail lexical information—which is trivially true. I suppose we could say that it shows that our current computers don’t understand language because all they can do is manipulate symbols whose meaning they don’t know. This might well be said of some conscious and intelligent biological organism that happens to be good at grammar but bad at words; all it can do is string words together grammatically without having any knowledge of what those words mean (some sort of idiot savant might be like that). I don’t think any computing machine we have invented so far really knows a language as humans do, but that doesn’t rule out future machines that do better. At a minimum they would need conscious perception so as to ground object-directed thought, but we know so little about perception and cognition that we can’t say whether we will ever build such machines. All Searle’s argument shows is that (mechanical) syntactic engines fall short of semantic understanding, but that was surely obvious all along and doesn’t require the creation of an elaborate thought experiment. The argument might be re-labeled the Syntactic Impotence argument. This says that you can arrange words into patterns (like jigsaws) without knowing what they mean. Yep, so what?[1]

[1] It might be said of computational enthusiasts that they can’t see this point, obvious as it is, so it’s worth arguing with them. Maybe, but the argument is very limited on the substantive question at issue, viz. whether a machine could think and understand language. My own view is that we are machines, so the answer is yes. As to knowledge of word meaning, I am a mysterian (like Chomsky).

A List

I will make a list so that you can see that I’m not exaggerating.

Intellectual. Philosophy: all areas, technical and popular. Science: psychology, biology, economics, physics. Literature: Shakespeare, Jane Austen, Nabokov, etc. Writing: novels, short stories, poetry, songs (rock, ballads, blues, pop).

Sports. Tennis, table tennis, squash, badminton, gymnastics, pole vault, discus, basketball, football, cricket, archery, knife throwing, swimming, kayaking, surfing, windsurfing, kiteboarding, skim boarding, paddleboarding, skiing, ice skating, bowling, darts, skateboarding, mountain biking, motorcycling, trampoline, body building.

Music. Drums (rock, jazz, djembe), guitar, harmonica, voice (rock, ballads, blues, pop).

By a Long Chalk

The phrase originates in keeping scores by making chalk marks. I remember Professor John Cohen, head of the psychology department at Manchester University when I was a student there (1968-72), writing to me and saying that my M.A. thesis on innate ideas was “the best M.A. thesis I have ever read by far and by a long chalk”. He wasn’t content with “the best” or even “the best by far”; no, it had to be “the best by far and by a long chalk”. A bit of a redundancy, you might say, but one sees his intention—as we might paraphrase him, “far and away the best”. This made me reflect on my own self-description, and I fear I may have to revise it yet again as new facts come to light: not just the best philosopher by far, but the best philosopher by far and by a long chalk. It is my melancholy duty to put it on record that I merit even the chalky superlative. But it gets worse: I have been forced to accept that it doesn’t stop there. For who else can claim the same range as me across the board? The intellectual, the athletic, the musical—it’s a highly unusual package. I have never heard of anyone who can do as many sports as me, and my musical range is unusually broad. It’s hard to believe. I seem to be unique in my range of abilities. If anyone else is in the same case, I would like to hear about them. Someone should really make a documentary on me before it is too late—I am a remarkable specimen. I find it hard to credit it myself. Psychologists should investigate and probe me; I might contain some useful lessons. I am a kind of freak of nature, a weirdo, a strange mutation. They should take a look at my DNA. How did it happen? After all, I don’t look like much. I’m a bit of a mystery.

Epistemic Necessity and the Good

The connection between metaphysical necessity and the Good is speculative and questionable, though there are signs of affinity.[1] But the connection between epistemic necessity and the Good is immediate and easy to discern: it goes via certainty. If a given proposition is epistemically necessary, it is certain: I am certain that I exist and that I think. To be epistemically necessary is to be certain. By contrast, epistemic contingency is the same as doubtfulness: you might turn out to be wrong. It is the difference between confidence and diffidence, being sure and being unsure. Certainty is good and uncertainty is not so good (sometimes quite bad). We would all like to be (justifiably) certain of everything, if only we could; but we live with uncertainty, however uncomfortably, there being no alternative. Uncertainty is a necessary evil. We therefore seek certainty and try to avoid uncertainty: we prize epistemic necessity. We like the idea that we can’t be wrong—that our reasons necessitate our conclusions. We value the necessity of logical entailment. This is what gives rise to epistemic necessity. If nothing ever entailed anything, there would be no such thing as certainty. Logic is a good thing.

But why is certainty good? Two ideas come to mind: certainty is instrumentally good in relation to desire; and certainty feels good in itself. If we can’t be wrong, we can’t be wrong about what will satisfy our desires; in the ordinary sense, I am certain the supermarket is open on a Sunday, so I go there to pick up food—and I am never disappointed. But if I am rationally doubtful, I might end up hungry. This is the basis of epistemological pragmatism—pragmatism about the value of certainty. Alternatively, it might be maintained that it feels good to be certain and not good to be uncertain—the former is more pleasurable. I think this is true and not a negligible point, but I don’t think it is the end of the story. It isn’t simply a matter of hedonism—the higher pleasures and all that. Something more interesting is going on, which gets to the heart of epistemology. We value certainty because we value knowledge, and knowledge requires certainty. What kind of certainty is required is a debatable point—philosophical certainty or commonsense certainty. In the ordinary sense, we have a good deal of certainty, as in the supermarket example, so we have a good deal of knowledge. That’s good, because knowledge is good. Epistemic necessity gives us knowledge, and we happily accept the gift. Doubt undermines knowledge, and the more doubtful the more the undermining. But why is knowledge good? That is a very good question. Is knowledge to be defined as something like true justified belief or is it more like direct perception of a fact? The former leads naturally to a pragmatic conception of the value of knowledge—knowledge is what gets you fed (so banal, so boring!). The latter opens up a more interesting proposal: knowledge is good because it connects the self to the world. We value knowledge because it is direct apprehension of reality—and we value this in its own right, not because of pragmatic consequences. We value knowledge for the same kind of reason we value friendship or romantic love—because it relieves existential isolation. So, we value epistemic necessity because we value certainty, and we value certainty because we value knowledge, and we value knowledge because we value connection. We like to merge (“Only connect!”). And we do merge: perceptual consciousness presents the objective world to us as separate but also possessed. According to the Cogito, I am connected in this way to myself—not remotely but immediately. It would be terrible to be disconnected from myself, as if I were a mere hypothesis (like black holes or dark matter or fairies). It would also be good to be at one with my environment—and I am by virtue of seeing it and touching it. I am not cut off from it, blind to it, ignorant of it: I know it, as I know myself. That is, I enjoy acquaintance with my environment, as I enjoy acquaintance with other people and animals. I am not stuck inside my own inner world, just a passing show of inchoate sensation—that would be a kind of hell. This kind of perceptual acquaintance is part of what makes life valuable, worth living. Luckily, we have quite a bit of it (pace the philosophical skeptic). We are not blindly guessing from a distance, hoping against hope that there is something out there. It’s good to be in the know, well acquainted, in with the in crowd, part of life’s rich pageant—not a stay-at-home, a shut-in, a reluctant lonesome cowboy. Knowing is part of human existence, perhaps the main part. Not that etiolated sort of knowing beloved of the analytical philosopher (true-justified-belief knowing), but embodied-direct-perception-of-actual-concrete-facts knowing, as in seeing the sun rise in the morning.[2] We have epistemic connection to thank for that. It is something quite profoundly meaningful. I am at one with the world, not a self-enclosed particle. Perceptual knowledge has human value.

Does all this have any bearing on metaphysical necessity? None at all, you might reply, these being quite different concepts. You are not wrong there, to be sure, but is there no connection at all between the two concepts? After all, the word “necessary” is used for both (which has caused a lot of confusion about the distinction); and the epistemic use might spill over into the metaphysical use by association. But more forcefully, epistemic necessity is defined by metaphysical necessity (but not vice versa): for it involves the idea of a set of reasons entailing a conclusion, and entailment is a metaphysically modal notion, i.e., the reasons necessitate the conclusion. In all possible worlds in which the reasons obtain, the conclusion obtains.[3] There is such a thing as epistemic necessity only because there is such a thing as metaphysical necessity. We therefore can’t value the former without valuing the latter. There is only knowledge because metaphysical necessity exists, i.e., premises entail conclusions. Certainty requires that the grounds necessitate the inference, as in the Cogito. To put it differently, there has to be a type of non-epistemic necessity for there to be epistemic necessity (even if it doesn’t add up to what the logicians regard as necessity). You therefore can’t value the one without valuing the other. The case is similar to nomological necessity: there can’t be epistemic necessity (certainty) without nomological necessity, since we rely on laws of nature in all our reasonings, so we can’t value epistemic necessity without valuing nomological necessity. It’s good to be certain that the supermarket is open on Sunday, but this requires that there are laws of nature concerning shop openings, worker availability, motion, etc. Nomological necessity is good because it enables epistemic necessity (inter alia), which is itself good. We are glad there are laws of nature; otherwise we couldn’t be sure of anything, and we like to be sure. We like to be sure because we like to know. And we like to know because otherwise it would be a pretty dismal isolated existence. Plus, it feels nice to know things and it gets you fed. There are reasons why necessity is regarded as good.[4]
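The entailment claim in this paragraph can be put in standard modal notation (my gloss, not the post’s own symbolism):

```latex
% Reasons R_1, ..., R_n entail a conclusion C just in case the corresponding
% conditional holds in every possible world (metaphysical necessitation).
% (\Box requires amssymb.)
R_1, \dots, R_n \models C
\quad\Longleftrightarrow\quad
\Box\bigl((R_1 \land \dots \land R_n) \rightarrow C\bigr)
```

On this reading, epistemic necessity presupposes the boxed conditional, which is itself a metaphysical-modal fact; that is the dependence the paragraph asserts.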

[1] See my papers “Is Necessity Good?” and “Necessity and Change”.

[2] See my “Perceptual Knowledge” and “Non-Perceptual Knowledge”.

[3] See Kripke’s discussion of epistemic necessity in Naming and Necessity.

[4] It is also true that I know what I know necessarily in central cases. For example, if I am seeing a certain color and I know that color “by acquaintance”, I necessarily know it—I can’t help knowing it (compare pain). Necessarily, if I am acquainted with X, I know X. Likewise, I necessarily know that I exist, by the Cogito: I can’t not know it. This is another way in which metaphysical necessity is connected to epistemic necessity. In general, if I am certain that p, I must know that p. If I’m certain that I’m thinking, I can’t not know that I’m thinking—the knowledge is forced on me. There are many metaphysical necessities of epistemic truths, i.e., things I can’t help knowing.
