Puzzling Performatives

J.L. Austin insisted that utterances of performative sentences are neither true nor false. If I say, “I promise to dine with you”, my utterance has no truth-value. Presumably this implies that it expresses no proposition (though it is clearly meaningful), since if it did it would have to be either true or false. It is not, as Austin puts it, a constative. Performative sentences thus belong with interrogatives and imperatives, despite their declarative grammar. Some have contested Austin’s position, claiming that such utterances do have truth-value, being generally true. On this view, the utterance is true just in case the speaker is making a promise: if I say, “I promise to dine with you”, this utterance will be true if and only if I (thereby) promise to dine with you. After all, “You promised to dine with me” will be truly uttered by you in the circumstance that I made the utterance in question. Thus, it is contended, performatives are constatives and do express truth-evaluable propositions; no special category needs to be created for them. Who is right?

            There seems to be something correct in both positions. Let us assemble more data, a procedure of which Austin would have approved. If performatives can be true we ought to be able to prefix them with “It’s true that”: so can I say, “It’s true that I promise to dine with you”? That sounds distinctly odd and not equivalent to the embedded performative. I don’t promise to dine with you by uttering such a sentence. Thus we have a breakdown of the usual equivalence of “p” and “It’s true that p”. I doubt that anyone has ever uttered such a sentence with the intention of making a promise or for any other reason. Compare “It’s true that I name this ship Bertha” and “It’s true that I hereby make you man and wife”. These may not be nonsense but they are close to it. They violate some sort of linguistic rule. But there are sentences in the vicinity that suffer no such defect and which cloud the issue. Thus there is nothing wrong with the following: “It’s true that I promised to dine with you”, “It’s true that I will promise to dine with you”, “It’s true that I ought to promise to dine with you”, and “It’s true that in saying ‘I promise to dine with you’ I thereby promise to dine with you”. We might even tolerate “It’s true that I am promising to dine with you”. And of course there is nothing amiss with “It’s true that you promised to dine with me” or even “It’s true that you are promising to dine with me” (uttered while I am mid-speech-act). The only one of these sentences that raises hackles is the present-tense performative case: this is the one that I cannot prefix with “It’s true that”. Thus it can be true that I promised but I can’t build this locution into my promising explicitly—I can’t say, “It’s true that I promise to dine with you”. It is as if the truth cannot be said but only shown. This is very different from the case of ordinary assertion, where I can happily add the truth prefix. It’s puzzling.

            Austin focused on the question of truth and performatives, but actually the issue arises more broadly. Consider “I know that I promise to dine with you”: this too has an odd ring, in contrast to “I know that I promised to dine with you”, along with the future tense and deontic variants (“I know that I ought to promise to dine with you”). The performative stands out as uniquely resistant to the epistemic prefix. And yet isn’t it true that as I make a promise I know I am promising? You can certainly say, “He knew he was making a promise” or even “He knows he is making a promise”, but I can’t say, “I know I promise”; and similarly for performatives of naming and marrying. So it is not just “true” that interacts oddly with performatives; “know” does too. Austin might respond to this by saying that performative utterances are neither known nor unknown (by the speaker)—they are not candidates for knowledge. Others may retort that many sentences containing the relevant verbs do admit of “know” (e.g. “He knows full well that he promised to dine with me”). It is only the performative use of the verb that rejects the epistemic prefix.

What about other types of embedding? Consider negation: “It’s not the case that I promise to dine with you”. Again, this is a very odd sentence—is it some kind of negative performative used to decline to make a promise? You might say to me, “Promise to dine with me” and I might reply, “I won’t promise to dine with you”, but I won’t reply, “It’s not the case that I promise to dine with you”. What does that even mean? It doesn’t mean the same as “I promise not to dine with you”, which has the look of a regular performative—I have made a promise by uttering it. And yet I can obviously fail to make promises. Nor is there anything amiss with “It’s not the case that I ought to promise to dine with you”. Again, it is the performative case alone that declines negation. Imagine if instead of saying “Thank you for carrying my bag” I say, “I don’t thank you for carrying my bag”. Is this an attempt at a negative performative or just an inept way of saying “I’m not grateful for your carrying my bag”?

            It doesn’t end there, for consider: “Necessarily I promise to dine with you”. A linguistic monster indeed—what could it possibly mean? We can insert necessity all over the place with these verbs, but not there: I can say “Necessarily I will promise to dine with you” to express my belief in fatalism, or “Necessarily I ought to promise to dine with you” to express my deep moral convictions; but necessitating the performative itself would be a bizarre move in the language game. The same is true for “Possibly” or “It is contingent that”: we can’t put these in front of the performative either, but we can for other uses of the same verb. This is all grist to Austin’s mill because it confirms his doctrine that performative utterances are not statements at all but performances. If they were statements they could be true, could be known, could be negated, and could be necessitated; but instead they are acts performed by uttering words—acts of promising, naming, marrying, and thanking (and not acts of stating or asserting). If I promise to dine with you, I have performed an act like shaking your hand, and such acts are not true or false. If I could promise by some method other than saying  “I promise”, then there would be no temptation to suppose that promising is a kind of stating, since it need not be linguistic at all. Promising, greeting, thanking, marrying, and so on are not inherently linguistic acts—they could in principle be performed non-linguistically. We can talk about these acts and thereby speak truly or falsely, but the acts themselves aren’t true or false—though, as Austin reminds us, they can be performed more or less felicitously.

            So is it just wrong to suggest that performatives have truth-value? True, I can’t sensibly say, “It’s true that I promise to dine with you”, but does it follow that my speech act can’t be assigned a truth-value? When a person fails to name a ship by performing the ceremony, because he lacks the authority to name ships, isn’t it false that he named a ship? Can’t we say that his utterance “I name this ship Bertha” expressed a falsehood, since he failed to name the ship Bertha? That sounds reasonable enough, inescapable even, but we can’t convert this into permission to prefix performatives with the truth operator. So there is still something odd about performatives: even if they can be assigned truth-value, they differ from ordinary statements or constatives in that we can’t bring them within the scope of “it’s true that”. In fact, they also differ from non-indicative sentences in that these sentences really can’t be assigned truth-value (they don’t even look like statements). We can’t say, “It’s true that shut the door”, but we also can’t assign the truth-value True to “Shut the door” (when it has been shut). Imperatives shun truth altogether, while performatives tolerate it within limits. So performatives really do belong in a linguistic class of their own–puzzlingly so. Constatives are true or false and accept the truth operator; imperatives are not true or false and resist the truth operator; but performatives can be true or false while rejecting the truth operator (and other operators). A performative utterance is a statement-like speech act without being a genuine statement, so it has an ambivalent relationship to the concept of truth. What Austin really discovered is that the dichotomy between statements and non-statements is too simple: for some utterances are a bit like statements and a bit not like them. 
We shouldn’t operate with a dualism of the declarative and the non-declarative speech act, because performatives are genuine hybrids; they are neither one thing nor the other. They are a special class of sentences, but with affinities to other classes of sentence. Austin was basically right in his dispute with the levelers, but he exaggerated the distinctness of the performative utterance. Ironically, he was too wedded to a dichotomy in types of speech act. We need a trichotomy: declarative, non-declarative, and performative.

 

Colin McGinn

           

 


Believing Zombies

                                               


Could there be zombies that believe they are conscious?  [1] They have no consciousness, but they erroneously believe that they do. That may seem possible if we think of their beliefs as implanted at birth or something of the sort: couldn’t a super scientist simply interfere with their brain to install the belief that they are conscious, as innate beliefs are installed by the genes? The belief is false, but that is no obstacle to belief possession. We may have an innate belief that we are surrounded by a world of external physical objects, but that belief might be false if we are really brains in vats. Similarly, zombies might have false beliefs about their mental world, supposing it much fuller than it really is.

            But the matter is not so simple: for beliefs need reasons. What reason could the zombies have for believing they are conscious? The reason we believe we are conscious is that we are conscious and this fact is evident to us–without that we would not have the belief in question. If the believing zombies were to reflect on the beliefs they find implanted in them, they would wonder what grounds those beliefs—what evidence there is for them. Finding nothing they would abandon their groundless beliefs, perhaps with a shake of the head at being so irrationally committed to something for which they have absolutely no reason. Minimal rationality would quickly disabuse them of their error; they would believe instead that they are not conscious, or possibly remain agnostic.

            It might be replied that consciousness is not necessary to ground belief in consciousness, only the appearance of consciousness is. The zombies have to be in an epistemic state just like our epistemic state except that we have consciousness and they have none—the appearance of consciousness without the reality. But this is contradictory, since the appearance of consciousness would have to be a form of consciousness: it would have to seem to them that they were conscious. For instance, it would have to seem to them that they have a conscious visual experience of yellow without having any conscious visual experience (of yellow or anything else). Surely that is impossible: seeming to have a conscious state is having a conscious state (of seeming). So the only reason they could have for believing they are conscious is that they are conscious, and they need a reason for that belief if they are to have it stably.

            Now it may be said that we are being too rationalistic about belief: people can believe things for no reason at all, without any evidence whatever. Couldn’t our zombies believe they are conscious because this is what they have always been taught, or because of superstition, or from wishful thinking? They want badly to believe they are conscious (it seems so undignified to be a mere zombie) and so they deceive themselves into believing it. Happens all the time: no evidence at all, but firm belief nonetheless. That sounds like a logical possibility, though it would be an odd case of irrational dogma or motivated self-deception. One problem is that irrational believers generally think they have reasons for belief, even though these putative reasons look hollow and unconvincing to everyone else. They will cite these reasons when challenged to defend their beliefs. But what will the zombies say when challenged? They can’t point to anything that even looks like consciousness, since that would imply that they have consciousness. People whose religion requires them to believe in miracles will cite certain natural events as proof of said miracles, however unconvincing these events may be as evidence of miracles; but our zombies have absolutely nothing to point to, since the mere semblance of consciousness is a case of consciousness. Their religion may require them to believe they are conscious, but they can point to nothing that could even be interpreted as consciousness, because they have no consciousness. An appearance of miracle may fail to be a miracle, but an appearance of consciousness is always consciousness. And nothing else could provide any halfway reasonable grounds for their belief. So we are left with the idea that they believe they are conscious without even believing they have any grounds for that belief.  [2] This gets us back to the case of beliefs that exist without even having any purported justification.
All they can say when challenged is, “I simply believe it”. This is a difficult thing to make sense of because beliefs need grounds of some sort (they purport to be knowledge after all).

            We should conclude that zombies that believe they are conscious are not possible. Any being that believes it is conscious must be conscious. That includes us: if we believe we are conscious, then we must be conscious. This refutes an eliminative view of experiential consciousness: it cannot be that we lack such consciousness while simultaneously believing that we have it. We cannot be actual zombies under the illusion that we possess consciousness.  [3]

 

  [1] These are zombies with respect to experiential consciousness, not zombies tout court, since they are stipulated to have beliefs. The intuitive idea is that they have no conscious experience and yet they believe that they do: for example, they think they have conscious visual experiences of colors, but they don’t have any such experiences.

  [2] They may have a sacred text in which it is written that zombies are conscious, despite the introspective appearances, and they may be brainwashed into accepting that text. But then the “belief” they have is really a matter of faith, since they have no direct grounds for the belief, even of the thinnest kind. They accept the text only because of their religion, not because they can offer any justification for the beliefs it recommends. They don’t really believe they are conscious, as they (rightly) believe themselves to be embodied believers. For that they need some sort of evidence, even if it falls far short of what it is evidence for.

  [3] Some extremists have sought to deny that “visual qualia” (etc) exist, despite our firm conviction that they do exist. But it is simply not possible to believe in such things without there being such things, since they provide the only possible grounds for such a belief.


Knowledge of Consciousness

 

 


How does our knowledge of our own consciousness differ from our knowledge of other things? Presumably it does differ: there is something unique about the way I know my own conscious states. There are many types of conscious state (event, process) and many types of knowledge of conscious states (knowledge-that, knowledge-of, knowledge-what, memory knowledge), but all such knowledge is united in being knowledge of consciousness. There is something distinctive about this knowledge: for example, I know my present visual experience of dappled sunlight in a special way. But what is that way?  I won’t be able to answer this question (that is part of my point), so my remarks will be limited to locating a problem.

            A traditional answer is that I am certain of facts about consciousness, whereas I am not certain of facts about the external world. I infallibly know my own consciousness. That is not wrong, but it doesn’t answer our question, because there are non-conscious facts about which I can also be certain, e.g. elementary logic and arithmetic. I don’t know these facts in the way I know my consciousness—they are clearly not facts of consciousness. The same problem applies to appealing to the concept of the a priori: even if knowledge of consciousness is a priori, that is not unique to such knowledge, but applies more broadly. Similarly, the concept of acquaintance won’t help: maybe I am acquainted with my own consciousness, but I am acquainted with more than that—with shapes and colors, as well as (according to Russell) universals. The same goes for the concept of transparency: we can grant that consciousness is transparent to its subject, but it is not the only thing transparent to the subject—what about basic geometry and logical concepts? My consciousness is evident to me, but it’s not the only thing that is. And these notions have nothing specifically about consciousness built into them: they are more general than that, not geared to the peculiarities of consciousness. We need to know what it is about consciousness as such that makes knowledge of it special. Why is this type of knowledge like no other, in a class of its own? We might even ask why it is nothing like other types of knowledge, being entirely and spectacularly unique. Surely that has been the feeling about knowledge of consciousness, but the usual epistemic concepts fail to do justice to its uniqueness. So again, what is it that sets knowledge of consciousness so apart from other types of knowledge?

            Here is another answer, by no means unfamiliar: it is the only type of knowledge that exhibits an asymmetry between one epistemic subject and another. Not only is it true that I know my consciousness with certainty (by acquaintance, a priori, transparently), it is also true that you cannot know it this way, and perhaps cannot know it at all. This is the old distinction between first-person and third-person access to consciousness: the dramatic asymmetry of knowledge as we move from one subject to another. No such asymmetry applies to knowledge of logic, arithmetic, geometry, universals, and whatnot. Moreover, it is in the nature of consciousness to exhibit this epistemic asymmetry—part of its essence. So isn’t this what makes knowledge of consciousness special? It is certainly the kind of answer we seek, because of its specificity–consciousness is uniquely such as to be open to one subject and closed to every other subject—but it won’t do as it stands. For we need to know in virtue of what consciousness exhibits the asymmetry—what explains it. Why is it so closed in one direction and yet open in another direction? Also, the theory is essentially negative in form: consciousness is not accessible to others in the way it is to oneself. But we want to know the nature of the knowledge we have of consciousness in its first-person openness—how exactly do I know my experience of dappled sunlight in a way that others can’t? What relation do I as an epistemic subject have to my own conscious states? Don’t say the certainty relation (or the acquaintance relation or the transparency relation)—that just brings us back to where we were. We need a more positive characterization of first-person knowledge of consciousness. Something in my mind (my faculty of knowledge) hooks up to something else in my mind (my consciousness) in such a way as to produce knowledge of consciousness, but what is this “hooking up”?

One has the sense, perhaps, of one thing leaping into the arms of another as nothing else can—that there is a snugness of fit here that is unique in the world. It is as if consciousness is made to be known, that this is its destiny, that it could be no other way—hence the impossibility of skepticism regarding our knowledge of consciousness. By contrast, the rest of reality is known only by means of epistemic exertion or contortion—that such knowledge requires effort and may fail (hence the real possibility of skepticism). Even if I know elementary arithmetic and geometry with certainty, that knowledge does not come to me without effort and risk—it is not given. It is acquired, secured, gained. But consciousness simply decants itself into my knowledge faculty, freely and unstintingly, with no obstacles or qualifications. It says, “Here I am, take me!” No other object of knowledge surrenders itself with such abandon (and these romantic metaphors are suggestive): everything else is coy, cagey, and reluctant by comparison. Consciousness offers itself without hesitation, on a platter, but even simple arithmetic exacts some epistemic cost—you have to think about it. Yes, I am certain that 2 + 2 = 4, as I am certain that I have a sensation of dappled sunlight, but in the former case ratiocination is required (or at least an act of insight or intuition), whereas in the latter case my certainty stems from something presented to me and not requiring anything of me. I simply know without effort or question that I have the sensation: there is no striving to find out, no mental concentration, no slight unease that I might be wrong. It is knowledge without anxiety or stress or expenditure of energy. The knowledge simply comes with the fact known, instead of calling on reserves of epistemic capital–some intellectual contribution, however minimal (it isn’t hard to know that 2 + 2 = 4). 
By contrast, third-person knowledge of consciousness requires real cognitive effort and is fraught with anxiety and risk: it requires diligence and determination. It is work to know another’s consciousness, maybe futile work, but for the one whose consciousness it is the job can be done lying down. It is not a job at all, not a task or project, but simply part of being conscious. The knowledge simply happens without your having to lift a finger: consciousness automatically updates you about itself free of cognitive charge. You don’t even have to do as much as cock an ear or slant an eye. 

            I hope these rhetorical flights carry some resonance, but they hardly constitute a theory. They may capture some of the phenomenology of knowing one’s own consciousness, but they don’t tell us what this unique epistemic relation consists in. Consciousness and knowledge come together somehow, with all the ease and naturalness I have described, but we still don’t know how—by what process or mechanism or miracle (and one can feel the temptation to go that way). There are metaphors to play with (an activity not to be despised), but no clear theoretical conception attends their use. So we really don’t know what makes knowledge of consciousness special, or how it works, or what it is, or what makes it possible. It is a familiar fact of conscious life, and something additional to mere ground-level consciousness, but it resists analysis or elucidation. I know that I know my consciousness in a special way, but I don’t know what that way is. I can’t get my mind around it. All I can say is that consciousness and knowledge are made for each other.  [1]

 

  [1] Other things are not made to be known—sometimes they seem made not to be known. Much of the world systematically eludes knowledge, or at least challenges it. The microstructure of matter is not made for knowledge. Knowledge is generally an achievement, sometimes against all odds, but knowledge of consciousness is a gift, a freebie, a no-brainer—it requires no intelligence and no effort. There are no examinations in consciousness knowledge (everyone would get an A).


Performatives and Self-Reference

 

 

 


By uttering the words “I promise” a speaker can promise; he or she promises in virtue of uttering words. So we might expect performative utterances to allude to words as well as use them. Normally they do not take this form, containing no quotation or demonstrative reference to words. I say, “I promise to meet with you” and my utterance appears devoid of reference to words: all use, no mention. Yet we have the construction “hereby”, as in “I hereby promise to meet with you”. This seems to carry self-reference: I am saying that my promising is by means of my utterance. Others can report, “You promised to meet with me by saying ‘I promise to meet with you’”, and here the reference to words is evident. So I should be able to make things explicit in the same way, and the “hereby” construction suggests that I am incipiently doing just that. Can I also make the self-reference explicit?

            Surprisingly, it is not easy to do that, and it never happens in actual speech. Suppose I say, “By uttering these words I promise to meet with you”: this is not equivalent to the original performative and is obscure in sense. Which words—all of them or some? Let’s try this instead: “By uttering the words ‘I promise to meet with you’ I promise to meet with you”. This is even worse: it is not even clear that such a sentence can be used to make a promise. At best it might be taken to mean that uttering the words “I promise” is making a promise—which is not performative. Applying this kind of paraphrase to the performative sentence robs it of its performative power and turns it into a maladroit clunker. But how else could we make the self-reference explicit? If there is self-reference here, it is not like “This sentence is false” or “‘Snow is white’ contains three words”. The performative with “hereby” in it works perfectly well, but if we try to unpack it by means of standard devices of self-reference we produce monsters. We seem to have in performatives an unusual kind of self-reference: the utterance alludes to itself indirectly, but it declines to expand into explicit reference to itself. It is a kind of coy or oblique self-reference. It doesn’t fit the standard examples of self-reference by means of quotation or demonstrative reference.

            If I say, “I promise to meet with you by uttering ‘I promise to meet with you’”, that sentence appears to mean only that my meeting with you will be expedited by my utterance of those words. We can’t convert the implicit self-reference of the original performative into an explicit paraphrase involving straightforward quotation. The self-reference is essentially implicit—yet another oddity of performative sentences. It is the same with other performative verbs such as “baptize”: “I hereby baptize you Mary” is fine, but “I baptize you Mary by saying ‘I baptize you Mary’” is not fine. We nullify the act by referring to it; yet the act must be performed for the sentence to be true. If I say, “By this act I promise to meet with you”, I bring reference to the act of promising into my utterance, but then the utterance fails to carry its intended performative force. If I try to refer to the act of promising in my promise, I undercut my promise; yet that act must exist in order that I should promise. The performative is a peculiar beast—the platypus of speech acts.

 

Colin McGinn


Why Is There Nothing It’s Like to be a Rock?

 

 


We divide the world into two big classes: the conscious beings and the non-conscious beings. The mind-body problem concerns the conscious beings: we want to know what makes it the case that a being is conscious. This is an explanatory question: we seek an explanation for the presence of consciousness in certain beings—we want to know why those beings are conscious. More specifically, we want to know what properties of the brain explain the presence of consciousness. How does consciousness emerge from the brain? What features of the brain give rise to consciousness? And the problem is that we find it difficult to discover any features that explain the emergence of consciousness in those cases in which it emerges. The conscious beings seem similar to the non-conscious beings in all fundamental respects except consciousness, so we can’t see why we have consciousness in some cases but not in others. We need to identify something common and peculiar to all the conscious cases, but at the physical level we find homogeneity between conscious beings and non-conscious beings—it is all molecules in motion, roughly. By rights, we feel, the conscious beings shouldn’t even be conscious, given their similarity to the non-conscious beings: but they are—puzzlingly, inexplicably. Thus we declare the existence of consciousness in a physical world a mystery, more or less deep. We don’t know what accounts for the presence of consciousness in the world.

            I have no wish to rehash this familiar story here; my mission, rather, is to note that there is an exactly parallel problem concerning the absence of consciousness in non-conscious beings. Hence my title: why is a rock not conscious? This is also a mind-body problem—the problem of explaining the lack of mind in certain things. And the problem is the same as before: these non-conscious things are too similar to conscious things for it to be intelligible that one class of things has consciousness while the other does not. We don’t know why there is no emergence of consciousness in the things we take not to be conscious—which raises the possibility that these things might be conscious after all. What properties do rocks lack that make us so sure they are not conscious? If consciousness can arise from physical brains, seemingly miraculously, why can’t it arise from other physical configurations in the same way? In any case, we have a problem explaining why consciousness is distributed as we suppose it to be, since we don’t know what explains its presence or absence.

            In other cases we can explain absence quite easily. If we want to explain the absence of liquidity in ice, we appeal to the fact that frozen water does not permit the constituent H2O molecules to slide over each other. If we want to explain why snow isn’t hot, we observe that it has low molecular motion. If we want to explain why rocks don’t photosynthesize, we point out that they don’t contain chloroplasts and chlorophyll. If we want to explain why fleas are not good at arithmetic, we note the absence of a complex brain. Since we know what makes a thing have these various attributes, we also know what makes a thing lack them. But we don’t know what explains the presence of consciousness, so we can’t cite the absence of that to explain the absence of consciousness. If electrical activity in the brain were the basis of consciousness, then we could simply cite its absence to explain why non-conscious things are non-conscious–but that is precisely what we don’t know. And if we tried to use that test, we would end up spreading consciousness far more widely than we tend to, electricity being virtually everywhere.

            The problem already arises within the organic world. Some organisms are conscious, but it is not generally supposed that all are—some insects and worms are not, nor are bacteria and viruses, nor are plants. Yet all are living things, some of them even containing neurons. How do we explain the lack of consciousness in these organisms, given its presence in other organisms? And if we can’t, isn’t it dogmatic to restrict the attribution of consciousness in the way we do? Aren’t we being entirely arbitrary in our ascriptions of consciousness, possibly even anthropocentric? We have an absence-of-mind problem in these cases: we can’t say what it is about the body of the organism that precludes it from being conscious—just as we can’t say what it is about the body of other organisms that guarantees that they are conscious. Thus the mind-body problem takes two forms: explaining presence and explaining absence. Each form generates a potential skeptical problem: on the one hand, suggesting that consciousness might be an illusion, since it cannot be explained in terms of the body; on the other hand, suggesting that consciousness might be everywhere, since we can’t find a way to confine it as we customarily do.  [1] How do we rule out eliminativism and how do we rule out panpsychism? It isn’t that these extravagant doctrines are entailed by the two sides of the mind-body problem, nor even that they have any intrinsic plausibility, but they do arise naturally once the full explanatory problem is confronted. In particular, it isn’t easy to rebut the claim that consciousness might be more widespread than we tend to suppose—as with the popular possibility of consciousness in trees. For what is it that trees lack that excludes them from the class of conscious beings? They are multi-cellular, DNA-based organisms that adapt to their environment—why should the presence of a clump of squishy grey matter be regarded as the decisive criterion?
We are accordingly admonished to keep an open mind.

            Putting arboreal sentience aside, we should acknowledge that the mind-body problem applies more widely than traditionally recognized—not just to things with minds but also to things without minds. The problem is an explanatory one, and it applies as much to the absence of consciousness as to its presence. What explains the absence of consciousness in the brain stem or in peripheral nerves but its presence in the cerebral cortex or other central nerve tissue? Why are bats conscious but not gnats (or slugs or jellyfish)? Why is there nothing it is like to be a rock? The null phenomenology needs to be explained as much as the brimming phenomenology. No matter how much we learn of a gnat’s brain or a rock’s mineral structure we will not see why these things lack consciousness. At the end of our inspection we will only be able to say, “It might be conscious for all we know”. We can’t say this after examining the internal structure of ice and wondering about its liquidity: it can’t be liquid given the molecular situation. By contrast, the lack of consciousness in some things is puzzling in view of the similarities between conscious things and non-conscious things. It is not that the presence of consciousness is deeply mysterious while its absence is not; both are mysterious.  [2] It is a mystery why rocks are not conscious (they contain molecules, just like conscious brains). We can explain why rocks can’t sing or walk or bend, but we can’t explain why they don’t have their own form of rock consciousness—just as we can’t explain why we do have our own form of human consciousness.

            The dualist claims that nothing about the body can explain the mind, so we need to recognize a separate reality to accommodate the mind. We should not expect to find anything in the brain that could add up to the mind. Applying this way of thinking to the rest of nature, we get the result that the inadequacy of bodies in general (including rocks) to ground consciousness is no reason to deny that they are conscious—all bodies might have an associated mind that exists in its own right and without benefit of bodily foundation. How can such a doctrine be refuted? Not by pointing out that brains are inherently capable of producing consciousness by virtue of possessing a special property P, while rocks conspicuously lack P—since we know of no such property. If we insist on absence in the one case but not in the other, we incur the charge of mental chauvinism. We can’t explain absence, according to dualism, by pointing to the lack of intrinsic grounding properties in the rock, just as we can’t explain presence by pointing to suitable grounding properties in the brain. Absence is as puzzling as presence. Thus we have a not-mind/body problem—the problem of explaining why some things don’t have minds. Even zombies have a mind-body problem—the problem of why they are zombies. I think myself that we can be confident that the way we distribute minds across nature is more or less correct (extending consciousness as far as many insects), so I am convinced that there is a deep difference between things that are conscious and things that are not.  [3] But I accept that I have no idea why consciousness is present in some cases and absent in others. To me it is a mystery why rocks are not conscious, though I am morally certain they are not. I just don’t know why there is nothing it’s like to be a rock.

 

Colin McGinn

  [1] I take these to be forms of extreme skepticism, analogous to other kinds of philosophical skepticism; I don’t think they are serious real possibilities. But they do point to areas of ignorance and opacity.

  [2] If for some reason we had decided (wrongly) that the brain is not the organ of consciousness, we would find nothing to challenge our opinion in the known properties of the brain. We would regard the brain as we now regard a rock—as somehow obviously not a seat of consciousness. The mystery is why some things are conscious and some things are not, given that they seem not to differ all that dramatically (the substance is the same).

  [3] That is, I believe we have adequate evidence for ascribing consciousness as we do (though all evidence is fallible), but I don’t believe we have any explanation for the presence of consciousness in some beings and the absence of consciousness in others. Thus the lack of consciousness is a mystery just as the presence of consciousness is a mystery; indeed, they are aspects of the same mystery.


The Language of Emotion

                                                The Language of Emotion

 

 

Proponents of the language of thought typically don’t have much to say about emotion. We are said to deploy an internal language when we think, but it is not suggested that we do so when we feel. Internal speech is characteristic of thought but not of emotion—we don’t “feel in words”. And the same might be said of desire: the idea of a “language of desire” has not met with enthusiastic acceptance (or even formulation). Language has to do with the cognitive part of the mind not the affective. Perhaps theorists think that the affective part of the mind is what we have in common with non-linguistic animals and so is not an appropriate object for linguistic explanation; only thought calls for linguistic representation. Emotion and desire are like bodily sensations: no one thinks that pain and pleasure should be analyzed linguistically—to be in pain is not to say to oneself “That hurts!” and the taste of pineapple is not an inward utterance of “Lo, pineapple”. Emotion just doesn’t have this kind of intellectual sophistication: it has no grammar or logic, no internal discursive structure. Emotions, like sensations, don’t entail each other or have subject-predicate structure. So it may be supposed.

            But is that true? Take fear: we can fear that p as well as fearing x. For example, yesterday I feared that I would collide with a car that pulled out in front of me. Fear has propositional content: at the moment I slammed on the brakes I was afraid that I was about to have an accident. This is the same proposition that I believed to be true—in fact, I feared its truth because I believed it to be true. I thought that a collision was imminent and so I feared that a collision was imminent. We can recognize this connection between mental states without committing ourselves to a cognitive theory of emotion: it is simply a fact about our psychological economy. Of course, if emotions are thoughts (or essentially incorporate thoughts), then we can derive a language of emotion directly from a language of thought, but even without that assumption it is evident that emotions are (or can be) propositional. If emotions of fear have propositional content, then they have logical form, in virtue of the propositional object of the emotion. And so they have logical entailments—the content of my fear entailed, for example, that someone was about to have an accident. But then the case for a language of emotion is exactly as strong as the case for a language of thought, insofar as the latter case rests on the propositional content of thoughts. One of the main arguments for LOT is the productivity of thought, but emotions are also productive in this sense, since they involve conceptually structured propositions—so we have the same argument for LOE. I can fear that I will not be selected for clemency just as I can believe that I will not be selected for clemency, and I can fear that I will be captured by the enemy and then tortured just as I can believe that conjunctive proposition. I can fear the same propositions that I can believe, including those built by logical operations like negation and conjunction.
Thus emotions are logically structured, combinatorial, finitely based, and potentially infinite—just like beliefs and thoughts. If there is a LOT, then there must be a LOE.

            It might be wondered whether emotion verbs accept every complement clause that cognitive verbs accept. Can we fear everything we can think? Can we feel sad about every state of affairs that we can believe to obtain? Can we be disgusted by everything to which we can assent? For example, I can believe that necessarily 2 + 2 = 4, but can I fear that necessarily 2 + 2 = 4? Can I feel sad that gravity obeys an inverse square law? Can I be disgusted that Hesperus is Phosphorus or that modus ponens is a valid rule of inference? With sufficient ingenuity we could probably contrive situations in which each of these peculiar emotions could be felt, though they are certainly not part of the normal run of things. But we don’t need to establish full correspondence between thought and emotion in order to recognize that emotions have an extraordinary variety of complex propositional objects, and that they therefore qualify for linguistic analysis given that thoughts do. Just as we think in a language, so we feel in a language—the content of our emotions has a linguistic underpinning. Other animals may not feel in a language, just as other animals may think without deploying an internal language (possibly thinking in images). But human emotions, like human thoughts, have a degree of conceptual sophistication that invites the idea of a LOE. Indeed, if we call the human LOT “Mentalese”, we can say that the LOE is also Mentalese: we feel in the same language in which we think. Why would we (or our genes) deploy two distinct languages for these two tasks? And if the propositional character of emotions derives from their cognitive component, we would expect that Mentalese would simply carry over to LOE. Thought and emotion would then share a common underlying symbolic system, with the same grammar and lexicon.  [1]

            The picture that results regards Mentalese as an internal language suitable for deployment in both thought and emotion (as well as desire, since we have complex logically related desires too). We might take it to be neutral between cognitive and affective uses, not privileging thought over emotion. It is not that we first have a language specifically of thought and then co-opt it to serve our emotions; rather, we have a neutral language that can be deployed for both thought and emotion. The Mentalese language faculty is a psychological module ready to be exploited by different parts of the mind—a general machine that can be used for different purposes. It doesn’t have thought built into it any more than it has emotion built into it; it’s more abstract than that. It is a language of mind generally (LOM). Thus LOM can be employed as an LOT or as an LOE. Some theorists might wish to go even further in divorcing LOM from thought specifically by suggesting that emotion and desire are primary in the mind. These theorists might maintain that desire and emotion precede thought in evolution, and that they require a symbolic medium in order to achieve their purposes optimally. Thus there was an LOE before there was an LOT: LOT is a later adaptation grounded in LOE. Maybe LOE evolved in fish long before anything deserving the name of thought arrived; then thought came along and recruited LOE for its purposes. There is no need to privilege the cognitive just because one adopts an internal language theory of mental operations. To put it differently, a computational model of mind is not committed to taking thought to be primary in the mind. Conceptually structured emotions (or desires) might be more basic than conceptually structured thoughts. Emotions are clearly important biologically, as well as being ancient, and having a sophisticated structure clearly aids their effectiveness. The affective is discursive.

            When it was believed that thoughts consist of mental images the idea of a language of thought held little appeal; similarly for the theory that thoughts are behavioral dispositions. It took appreciation of the propositional nature of thoughts for LOT to gain traction—theorists had to accept that a thought is always a thought that p. Likewise, if we think of emotions as bodily sensations (as with many traditional theories), or as dispositions to behavior, then we will not appreciate their propositional nature. But once we accept that fear and sadness are fear and sadness that p, we are prepared to accept that emotions are underwritten by an internal symbolic system. The important move in both cases is accepting the correct logical analysis of ascriptions of thought and emotion. [2] Once philosophers had grasped how reports of thought worked they were ready to take the plunge into LOT, but they don’t seem to have appreciated that emotion reports are much the same, so that a dip into LOE might be indicated too.

 

  [1] There are also such attitudes as hope and trust: these are clearly propositional and close to belief and thought. If thought comes with an internal language, surely hope and trust do. But these attitudes have an emotional dimension, so we are already close to a language of emotion. In fact, the whole distinction between thought and emotion is quite artificial, so we should expect a general theory that subsumes both. 

  [2] I mean such things as referential opacity, the de re/de dicto distinction, the connection between entailment and logical form, the notions of sense and reference, semantic externalism, and so on. These are the things that encouraged philosophers to postulate a language underlying thought (Fodor needed Frege and Quine), but the same points apply to emotion and desire. The mind is thoroughly propositional, a subject of that-clauses.


Freedom and Bondage in Psychology

                                   

 

 

Freedom and Bondage in Psychology

 

 

Chomsky has long urged that the use of language is stimulus-free. The point in itself is obvious, as many of the most important points are, but it stands opposed to entrenched ideas. I will make some remarks about its interpretation and significance. The notion of a stimulus was introduced in the nineteenth century in connection with psychophysics. Just as a physical cause leads to physical effects that can be studied and measured, so a physical cause leads to psychological effects that can be studied and measured. Just as we can establish quantitative physical laws relating physical magnitudes, so we can establish psychophysical laws that relate physical and psychological magnitudes. For example, we can investigate how visual sensations depend on physical properties of external objects, discovering that the intensity of the former varies with the intensity of the latter (the Weber-Fechner law). Here the stimulus is light and the response is a visual sensation. We thus arrive at the idea that physical stimuli produce and are lawfully correlated with sensations conceived as internal psychological occurrences: we can predict the latter from the former according to quantitative laws. The stimulus elicits the response—makes it happen in a predictable manner. This makes the relationship similar to that between purely physical causes and effects—as in Newton’s laws of motion. Just as mass and force lead to acceleration, so light intensity leads to visual vividness. We can thus model psychophysics on physics: both involve law-like relations between magnitudes, with stimulus and response acting like physical cause and effect. This gives the experimental psychologist something tangible to work on, namely mapping the laws that connect physical stimuli with psychological responses. A science can thereby be constructed.
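For readers who want the quantitative law spelled out: in its standard formulation, the Weber-Fechner law says that the magnitude of a sensation grows as the logarithm of the intensity of the stimulus (the constant and the threshold vary with the sense modality):

```latex
% Weber-Fechner law: sensation magnitude S as a function of stimulus intensity I,
% where I_0 is the threshold (just-detectable) intensity and k a constant
% specific to the sense modality.
S = k \,\ln\!\left(\frac{I}{I_0}\right)
```

Equal ratios of stimulus intensity thus yield equal increments of sensation—exactly the kind of lawful stimulus-response mapping that the S-R paradigm sought to generalize.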

            This basic model was extended by subsequent behavioral psychology: now the response is taken to be overt behavior not subjective quality, and the laws relate physical stimuli to observable behavior. But the basic theoretical model is the same: stimuli eliciting responses in a lawful quantitative manner. We might say that psychology, so conceived, is the science of the laws of elicitation—how stimuli elicit responses, reliably and measurably, just as physical causes elicit physical effects. Thus stimulus-response (S-R) psychology was born: the experimenter could vary the stimuli at will—their intensity, frequency, probability, and timing—and discover the properties of the response. S-R psychology is a kind of physics of the behaving subject. From this perspective it is not that the response is behavioral that is crucial (it wasn’t for psychophysics); it is that the response is something elicited by the stimulus—caused by it, predictable from it, tied to it. Hence theorists took to speaking of evoking responses by stimuli or stimuli triggering responses: responses are stimulus-dependent, stimulus-bound, stimulus-controlled. There is no response but that a stimulus makes it so—nothing is free from stimulus bondage. It might take some ingenuity to find the stimulus, and we may have to accept merely stochastic dependencies, but fundamentally all behavior is determined by outside stimuli. The brain (mind) is accordingly an S-R mechanism—a device for transmitting stimuli to responses; it plays no autonomous role in generating behavior—everything traces to those eliciting stimuli. Everything psychological works in the same way as it does in psychophysics—this is the model and paradigm.

            It is against this background that Chomsky’s point is to be understood. For his point is that language use—speech acts, utterances—does not conform to this model, contrary to the precepts of S-R psychology. What a person says on an occasion is not controlled by the stimuli impinging on that person—not elicited, evoked, or triggered. It may be semantically related to the environment of the speaker, but it isn’t fixed by that environment—the person might say something completely different in an identical environment. Still less is the utterance shaped by the surrounding stimuli: it doesn’t vary according to variations in the stimulus, say by growing louder or softer, or more or less interesting. The properties of the “response” have nothing to do with the properties of the “stimulus”. We should drop talk of stimulus and response here altogether: the utterance is not a response to anything in the array of impinging energies. The S-R model is quite inappropriate to the case. It follows that the mind-brain is not an S-R mechanism, at least where language is concerned. It works according to quite different principles. Nor is this a matter of merely statistical S-R relations: there is no law at all connecting environment to utterance. The environment might suggest a remark to the speaker, but there is no sense in which it elicits a remark. By contrast, a physical stimulus can elicit a perceptual response, but it doesn’t “suggest” such a response. On no account should we conflate these two relations. Linguistic behavior therefore does not fit the S-R model, but requires a quite different theoretical treatment.

            All this strikes me as unexceptionable and I won’t defend it further. I am interested in extending the point beyond language and in exploring the resulting picture of the human mind (and maybe the minds of other animals). First, we should note that the point is not that all behavior or psychological response is stimulus-free: much of it is stimulus-bound and fits the S-R model. The patellar (knee) reflex is an obvious example, along with the blink reflex and the salivation reflex. Chomsky’s point is that speaking is not like that. Conditioned reflexes are the same: stimulus-bound not stimulus-free. Likewise for perceptual responses: the perception is elicited by the stimulus (distal or proximal) and its properties depend on those of the stimulus (though not exclusively). The perceptual systems are S-R systems (which is not to say they are simple or mechanical). What you see is what you are made to see not what you make to see; but you make utterances and are not made to. The mind is therefore composed of two sorts of system: bound and free. It is not all bound or all free, but a combination. Just as speaking is not like seeing, so seeing is not like speaking: seeing an object is not like speaking in its presence—it is not a kind of commentary on the passing show. It doesn’t have that kind of freedom; it is not a kind of language (“That looks like a duck”). We must not make an error of assimilation in either direction. The mind is both reflexive and reflective; both bound and unbound.

            What else is stimulus-free? I suggest that thought is. What a person thinks is not elicited by his or her environment, though it may be suggested by it. Thoughts are not responses to environmental stimuli. A person can think indefinitely many different thoughts in the same stimulus situation, since thoughts do not occur because of triggering by the stimulus situation. Their causation (if that word is appropriate) is endogenous. Your thoughts can roam far and wide while your senses are stimulated thus and so. There is no supervenience of thought on outer stimuli. A stimulus may prompt a particular thought, but it may not; thought operates autonomously, without reliance on triggering stimuli. Beliefs may be elicited by perceptual stimuli, but thinking is another matter. So thought is like speech: stimulus-free and endogenously generated. Speech is an external action while thought is something internal, but both differ from perception and reflexes. In fact, I am inclined to suggest that language is stimulus-free because thought is stimulus-free: our thoughts lead to our speech acts (though not as stimulus to response) and our thoughts are stimulus-free, so our speech acts are likewise stimulus-free. We say what we do because of what we think, but what we think is not controlled by impinging stimuli (unlike what we perceive); therefore free speech (in this sense) is underwritten by free thought. At some point in evolution the mind detached itself from S-R relations (a happy mutation!) and developed operational autonomy, thus producing thought; language then built upon that foundation, developing its own autonomy. The decoupling of world and mind is what stimulus freedom amounts to, and both thought and language share it. If we conceive of thought in terms of the language of thought, then the autonomy of thought rests upon the autonomy of the internal language—utterances in the language of thought are stimulus-free. 
But then it is the stimulus freedom of the inner mental utterance that is basic, not the outer vocal performance. In any case, language and thought are clearly intertwined when it comes to the impotence of the stimulus. And this is not a matter of the “poverty of the stimulus”, because thought and speech are not responses to stimuli at all, however poor and meager. They don’t result from an inadequate stimulus but from no stimulus. S-R psychology simply doesn’t apply to them.

            A certain kind of dualism takes shape around these observations—a dualism within the mind. Human minds (and maybe others) have two very different kinds of faculty, according as the faculty is stimulus-bound or stimulus-free. S-R psychology applies to one sort of faculty, including perception and motor reflexes (conditioned and unconditioned), but it fails to apply to another kind.  [1] We must not try to assimilate one kind to the other in either direction. We would therefore expect that different sorts of theory are appropriate to the two sorts of faculty, with one sort mirroring theories in physics and the other not. As I remarked, psychophysics modeled itself on Newtonian mechanics; but the theory of thought and speech will not be like that, because of inherent stimulus freedom. This prompts the question of why such a division exists: what is it about speech and thought that makes them stimulus-free? Is there something about their intrinsic structure that leads to their stimulus freedom? The obvious place to look is productivity—the potential infinity of sentences and thoughts grounded in finitely many combinable elements. Is this essential creativity the reason for stimulus freedom? Or are these two independent features of speech and thought? The natural hypothesis is that they could not be productive in the way they are if they were stimulus-bound. Suppose we try to imagine a speaking creature whose speech acts are as rigidly governed by eliciting stimuli as our reflexes and perceptions are. This creature cannot speak unless it is triggered into doing so by an impinging stimulus, as we cannot see without being stimulated to see. Its verbal behavior is subject to S-R psychology. And this is not just a motor limitation; it can’t even enunciate sentences in its head without an external stimulus. This is hard to get one’s mind around, so alien is it to our own relation to language. One feels the creature would just be blurting noises out, not speaking. 
We construct our sentences as we utter them (or a bit before), but these putative speakers are not constructing their utterances, just emitting them.  [2] We actively produce our utterances according to plan, but they passively emit noises when a stimulus strikes them. They can’t help speaking, but they can’t speak at will either. How could their utterances be productive if they don’t produce them—if they are extracted not created? The faculty of speech is essentially creative, given productivity, but then it can’t be elicited by stimuli in the way perception and reflexes are. Thus speech is necessarily stimulus-free. Active combination is its essence, but that is not compatible with the S-R model. We make utterances; they are not elicited from outside. And the same is true of thought for essentially the same reasons: thoughts are actively constructed complex entities not pre-existing fixed entities; so we can’t conceive of them as triggered in the S-R style. If thoughts were so triggered, they would not be thoughts, but something more like perceptions. What it is to be a thought involves stimulus freedom; this is not a contingent feature of thoughts. Thought, like speech, is essentially free—not an inescapable response to an exigent stimulus.  [3]

            We can therefore say that the form of sentences and thoughts is internally related to their being stimulus-free. If we call this form “grammar”, we can say that grammatical events are necessarily stimulus-free: what is composed in a certain way must come about in a certain way—freely, not by bondage to a stimulus. Form dictates etiology. Given that sentences have grammar, they must be stimulus-free; the price of stimulus bondage is loss of grammar. We can certainly imagine a machine that produces sounds like the sentences of a human language when stimulated to do so, but that is not to say that those sounds have grammar. In order to have grammar the sounds must be produced in a certain way (a derivational history), and that way excludes stimulus elicitation. The question is like asking whether genuinely creative acts could occur as responses to stimuli, and the answer is that these are incompatible attributes. Likewise, it is strange to suppose that a perceptual state could come about without a triggering stimulus—in the way that we speak and think. For that is incompatible with its defining attribute: a free-floating visual percept, produced at will, independent of any eliciting stimulus, would not be a genuine percept, but something more like a markedly visual thought. Perceptions must indicate the actual impinging environment and hence be responses to stimuli from that environment; they cannot be stimulus-free in the manner of language and thought. You change the nature of the thing if you change its mode of occurrence (or the laws of its occurrence). Language and thought, with their distinctive intrinsic structure, don’t just happen to be stimulus-free–as perception doesn’t just happen to be stimulus-bound. This distinction of mode of occurrence marks a deep ontological division in the mind. We might even say that we have two minds: an S-R mind and a non-S-R mind. 
We feel this division in ourselves all the time, as we register perceptions triggered by the environment and entertain thoughts stemming from who knows where. We consist of reflexes and spontaneities—bondage and freedom. We are slaves to stimuli and yet we are able to rise above them; we are not purely one thing or the other.

 

  [1] Readers who sense an affinity between what I say here and Fodor’s The Modularity of Mind are not mistaken. Encapsulation and reflexivity are connected properties.

  [2] Reflexive vocalization occurs in us too: we blurt out a vocal response to a stimulus, e.g. when injured and in pain. But these elicited vocalizations are not structured utterances, merely cries. This is not reflexively triggered speech.

  [3] I have not discussed other mental phenomena that raise the same question: dreaming, mental imagery, seeing-as, memory, knowledge of various kinds (perceptual, mathematical, moral, modal, introspective), logical reasoning, desire, emotion, intention, aesthetic response, and whatnot. About each of these we can ask whether they are stimulus-free or stimulus-bound (or possibly a mixture of the two), but I won’t attempt to answer this list of questions now–except to state my view that dreaming is free and memory is bound.


How to Solve the Problem of Other Minds

                                    How to Solve the Problem of Other Minds

 

 

Brain splicing—that’s how. Suppose I want to know whether you have a mind, and if so whether it is like my mind. I am particularly concerned to know whether you have visual experiences like mine. So I arrange to have part of your brain transplanted into my brain (let’s suppose the technology is available): I have my visual cortex excised and yours inserted where mine used to be. If you are a zombie with no visual experience, then the brain splicing will leave me without visual experience. But if your brain is like mine and does generate conscious experience, then I will regain visual experience after the operation (and you will lose it). I can therefore conclude that you are (were) a conscious being. Moreover, I can resolve the question of whether you see the world as I do: I can determine whether, say, you have an inverted spectrum. If the splice leaves me seeing colors in the same way I saw them before, then I can conclude that you see (saw) colors as I do; but if I start seeing what I used to see as red as green, then I can conclude that you had an inverted spectrum. I wanted to know whether your brain has a property whose presence I couldn’t determine by ordinary observation, because of the inherent privacy of that property, so I simply join a part of your brain to my brain and see what happens experientially, thus determining whether or not your brain has the property in question. I use introspection to settle the question, aided by brain splicing. Thus I resolve the other minds problem once and for all. I perform an experiment the outcome of which is knowledge about which other beings have minds.

            I could do the same with non-human brains. I just have bits of these brains spliced into mine and then I record the results. I could come to know what it is like to be a bat this way. I could settle the questions of reptile and insect consciousness. I could even in principle perform the splicing test on trees (with some pretty fancy equipment). All I need is a way to hook brains up with other brains. The case is just like having a brain transplant made of non-biological materials—such as a silicon-based replacement for my failing visual cortex. The replacement might duplicate the functional properties of my old nervous tissue, but it may also fail to generate the experiences I used to enjoy. Experimenters could try out different materials to see which, if any, produce consciousness (maybe only neural tissue just like the original does). My brain has the property of consciousness, which I can detect by means of introspection; so I just need to join another brain to mine in order to find out if it shares this property. There is nothing conceptually problematic about this; it is merely technically infeasible (at present). So I can in principle solve the problem of other minds: I can finally really know whether other minds exist. It will not be a matter of inference or conjecture anymore, but of introspective fact.

            Someone might object that I can't strictly infer the existence of other minds from a positive outcome in the splicing experiment, because it might be that the transferred tissue only becomes conscious when it leaves the other person's head and enters mine. It was insentient in its original home, its possessor being a complete zombie, but inserting it into my wonderful head makes it light up with consciousness. All I strictly know is that it is conscious now, not that it was conscious before; so I can't infer that its original owner must have been conscious. The point must be conceded: we have no strict logical deduction here. But surely the far more plausible position is that the brain tissue has the same causal powers in both locations, your head or mine. It is entirely reasonable to suppose that what I notice in myself already existed in you. Why should my head, which is just like yours, be able to inject consciousness into your brain tissue, while yours could not? No, if your brain tissue generates consciousness in me, then it also generated consciousness in you: you were conscious all along. Are we to suppose that if I give the tissue back to you it will suddenly cease to have the consciousness that it had when it was in my custody? We should rather accept this law: if a brain part is conscious in a particular subject, then it will be conscious in any other subject; and if it is not conscious in a particular subject, then it will not be conscious in any other subject (ceteris paribus). So we shouldn't worry that the logical point undermines our ability in principle to solve the problem of other minds.

            The problem of other minds is therefore in principle solvable. None of the other standard skeptical problems could be solved in this way—there is no point in splicing a physical object into my brain to see if I can tell whether it has the property of being real. That will just raise the old problem again: how can I infer from impressions of external objects that there are external objects? It seems to me that I have a table spliced into my brain, but that might be a dream or a hallucination. I have no possible mode of access to physical objects save through sense perception, but that leaves me vulnerable to skepticism. But I do have a possible mode of access to other minds that precludes skepticism, just as skepticism can be precluded for knowledge of my own mind—for I can determine the existence of other minds via brain splicing and introspection. This is the only philosophical problem I can think of that could in principle be resolved experimentally. As things stand, we must remain in doubt on the question, given our existing modes of access to other minds; but in principle the question could be resolved quite decisively. I find that rather comforting.

 
