Barbados Trip

I am going to Barbados on August 2nd for two weeks and will not be posting anything during that time. So, my absence is not an indication of some sort of disaster. I will, however, have access to comments.

On “What is it Like to be a Bat?”

Thomas Nagel’s great paper “What is it Like to be a Bat?” begins resoundingly enough: “Consciousness is what makes the mind-body problem really intractable. Perhaps this is why current discussions of the problem give it little attention or get it obviously wrong.” These words slip smoothly off the tongue and make an impression. They have certainly been influential. The paper itself can be credibly credited with beginning the current craze for consciousness studies (I am part of that craze myself). However, I have come to believe that they are massively misleading and in fact completely wrong (!). This doesn’t detract from the cogency of the main argument of the paper, since it can be otherwise formulated; but it does affect the correct interpretation of that argument and its relevance to consciousness. In short, I don’t think the argument has much to do with consciousness as such, however we choose to interpret that word. I am well aware that these are heretical statements, so I propose to take it slowly and carefully. We must pay strict attention to the language.

When Nagel says that consciousness is what makes the mind-body problem really intractable, we must take him to be contrasting the attribute of consciousness with other possible candidates for centrality. What might these be? He doesn’t say, so we must fill in the gap; these other attributes of the mind are supposed less intractable. He doesn’t claim they are not intractable at all; he implies that they are not really intractable—not as intractable as consciousness. It is consciousness that provides the severest form of intractability. I think we may presume that what he has in mind, or would mention if pressed, are such attributes as intentionality, privacy, invisibility, privileged access, infallibility, freedom, indivisibility, and unity—anything philosophers have supposed to characterize the mind distinctively. It is the fact that the mind is conscious that really puts the cat among the pigeons—makes the mind-body problem deep and hard. Presumably, then, the unconscious is not among the things that make the problem especially intractable. There are, according to Nagel, a number of aspects of the mind that pose mind-body problems, some of them quite intractable, but none as intractable as consciousness: it is the nub, the heart, the nerve. Thus, we need to address it specifically: we need to ask what it is about consciousness that makes it so hellishly problematic. Then we will see the true magnitude of the mind-body problem, and also understand why materialists steer clear of mentioning it—for they see the extreme difficulties it poses for their program.

The first question we must ask is what Nagel means by “consciousness”. This is not at all obvious initially, because that word and its cognates have different uses and definitions. One might be forgiven for not knowing exactly what Nagel means by “consciousness” in his opening line. He does, however, quickly step in to clarify the matter by employing the phrase “what it is like”: to be conscious is for there to be something it is like for the creature in question (bat or human). As he later acknowledged, he got the phrase from Brian Farrell and Timothy Sprigge, though it is now strongly associated with him. It certainly has a ring and pungency to it (a meme waiting to happen). There are questions about whether all conscious states have the property in question,[1] but Nagel focuses on perceptual sensations in the body of the paper, so we need not be concerned about its general applicability—except to note that some instances of consciousness may not be problematic in virtue of having the designated property. At any rate, sensations have it paradigmatically. Whether what it’s likeness really serves to define consciousness in general is a moot question; what matters is that some mental states have it and raise questions discussed in the paper. It becomes the operative notion as the paper proceeds. We need not mention consciousness explicitly again; we can stick to the property picked out by the phrase in question. It turns out, then, that the issue concerns this property in relation to perceptual sensations, particularly those of bats when they echolocate. Whether this property is really necessary and sufficient for consciousness is beside the point; the problems will still arise even if it is neither (as some have contended). The arguments go through just as well without taking a firm stand on the issue, though it is heuristically helpful to link the notion to consciousness. But then the talk of consciousness is playing no integral role in the argument. This is good in a way, because we wouldn’t want those arguments to depend on a dubious definition of consciousness (as devotees of higher-order thought theories have suggested). We can detach the two questions. This is also helpful if we wish to extend the argument to unconscious mental states: for it seems questionable that unconscious mental states are perfectly tractable, or more tractable than consciousness. The underlying problem concerns sensations, conscious or unconscious; that’s what the paper is really about (it could have been called “Echolocation Sensations and the Mind-Body Problem”). It is what it’s likeness that is really intractable for materialists—what Nagel later calls subjectivity. For this property can only be grasped by people who share the property, unlike physical properties. But then talk of consciousness drops out—and so does consciousness itself if we reject the definition Nagel offers. The whole issue can be formulated without even using the word “consciousness” or any synonym. The crux of the problem, according to Nagel, is the epistemic subjectivity (self-centeredness) of the property of what it’s likeness; what this has to do with consciousness can be left open.
You can be a complete skeptic about consciousness, an eliminativist, and still accept Nagel’s argument; or you could be a higher-order thought theorist; or a self-ascriptive speech act theorist; or an electrical brain wave theorist—you would still have to contend with the point that there is something it is like to have a sensation, and hence a self-centeredness to mental concepts not matched by physical concepts. To put it bluntly, the argument really has nothing to do with consciousness; the concept is not essential to it. The first sentence of the paper could have been “Sentience is what makes the mind-body problem really intractable”, given that the argument will concern perceptual sensations—though we would need to define “sentience” so as to include pre- or sub-conscious perceptions if they also can only be grasped by sharing the perceptions of the other.[2]

Why did contemporary theorists avoid talk of consciousness in their defenses of materialism? Is it because they sensed trouble lurking there, as Nagel suggests, or might it be that they found it hard to give a precise meaning to the word (is being awake that much of a problem?)? Perhaps they thought it was best understood as self-ascription of psychological predicates, in which case they felt they already had that under their belt. Maybe there was no evasiveness, just confidence that they had it tamed. The problems concerned the actual features of mental states (e.g., intentionality) not whether people were conscious of them. So, I doubt intellectual dishonesty was at the root of their neglect. They might indeed have been mightily impressed with Nagel’s basic argument, but just doubtful about what it had to do with consciousness as they understood it (“self-perception” or some such). The concept of consciousness plays no real role in Nagel’s argument and invites irrelevant objections (“I don’t agree with your definition of consciousness”). What is true is that what it’s likeness is linguistically awkward, perhaps conceptually awkward, so that we have no ready term to use when mounting arguments about the denoted property. Thus, we latch onto “consciousness” as a convenient shorthand; but the dictionary provides no such definition[3] of the term and it is vulnerable to attack for obscurity and indeterminacy of extension (do memories have it?). The whole topic is a linguistic mess, frankly. It therefore thrives on buzzwords and scare quotes. The issues are real but the language is sloppy and inadequate.

I have talked about the first two lines of Nagel’s classic paper, but what about the title? It too gives a misleading impression, though it is obviously catchy. Many people seem to think it is about the problem of other minds; it isn’t. It’s about the mind-body problem and the prospects for physical reduction. But we would certainly be within our rights to observe that it is not about what it’s like to be a bat, or for any animal to be the animal it is; it is about what certain sensations are like, and are. We know what it’s like to be a bat in many respects, but that isn’t the question; the question is whether we can grasp the concept of echolocation experiences given that we can’t echolocate. It isn’t about understanding other animals—a worthwhile endeavor—it’s about our concepts of experience and their dependence on our own specific “point of view”. I’m not criticizing Nagel for giving his paper the title he did—a title is just a title—but careless readers might easily get the wrong idea. Calling it “Can We Know What Echolocation Experiences are Like?” would be more accurate, if less memorable.

Have we been barking up the wrong tree in pursuing “consciousness studies”? It might be said that the points I have raised are merely verbal; we all know what we mean and our talk of consciousness can always be paraphrased away in terms of subjectivity and what it’s likeness. There is much truth in this sanguine assessment, but it is alarming that we have been carried away in recent years by sloppy language and lazy thinking (I include myself). The word “consciousness” has become a sexy meme, a type of profundity-signaling, a form of advertisement. Nagel’s words had more power than he knew.[4]

[1] Suppose I am conscious that I am late for a meeting: is there something it is like to be in this mental state that is common and peculiar to all instances of it? Won’t different memories and emotions go through the minds of different people in the same state? And is this what it is to be conscious you are late? How is what it’s like related to the propositional content of such a conscious state? The notion is infuriatingly vague and elusive when applied generally.

[2] Perhaps it would be better to say that there are several attributes of the mind that create difficult, even intractable, mind-body problems, drawn from the list I gave; consciousness is one of them. There are many mind-body problems, not a single problem focused on consciousness (whatever we choose to mean by that word). Nagel put one neglected problem on the map (epistemic egocentricity, to give it a name), but then the map became too dominated by this problem, now grandly called the problem of consciousness.

[3] The OED gives “aware of and responding to one’s environment” for “conscious”; no mention of what it’s likeness. Clearly, there is no synonymy here.

[4] One often hears people talking about the wonderful new subject of “human consciousness”, as if psychology and philosophy have at last come to grips with what matters to all of us—our own lives and experiences. This is obviously completely wrong. It isn’t all warm and fuzzy and humanistic; it’s about whether animal experiences can be reduced to brain processes—a very old and quite dusty academic subject. The mind-body problem isn’t about getting yourself off the sofa to clean up the kitchen when you really don’t feel like it.

Conscious, Unconscious, and What It’s Like

What is the relationship between consciousness and what it’s like? There are two questions: is the latter a necessary condition of the former, and is it sufficient? We could add a third question, which is really the central question: what is the real essence of consciousness—its intrinsic nature, its mode of being, what it is? It is sometimes said that what it’s like is not a necessary condition, since abstract thoughts don’t have it, or beliefs, or theoretical understanding; yet they are conscious. Less commonly, it has been argued that it is not sufficient, since unconscious mental states have it. The intuition is that what it’s likeness doesn’t logically entail consciousness. Here we must be careful about the word “conscious”: if it just means “feeling”, then surely what it’s likeness entails consciousness, since consciousness is a matter of how something feels. But if we mean something more like “being conscious of” or “consciously known”, then intuitions waver: why should we necessarily be conscious of or know about our feelings? Couldn’t there be something it is like to be in a certain unconscious state and yet the person not be conscious of this, not know about it? Sensations don’t logically require such higher-order states in order to exist. There seems no contradiction in the idea that unconscious mental states have subjectivity (“likeness”) but are not the objects of any conscious acts of knowing. Take the case of seeing a bunch of eight dots but not counting them: couldn’t it be true that the sensation is distinctively of eight dots and yet the perceiver doesn’t know this and is not consciously aware of it? The concepts seem to allow this degree of daylight between them: we can separate the concept of what it’s likeness from the concept of being conscious of—the former not entailing the latter. Also: couldn’t you be in a general state of depression and not be conscious of it? Isn’t the mind a two-level system—primitive awareness of things and a more sophisticated awareness of that awareness? Can’t there be subjectivity without consciousness? After all, there can be degrees of consciousness but not of subjectivity. An animal may be more or less conscious, but it can’t be more or less a being there is something it’s like to be. I can be in a state of being more or less conscious of what is going on around me (or within me), but it makes no sense to say that one sensation has more what it’s likeness in it than another. The terms have a different “logic”.

These are treacherous waters; it is easy to fall into conceptual confusion. I want to advance the discussion by making a (relatively) clear point. Suppose I want to know the nature of a bat’s unconscious mental states as it echolocates; I already know I can’t know the nature of its conscious echolocation states. Can I know that? Reflection shows that I cannot know it in the unconscious case either: for the unconscious mind is not reducible to purely physical states of the bat’s brain, which I can know. Yet I might be reluctant to describe the bat’s perceptual unconscious using the phrase “what it’s like”, since I don’t think unconscious mental states are like anything. Then why do I think I can’t know the bat’s unconscious mind in this case? Because if it were conscious, I wouldn’t be able to know it. It is of a type such that conscious expressions of it are not graspable by me. I am not supposing that it has what it’s likeness built into it when unconscious; I am supposing that if it were to become conscious, I would be unable to grasp its nature—as I am of the bat’s conscious echolocation experiences. I therefore think that if my inability so to grasp is an indication of irreducibility, then the unconscious states are as irreducible as the conscious states. I think they are essentially of the same nature, with one conscious and the other unconscious. Thus, the bat’s perceptual unconscious is as much an obstacle to physicalism as its perceptual consciousness, so far as my reductive abilities are concerned—because I understand its unconscious via my understanding of its consciousness, which in this case is limited.

Perhaps the point will be clearer if I talk about Freud. You are told that the child has an Oedipus complex: he sexually desires his mother, but this desire is unconscious and always will be. How do you understand what is being said? On the face of it, unconscious desires are strange and incomprehensible things—you know what conscious desire is, but what is the unconscious kind? The answer is obvious: you know what the unconscious desire would be if it were conscious. You think, “The unconscious desire is just like the conscious expression of it (except for being unconscious), and I know what that is”. You use your knowledge of consciousness to form an idea of the unconscious (but not as conscious). But this means that the limitations of your knowledge of consciousness carry over to your knowledge of the unconscious—that is, you can’t know more about the contents of the unconscious than you can about the contents of consciousness. If a psychoanalyst starts talking about the unconscious of Vulcans, you will get lost when she comes to describing unconscious mental states completely alien to you, precisely because of your own restricted consciousness. You are epistemically limited by your own (contingent) phenomenology in both cases—the conscious minds of others and their unconscious minds. This is a roundabout way of saying that you have a subjective (self-centered) conception of the unconscious. But that means that the unconscious is just as recalcitrant to physicalism as the conscious, neither more nor less. In other words, we don’t have to attribute what it’s likeness to the unconscious in order to derive the result that the unconscious is as problematic as the conscious. If the latter is a mystery, then so is the former. If the unconscious were intrinsically endowed with what it’s likeness, then certainly it would pose the same explanatory problems as consciousness; but it doesn’t have to be so endowed in order for the same conclusion to follow. All we need is the assumption that we understand the unconscious via the conscious, by considering what the unconscious state would be like if it were conscious. And really, we have no other way—without this we would draw a complete blank. To repeat: I know what a particular unconscious state is by knowing what it would be like if it were conscious—it would be like this. If I couldn’t form this thought, I wouldn’t know what Freud was even talking about.

This is quite a strong result, because it tells us that consciousness is not uniquely problematic among the denizens of the mind. The unconscious shares the enigmatic character of consciousness as conceptually perspective-dependent: only someone mentally similar to the other can form the requisite concepts. It also underscores the point that we are conceptually impoverished with respect to the unconscious: we really have no conception of its intrinsic nature save by invoking our grasp of consciousness (ironically enough). So, our grasp of it is doubly limited: it is parochial and it is extrinsic (indirect, superficial). In the case of the bat, we can’t grasp the experiential type and we can’t grasp what kind of thing an unconscious mental state of that type (or any type) is. We are limited in the former way with respect to conscious experiences, but at least we have knowledge of the nature of consciousness itself (we can feel it inside us). We have a single mystery for the bat’s conscious experience but a double mystery for its unconscious experience (if we allow this word for unconscious perceptual states). Realism about the unconscious implies mystery as much as realism about the conscious does, even more so. This stands in contrast to anti-realist positions about the unconscious—that it is simply the physical brain, or mere dispositions to conscious states, or just a useful fiction. In particular, we don’t need to claim that the unconscious has what it’s likeness in order to argue that it has the problems of what it’s likeness. The epistemology is much the same either way. We are not cognitively better off trying to understand the unconscious than trying to understand the conscious; in fact, we are worse off.[1]

[1] This paper goes with my “An Even Harder Problem”. I am aware that these are intricate and taxing questions that strain comprehension.

An Even Harder Problem

People like to talk about “the hard problem”, meaning the problem of consciousness. The phrase itself invites scrutiny: it contains the definite article and thus implies uniqueness, unlike “a hard problem”; and “hard” is a gradable adjective, with the comparative and superlative forms “harder” and “hardest”. So, we must ask: is the problem the only hard problem, and what problems is it harder than? I am concerned here with whether there are any problems that are harder than it: it may be hard but that doesn’t imply that no problems are harder; it might be, logically, that many problems are harder. It would then not be the hard problem singular, but just a hard problem among other hard problems, some of which are even harder. It might even be an easy problem compared to these, and certainly not harder than they are. In a ranking of problems, it might be somewhere in the middle—harder than some, easier than others. The really hard problems might sniff snootily at it and call it a doddle or other nasty names. It might even be terminal for human minds but still not that hard in the broader scheme of things that includes problems not soluble by any conceivable form of intelligence. If it turned out that the majority of problems were harder than this problem, it would be semantically correct to call it “an easy problem”, because to be a hard problem it would need to be harder than most problems, and it isn’t. The phrase itself begs many questions and may not be very helpful in the long run.

What other problems might be deemed hard problems? Many problems are hard relative to the capacities of chosen types of intelligence; they are all hard relative to some. Hardness is a relational characteristic, being an epistemic notion in its current use. For us humans now, we can list a bunch of notoriously difficult problems: the origins of space and time, the nature of gravity, the origins of life, the possibility of free will, the biological point of dreams, the workings of creativity, the nature of mathematical knowledge, the grounds of ethical judgment. What, by contrast, are the easy problems? I suppose we could list many of the problems of astronomy, physics, chemistry, and biology: the structure of the solar system (heliocentric), the basic laws of mechanics (Newtonian), the chemical composition of water (H2O), the origin of species (Darwinian). These problems have been solved and are thus ipso facto easy, unlike the list of so-called hard problems. In the case of evolution, we might even say that the problem is intrinsically easy, though it took a long time to hit upon it: it’s just a matter of differential selection applied to antecedent species (see Darwin’s Origin of Species). Our question must be: are any of the hard problems harder than the problem of consciousness?

There is one feature of the problem of consciousness that stands out: we know what it is. We have, as the old philosophers used to say, an “adequate conception” of it, a “clear and distinct idea”. We are indeed intimately acquainted with consciousness, nothing more so, since we live with it every day. We don’t say “Consciousness, what is that?” (compare dark matter, or even electricity). We don’t know consciousness just by its effects, or merely structurally, or purely functionally; we know it intrinsically, personally, as it actually is. It is therefore surprising that we are also so ignorant about it: you would think it would be an easy problem! You would think we would say, “Oh consciousness, yeah, I know all about that, it’s just XYZ”. You would think its relation to the brain, which we also know about directly, would be perfectly transparent, intelligible, and long since figured out. But it isn’t so. The continuing puzzlement is itself puzzling. The point I want to make here is that the fact of direct knowledge suggests that other problems might be harder simply because we have no direct knowledge of their subject matter. Take the case of causation: ever since Hume we have accepted that causation is deeply puzzling, a mystery to the human mind. We believe in it but we don’t see it: we have no impression of the necessary connection in which causation consists; we don’t know its inner nature. We are acquainted with constant conjunction and individual events but not with the causal glue that binds them. The case is precisely unlike the case of consciousness, with which we are intimately acquainted. This puts causation on a different level from consciousness: we are not even in the know about what it is—we just have a word for we-know-not-what. This might well make the problem of causation harder than the problem of consciousness in the end; at least we can test theories of consciousness against our ordinary knowledge of its nature, which is what we can’t do with respect to causation. How could we verify or falsify a theory of causation? We perceive consciousness in ourselves whereas we don’t perceive causation anywhere.

I am softening you up for the really interesting case, which I hesitate to unveil. Is there anything about the mind that is also unseen and unperceived but which raises similar problems to consciousness? Is there anything that stands to consciousness as dark matter stands to matter? If so, it might well present the same problems as consciousness and then some—it would be an even harder problem. Would it be the hardest problem of all? That would be a bold and reckless conjecture, but it might well be a lot harder than the problem of consciousness—it might be really really f***ing hard. The question is worth asking, even if it is impossible to deliver a definitive answer. I am thinking, of course, of the unconscious—of the part of the mind that lies beyond the reach of introspection.[1] We might call it the “unknown mind”—the mind that is merely postulated not perceived, hidden not apparent. What are its characteristics? We don’t know—it’s hidden from introspective knowledge—but we can responsibly speculate. It is mental after all. First, it must surely have intentionality: be about things other than itself, representational, symbolic. Second, it must be similar to the consciousness with which it interacts and which it parallels; it may even slide into consciousness occasionally. Take unconscious perception: sub-threshold perceptions of color, say, must have a nature similar to supra-threshold perceptions. That is, they must have a phenomenology; there must be something it is like for them to exist—though we are not consciously aware of it. Aren’t pains we are not currently aware of also pains? Suppose this is so: the unconscious mind is both intentional and phenomenological. Then we can say that it has these characteristics without benefit of conscious awareness of them (as dark matter is presumably extended, though not visibly so). So, it is unlike consciousness in not being a datum of awareness; we are not directly acquainted with it. Yet it presents much the same explanatory problems as consciousness without such direct awareness. We have no “adequate conception” of it, knowing it only by inference, structurally, functionally. We really don’t know what we are talking about. So, the hardness of the problem is multiplied: it has the problems of intentionality and phenomenology but without our having any real grasp of the subject matter of the problems. This makes it harder than the hard problem of consciousness; the unconscious is the really hard problem of the mind. Sure, consciousness is hard (a lot harder than evolution), but it isn’t uniquely hard (that “the”), and it isn’t even the hardest of the hard problems concerning the mind. The non-conscious mind is arguably harder, more recalcitrant. In linguistics, finding an adequate grammar of conscious language is pretty damn hard, but discovering the grammar of the unconscious aspect of our language is even harder, because it is hidden away in the unconscious part of the mind. What is the generative grammar of the unconscious language of thought? That is an even harder problem. When the mind operates unconsciously it becomes even harder to understand than when it is open to view.

And what is the hardest problem of all? I don’t know. We have quite a few to choose from. The problem of consciousness is hard (harder than many problems); the problem of the unconscious is harder still (arguably): but what problem puts these to shame in the competition for supreme hardness? My money is on the origins of space and time, because I have absolutely no idea where you might even begin with this problem (or pair of problems); they certainly didn’t evolve from earlier species of space and time, or from bubble gum, or from God stuff. I don’t even think we have much idea about what they are. It’s a hard problem what the hardest problem is, but probably not the hardest problem.[2]

[1] I write about this in “The Mystery of the Unconscious” in Philosophical Provocations (2017).

[2] I think “the hard problem” is a phrase that appeals to people who don’t know much philosophy, or much science for that matter. Philosophy is full of hard problems (compared to animal husbandry, say) and science also faces many unsolved problems. It is hardly illuminating or informative to call the problem of free will or the problem of skepticism “hard problems”—of course they are, they have been around for thousands of years. They are hardly “easy problems” compared to other problems already solved. The phrase is more of a meme than an insight. It is an outright banality—junk thought. People think they are being profound when they say it; in reality, it is platitudinous at best.

Ethics and Other Minds

There is a close connection between ethics and the problem of other minds, which I have not seen remarked (though it is obvious). Belief in other minds supports commitment to ethics; skepticism about them undermines it. If you don’t believe in other minds at all, you can have no morality worthy of the name (except self-directed duties). An obligation to someone is an obligation to them qua mental entity—person, sentient being. One is not morally concerned about zombies. Contrariwise, if you feel a strong commitment to ethics, you will have a firm belief in other minds—this might even override philosophical skepticism. Ethics might be so important to you that you dismiss such skepticism without a second thought. If you were born with morality in your genes, you would be born with belief in other minds in your genes. This reason is independent of the explanatory uses of this kind of belief system (“theory of mind”). But of course, other minds are not a given, not like your own mind; there is no Cogito for other minds. Suppose there were: then morality would have a firm foothold in your mind, other things being equal. It would be as solid as prudence, epistemically. If, per impossibile, these were inverted, then prudence would be as vulnerable to skepticism as other minds are now, and morality would be as firm epistemically as prudence. That is, one’s own self would be as epistemically remote as other selves, and others would be epistemically close. As it is, however, the problem of other minds hovers over the authority of morality, so that it faces an uphill battle. Any reason to doubt other minds is a reason to doubt morality.

We humans do not have a solid general grasp of other minds, some of us less than others. Some people, apparently, don’t really believe in other minds as they believe in their own. They are called narcissists or psychopaths or autistic. None of us grasps animal minds as well as we grasp human minds. We tend to grasp the minds of those close to us better than the strange and distant. There is a pronounced proximity effect or similarity variable. This weakens our sense of moral obligation, empathy, concern for others. The more remote or alien a mind seems to us the less we feel moral sentiments regarding its possessor. And any cognitive deficiency in our conception of other minds will affect our moral attitudes. A consequence is that intelligence will tend to be correlated with moral attitudes and actions: the greater the intelligence, the greater the virtue (ceteris paribus). Very dumb people will not be morally responsible. Any culture that underplays the reality of other minds will weaken moral rectitude. Thus, we need education in the existence of other minds if we are to secure a sound morality. De-personalizing others must be discouraged and deterred. An anti-realist behaviorism about other minds must be strongly resisted. Tendencies towards solipsism must be fought against. T-shirts saying “Other Minds are Real” should be commonplace. Remedial therapy should be state-sponsored. It doesn’t much matter if people doubt the external world, because inanimate objects don’t suffer, but doubting other minds can easily lead to barbaric behavior. This is why political persecution is always accompanied by claims of psychological inferiority or attenuation (“They can’t suffer like us”). We need to be firmly convinced that other people have souls just like us (animals too). A political party should have in its manifesto the principle that other minds are real—“We are the party of other-minds realism”. Its platform is that other people (and animals) are not just bodies or numbers or behavers. They pullulate within.[1]

But it is a good question what the right conception of other minds should be in order for morality to be best served. What is the best way of thinking of other minds such that morality naturally follows? How do we make the best fit between the two? What is the best theory of other minds from a moral point of view? A natural first thought is that we must think of other minds as literally our own mind in someone else’s body; then we will pay it the proper respect. One can appreciate the motivation here, but it is surely misconceived: it is quite impossible to believe that your mind is literally possessed by everybody (unless we go for some kind of universal-mind view of reality). So, let’s relax this to require only similarity between one’s own mind and other minds. That’s better but is also far too strong: we don’t want to rule out moral concern for beings quite different from us psychologically (animals, children, aliens). This is why theorists retreat to such universal features as sentience or the capacity to suffer; but these tend to be too weak and don’t allow for gradations. I think we are in the presence of inherent vagueness, multi-dimensionality, borderline cases. Pragmatically, I think the best formula is something like this: regard others as reacting as you would react to what affects them (shades of the golden rule). If a certain kind of treatment would lead to pain and suffering in you, take seriously the possibility (or obvious actuality) that they will react in the same way and act accordingly. View them as reactively similar to you. If it would hurt you to have your foot stepped on, assume that it would hurt a dog or monkey to be similarly stepped on. Don’t think it’s just reflex behavior in them if it is more than that in you. The reason this is the best fit morally is simply that morality is largely about how others should be treated, so focus on reactions to treatment. Conceive other minds as centers of psychological reaction. Consider how they are affected by things. Not how they behave, save as signs of what goes on inside them. Consider the stimulus not the behavioral response. Think: “That would hurt!” And don’t underestimate the amount of hurt or its quality. Be a realist about psychological reaction: pain, trauma, sadness, depression, grief, suicidal tendencies—as well as pleasure, joy, and happiness. Morally speaking, the mind of the other is a center of value-laden reaction—affective effects. What are the effects of your actions on the affective life of others? This is what morality is all about, fundamentally. So, your conception of other minds needs to incorporate that dimension if it is to serve the purposes of morality. And the belief must be strong, unqualified, unbiased—you must really believe in the existence of other minds. You can ponder the other minds problem in your epistemology class, but don’t carry it into the marketplace—act promptly and decisively. Act as if there is no skeptical problem about other minds; for morality depends on it. We may regard this epistemic predicament as unfortunate—sometimes very unfortunate (what kind of God would allow it?)—but that is the way it is, and the stakes are high. You don’t want to end up making a mistake about it when other minds are as real as your own. Think of it as a matter of faith if that helps.
True, there is the risk that solipsism is true and all your moral sacrifices have been pointless (they were all zombies after all); but it is a much greater risk that other minds are real and you have spent your life doubting it and acting unethically. In practice this never happens, fortunately, but it’s good to be aware of the possibility. It does seem likely that many people are semi-skeptical (or ignorant) about other minds (or act that way), and not just the psychopathic narcissists; it takes a vivid imagination to recognize the full reality of minds other than one’s own. It would be salutary periodically to ask yourself, “Be honest now, do I really believe that other minds are as real as my own?” It can be convenient to underestimate this and act callously or insensitively, so it is wise (ethical) to be vigilant about it. When was the last time you treated someone as if he or she approximated to a zombie? Look deep into their eyes, feel their interiority—I guarantee you will be a better person for it.[2]

[1] Doesn’t it sometimes seem as if the human belief in other minds is only just strong enough to sustain a tolerable morality? If we believed it only a little less, morality would be in big trouble. After all, it is not as if the existence of other minds is easily demonstrable; and we all know how easy it is to make mistakes. You can’t tell simply by looking. Perhaps we are lucky we do as well as we do in forming this belief. Morality hangs by an epistemic thread.

[2] Do animals and young children have a properly formed sense of the existence of other minds, and hence satisfy a necessary condition for possessing a moral sense? Do adolescents? I rather doubt it: they just go by behavior without any thought of interior realities. Does the lion have any idea of what it is like for its prey to be held by its throat in the lion’s jaws? I doubt it. It probably never gives a thought to other minds and the impact of its actions. I suspect, too, that the individualistic self-advancement gung-ho culture of the United States erodes the natural human sense of the reality of other minds; but that is another story. It is why Americans are so easily taken in by simplistic psychological theories. Then there was slavery, a massive denial of the reality of other minds.

“Baby, I Love You”

The first thing I did after waking up from a twelve-hour operation on my neck was to vomit into a plastic bag (a normal reaction to all the anesthetic, I was told). I was semi-delirious on a gurney. The next thing I did was feebly sing the chorus to the song “Baby, I Love You” by the Ronettes[1] to my nurse Shannon, to whom I had just been introduced. I had been working on the song the previous week and it popped into my mind as I regained consciousness (along with my usual anger). They wheeled me up to the recovery room with an array of tubes poking out of me, including a catheter. I had the distinct feeling of not-being-dead. It was 9pm (I had arrived at the hospital at 6am with my girlfriend Morella). There was no way I was going to sleep that night, except fitfully. The song kept going through my head. I asked Shannon if she knew it and she played it on her phone. I was still memorizing the lyrics and would sing parts of the song to her throughout the night and the next day. It got me through the ordeal and she knew that. Whenever I sing that song now (this was two years ago) I think of that recovery room, and Shannon, and vomiting into a plastic bag.

[1] Amazingly, a cover of it was made by the Ramones, but the original is much better. Opening lines: “Have I ever told you, how good it feels to hold you? It isn’t easy to explain”.

William James on Mind and Brain

In chapter VI of The Principles of Psychology, William James writes: “The ultimate of ultimate problems, of course, in the study of the relations of thought and brain, is to understand why and how such disparate things are connected at all. But before that problem is solved (if it ever is solved) there is a less ultimate problem that must first be settled” (177). This is finding the “minimal mental fact whose being reposes directly on a brain-fact”. He concludes the chapter with these words: “nature in her unfathomable designs has mixed us of clay and flame, of brain and mind, that the two things hang indubitably together and determine each other’s being, but how or why, no mortal may ever know” (182). To a modern-day mysterian such as myself, these words have a remarkable prescience. First, James has hit upon the metaphor of a flame to capture the nature of consciousness, contrasting this with the clay of the brain—how does fire spring from clay? (My book is called The Mysterious Flame.) Second, he is more than willing to entertain the hypothesis that the mystery is irremediable: no “mortal” (aka human) may ever know, though a superior intellectual being might be able to resolve the mystery. If only he had seen that this carries no implications about the naturalness of the elusive connection! No compromise with ontological rationalism needs to be contemplated. No spirit, no soul, no divine intervention. Still, I am impressed.

South Park

South Park has thrown down the gauntlet: expect more of this or come after us. I don’t see that MAGA has any easy options, so this could be one of the most potent political moves so far. I don’t know what will happen, but it is a serious challenge in today’s world.
