Language, Self, and Substance

I will offer some sketchy remarks on meaning and the self in the light of the anti-substantialist view of the mind. First, there has to be something wrong with the Cogito as traditionally conceived, since the self (reference of “I”) is not a substance. We can’t say, “I think, therefore I exist as a substance”: the meaning of “I exist” cannot be that a certain substance exists, because nothing mental is a substance. The word “exists” here can’t be functioning as a predicate of a substance, and “I” can’t be a singular term denoting a substance; for then the sentence would be meaningless for lack of reference. What it does mean is obscure. Second, physical substances contribute to the meaning of sentences about physical objects, but so do non-substantial states of consciousness, since grasp of meaning implicates the ontology of consciousness. Meaning must be a combination of substance and non-substance, a peculiar hybrid. We are familiar with sense and reference; well, this is absence of substance and presence of substance. Meaning is going to be something rather special given its ontological underpinnings—a juxtaposition of external substance and internal lack of substance. It has an intelligible ontology and an unintelligible (to us) ontology at the same time. Third, sentences about the mind will find themselves in an awkward position: they are not about anything substantial, so their meaning cannot be fixed by the substance denoted. How then can they have meaning? Not in the way physical sentences do. They are about non-substance and also grasped by non-substance; so, they must be semantically quite different from sentences about substantial things. How can they have meaning? Must they have a pure use type of meaning, while sentences about physical things have a denotational substantial meaning? Fourth, the self cannot be a substance, so its nature and persistence through time cannot be like the nature and persistence of substances proper. 
Perhaps we need to look with more favor on etiolated psychological continuity theories, or admit complete bafflement. Nor can the word “I” function as a substance-denoting word: there is no such substance to be denoted; and it is unclear what else might be its denotation, if any. In sum, the elusive ontological status of the mind poses problems for standard theories. A semantics based on substance, such as our ordinary physical sentences demand, is inapplicable to psychological sentences, but nothing else suggests itself. Yet the sentences look very much the same. There is a real threat that psychological sentences can have no genuine truth-conditional meaning. A psychology without substances looks like a psychology that cannot be talked about. How can there be a science of such a thing? There can be no doubt that external substances play a formative role in the creation of meaning, but if the mind has no substantial ontology, it cannot play the same kind of role—so mental language ought not to be meaningful at all. This is a lot worse than indeterminacy of meaning, because now there are no (mental) rabbits to talk about, i.e., substance-like mental entities. Mental talk has no articulable subject-matter.[1]

[1] I feel paradigms shifting beneath my feet. Have we been complacently assuming a substance ontology for the mind in our theorizing about language and thought? What if we gave that up?


Ontology of Mind

What is the ontology of the physical world, and how do we conceive it? The best answer is that it is a substance ontology: the physical world consists of physical substances qualified by what are traditionally called accidents.[1] For example, animals, artifacts, and inanimate lumps—cats, tables, and chunks of gold. What are the marks of a substance in this sense? Substances are solid, cohesive, resistant, geometrical, extended, persistent, separate, self-subsistent, spatial, changeable, destructible, divisible, transmutable, located, causal, bearers of accidents, substrata of events and processes. You can see them, touch them, move around them, count them, and collect them. They are the routine objects of everyday life. The human body is a substance, as is the human brain. Basically, a physical substance is an extended thing in space instantiating a variety of properties. This is physical reality, and we conceive that reality by employing a conceptual scheme that recognizes substances and their attributes. It is familiar to the point of invisibility. It constitutes the ordinary non-mysterious world; we don’t look at a lump of coal, say, and think, “Wow, that is so mysterious!” It is, rather, the baseline from which we judge the mystery level of other things (numbers, values, universals, etc.). The physical world thus has a discrete segmented ontology and we view it in these terms: individual substances, kinds of substance, accidents of substance (attributes, properties), events occurring in substances, relations between substances. Our ontology of the physical is a substantialist ontology. It is intelligible and unmysterious (we are not amazed at the existence of physical substances). We are not inclined to infer the supernatural from the existence of substances. They are simply the form that matter takes—it clumps.

But what about the ontology of mind: is it a substance ontology? It doesn’t take much reflection to see that it is not. The mind is not on its face a physical substance; nor is it made of such substances; nor does it instantiate accidents in the manner of a substance. It presents substances in perception and thought, but it isn’t itself substantial; this ontological contrast is therefore evident within consciousness. Thus, the mind contrasts ontologically with matter; it isn’t the same old ontology located in certain organisms (it seems “queer”). Indeed, it appears radically opposed to such an ontology—it has a pronounced anti-substantialist character. Consciousness, in particular, is a substance-free zone. This, I think, is the key to its apparent mysteriousness: its ontology is obscure, elusive, and unique. We sense that it is not ontologically like other things, such as the body. Thus, it strikes us as mysterious—intrinsically, essentially. We might even want to say that it has no ontology, substantialist ontology being the only kind available. More cautiously, whatever its ontology is we have no conception of what it might be, since we are drilled in the ontology of substances and accidents. The only ontology we have, as theorists and ordinary folk, doesn’t apply to the mind, so we are bereft of an ontological framework for understanding the mind. We are ontologically blind with respect to consciousness. This leaves us in a state of bafflement about the nature of consciousness and the mind generally. We might try to force it into the substantialist framework, but this effort is doomed to failure (hence the many contortions and distortions of Western philosophy). We need to acknowledge that the mind draws an ontological blank; we can’t extend our basic ontological scheme to it. Substance ontology is all we have and it won’t cut it with respect to the mind. We are suffering from a bad case of ontological cognitive closure. 
Huge swathes of Western philosophy (and Eastern) have labored under this deficit (egos, homunculi, immortal souls, beetles in boxes, ghosts in machines, machines in machines, mental corpuscles, etc.).[2]

Symptoms of the disease have surfaced. Some have tried to preserve the framework by changing the subject matter—hence immaterial substance. But this has been half-hearted at best and faces well-worn objections. Such a putative substance is really nothing like its prototype in the physical realm; we just have a label for a limping analogy. Others have bitten the bullet and swallowed it whole: I am referring in particular to Sartre, Ryle, and Wittgenstein. Sartre views the conscious mind as ontologically nothing but pure nothingness. Ryle reduces it to hypotheticals about behavior. Wittgenstein goes public and expressivist. For them, there can be no real reality without substance, so they deny the reality of the mind (without saying as much). Still, they are responding to a genuine lacuna—we have no ontology of mind worthy of the name. Then there are the outright eliminativists about the anti-ontological mind. But the most popular move has been a quiet revamping of the substance ontology—eliminate such talk and replace it with talk of events and processes. True, there are no mental substances, material or immaterial, known or unknown, but there are still mental events—occurrences, happenings. So, we do have a viable ontology of mind—an event ontology. The trouble with this maneuver is that the event ontology we actually have relies on substances as vehicles of events: events occur in substances, happen to them, presuppose them. Events without substances are not ontologically kosher; they are like accidents without substances to inhere in. We have no clear conception of substance-less events (how would they be individuated?). True, the mind undergoes changes, but to transfer event ontology from its original home in physical substances while leaving the substances behind is a hopeless project. There cannot be free-floating mental events. Nor can these be said to occur in the brain-as-substance, since we have no conception of how this is possible.
We really possess no viable ontology of the mind when you get right down to the nuts and bolts. We just have words loosely used. Introspection does not disclose a mental substance in which mental events and processes occur; it is nothing like seeing a physical substance in space. All we have is a kind of stipulation about how we are going to use language to get a grip on mental ontology; but this has no epistemic foundation—it is just so much hopeful handwaving. We acquired our substance ontology from basic facts about biological evolution and perception, but it was never designed to accurately represent the ontological structure of the mind, so it signally fails to do so. The mind must have an ontology of some sort, since it clearly exists, but we are not privy to that ontology. This is not something that Western philosophy has ever come to grips with—hence the need for revision and re-invention. All the talk of souls, selves, immaterial spirits, and the like is a reflection of ontological ignorance, a vain attempt to keep our old substance ontology in place. Similarly for event ontology and process philosophy. And if we don’t even have an adequate ontological framework for the mind, we are unlikely to be able to resolve metaphysical questions about it. There really are no mental individuals or events or processes or states—or none that we can get our minds around. All this is just illegitimate employment of the substance ontology that applies so smoothly and naturally to the physical world. In fact, we have no workable idea of what the mind is, i.e., its ontological categories. The whole model of a unitary substance instantiating a plurality of properties breaks down and we have nothing to put in its place. We don’t know what kind of thing a sensation or thought or self is, except via rough and misleading analogies. 
The way we talk about the mind is thus strictly meaningless, because the ontological scheme that could make it meaningful does not carry over to the mind. Our language of the mind is a kind of inarticulate babble that we find useful for practical purposes. The sentence “I am in pain” is semantically really nothing like “This table has four legs”: substance ontology applies to the latter but not the former.[3]

[1] See Michael Ayers, Locke, for a careful exposition and defense of substance ontology. I will simply assume it in what follows.

[2] The ontology of “ideas” so prevalent in the history of philosophy might itself be a reflection of a presupposed substance ontology, this time at the corpuscular level. These are the smallest atoms of the mind, discrete persistent entities that combine to form larger wholes. But they are really nothing like physical atoms that bear the stamp of macro-substances: they are elusive, evanescent, not clearly discrete, and hard to pin down (where are they, how are they individuated, how do they cohere?). The mind as a receptacle of nuggets of mentality is hard to resist; the alternative is a kind of sea of indeterminate stuff (and even this image is too dependent on material paradigms). The ontology of mind is peculiarly ineffable. This is not surprising if our conceptual scheme is shaped from the bottom up by the substance ontology.

[3] I will say more about language and meaning in a later paper, given the anti-substantialist view of the mind. This will go along with a consideration of the self and “I”.


What Makes Consciousness Mysterious?

Today it would be widely agreed that consciousness is mysterious, rather mysterious or extremely mysterious. It would not, however, be widely agreed what makes it mysterious—what precise characteristic confers the mystery. Some would say there is no mystery at all: consciousness is nothing but higher-order thought or self-ascription or not being asleep or activity in the reticular formation. I won’t discuss these views, as they strike most of us as non-starters. More promising are the ideas of intentionality, privacy, incorrigibility, and non-spatiality. These don’t seem inherently mysterious, however, though it may be that it is mysterious how the brain contrives to produce them. I don’t think many people look within, find these characteristics, and think, “Wow, that’s so mysterious!” So, granted that consciousness is mysterious, it is a bit of a mystery what makes it so—a mysterious mystery, as we might say. We can say what makes gravity a mystery, or dark matter, or the workings of black holes; but we find it difficult to identify the source of the felt mystery in the case of consciousness. It is certainly close at hand and not afraid to present itself, but it is obscure what makes it stand out as a mystery—how it differs from other natural phenomena in the mystery sweepstakes, particularly the brain.

Here is one popular answer: consciousness is subjective while other non-mysterious things are objective.[1] To be more specific, it has a peculiar epistemic property, viz. that it can only be known by beings that share its particular character. It is mysterious because it has this property (the brain doesn’t). We tacitly recognize that it is epistemically restricted in this way, so we deem it mysterious. It isn’t simply that it has the what-it’s-like property—why exactly is that a mystery? It’s that this property gives rise to the peculiar epistemic situation in which we find ourselves—knowing what it’s like to be human and not knowing what it’s like to be a bat. However, I don’t think this is plausible as an explanation of our sense of mystery with respect to consciousness. First, is this really how consciousness immediately strikes us when we sense its mystery? Isn’t it rather an ingenious (though correct) point about the epistemology of consciousness? The point might never have occurred to you (it takes some arguing for) even though you have a primitive sense of mystery about consciousness. It seems too surprising to be the explanation we are looking for. Second, what if we suffered from no such epistemic limitation—would we feel no mystery?  Suppose we happened to have a mechanism in our brain that reliably produced the knowledge in question (bats and all): would consciousness then seem devoid of mystery? Doubtful. Third, what if we developed an objective phenomenology that enabled us to comprehend any type of experience no matter how remote from our own? Again, would the sense of mystery then disappear? Would consciousness no longer seem like a thing set apart, a metaphysical oddity, a natural wonder? No, we don’t seem to have put our finger on what exactly gives rise to the feeling in question. Isn’t there something more intrinsic and irremediable about the mystery of consciousness? The epistemological point seems too extrinsic and contingent. 
So, the mystery of the mystery remains—we haven’t been able to specify what it is about consciousness that makes it so mysterious. And this is a problem, because then we are defenseless against the claim that there is really nothing mysterious going on—we have a false sense of mystery. We don’t understand what makes consciousness an especially intractable problem if we can’t say whence the impression of mystery arises. We might expect that answering this question will reveal something deep about consciousness and our conception of it. I will attempt to answer the question in the sequel.

[1] See Thomas Nagel, “What is it Like to be a Bat?”


The Barbados Files

My trip to Barbados was not intended to be a “working holiday”. On the contrary, it was intended to be a non-working holiday (totally in vacanza). I made a point of not taking my computer with me just in case I felt tempted to write something. (I had a companion.) However, a couple of days before leaving Miami I had the beginnings of some ideas that it seemed worth writing down; so, I brought some paper and a pen with me just in case I needed to jot down a few further thoughts. To my surprise, these grew and multiplied and eventually I had to find a store that would sell me more paper. Moreover, during the second week a new set of ideas began to sprout, sometimes even at the beach! So, I needed to make a note of these as well. I didn’t spend hours at a time writing this stuff down, but I was at it pretty much every day for an hour or so. I began to have a startling thought: I am re-inventing Western (and Eastern) philosophy. Oh no, please! Picture me on my last day there in the Barbados airport in a fast-food joint (Chefette) writing down my remaining ideas on metaphysics and epistemology before I boarded the plane home (my companion had departed earlier that day).

I say all this to announce an intention: in the coming weeks I will be writing out these new thoughts and publishing them here seriatim. I will do this in the usual form: short pieces stitched loosely together. There is a lot to get through. The first part will be on metaphysics; the second on epistemology. The Barbados files will become the Miami chronicles.


An Argument Against Idealism

Imagine we lived in a world in which idealism was the dominant philosophy (in fact, it is the actual world, but that’s another story). The prevailing doctrine is that everything that exists is mental, i.e., a state of consciousness. Not just the mind itself but also the so-called physical world—mountains, molecules, the brain. All these things consist of ideas in the mind, specifically sense experiences of certain kinds—for example, episodes of seeing the thing in question. To be a brain, say, is to be a sense experience as of a brain. Physical objects are really collections of sense-data, as the terminology has it. These sense-data may exist in the human mind or God’s mind or just in my mind (solipsistic idealism). Physical objects are reducible to such mental entities; this is their essence, their mode of being. Everything has a mental nature.

But suppose there is opposition to such a doctrine: some people firmly believe, eccentrically, that some things are inherently not conscious states; their essence is to be not-conscious, mind-independent. Their battle-cry is, “The not-conscious world is what makes idealism metaphysically impossible”. They reject the claim that physical objects are reducible to sense experiences. The idealists wonder what else these objects could be: it would have to be something unknown, of an unfamiliar nature, a mysterious substance of some sort, subject to skepticism. It would have to be something there is nothing it is like to be, but what would that be? Everything we really know is like something—visual experiences, pains, etc. They have trouble getting their minds around this supposed non-mental reality: how can we even think of it? However, the anti-idealists have an argument that they believe can break this stalemate of intuitions and settle the question once and for all. It is quite an ingenious argument, centering on a conceptual claim. They point out that everything conscious is such that there is something it is like to be it: therefore, it is only possible to grasp a given type of mind by sharing it. You only know what it is like to be a human by having human experiences—it can’t be done from no “point of view”. It takes one to know one; you can’t (fully) know what it is like to be a bat because you aren’t one. This property of consciousness they call subjectivity: you can only form a concept of another’s mental state if you are a subject similar to the other subjectively—if you are mental birds of a feather, so to speak. But this is not true of physical states: these you can grasp even if you don’t share them—you don’t need to be a mountain to know what a mountain is; you don’t need to share a bat’s brain to know what a bat’s brain is. In short, no point of view is built into physical concepts: they can be grasped from any point of view.
You can do physics if you are human, Martian, or Vulcan—or any species with the requisite intelligence. You can do geometry even if you are not yourself similarly geometrical; you don’t need to be Euclidean in order to understand Euclidean geometry, say. The physical may be defined as whatever there is nothing it is like to be, so there is no like that you must share in order to grasp it. But then, it is not possible to reduce the physical to the mental, because these are concepts of a different order, denoting properties of different kinds. It would not be possible, say, to reduce the physical brain to sensations as of brains, because such sensations would embody a distinctive subjective quality graspable only by beings that have similar sensations; but that is not true of the brain itself, because that can be grasped by beings with arbitrarily different types of sensations. Thus, there must be more to reality than is contained in sensations of reality—the physical cannot be the mental. Physical things are essentially objective: they can be grasped from many points of view and therefore cannot be subjective in nature. Things there is nothing it’s like to be cannot be reduced to things there is something it’s like to be. Therefore, idealism must be false: it tries to explain the objective in terms of the subjective. It cannot be an accurate account of things that lack consciousness, precisely because it explains everything in terms of consciousness. The body cannot be explained in terms of the mind, on pain of subjectivizing the body.

Clearly, this argument parallels a familiar argument against reducing the mind to the body; it simply reverses that argument. If the familiar argument is valid, then so is this one. Thus, that argument defeats both materialism and idealism in one fell swoop. It can be used against Hobbes but also against Berkeley. It can be used against physical anti-realism as much as against reductive physicalism. Basically, the argument is that a color-blind man cannot understand color vision but he can understand the brain science of color vision (and the rest of physics). For if he could not, the brain would not be a physical object, which it is. The objective cannot be reduced to the subjective, as the subjective cannot be reduced to the objective. The view from somewhere cannot be reduced to the view from nowhere, and the view from nowhere cannot be reduced to the view from somewhere.[1]

[1] Is this argument perhaps just a little bit too powerful?


Barbados Trip

I am going to Barbados on August 2nd for two weeks and will not be posting anything during that time. So, my absence is not an indication of some sort of disaster. I will, however, have access to comments.


On “What is it Like to be a Bat?”

Thomas Nagel’s great paper “What is it Like to be a Bat?” begins resoundingly enough: “Consciousness is what makes the mind-body problem really intractable. Perhaps this is why current discussions of the problem give it little attention or get it obviously wrong.” These words slip smoothly off the tongue and make an impression. They have certainly been influential. The paper itself can be credibly credited with beginning the current craze for consciousness studies (I am part of that craze myself). However, I have come to believe that they are massively misleading and in fact completely wrong (!). This doesn’t detract from the cogency of the main argument of the paper, since it can be otherwise formulated; but it does affect the correct interpretation of that argument and its relevance to consciousness. In short, I don’t think the argument has much to do with consciousness as such, however we choose to interpret that word. I am well aware that these are heretical statements, so I propose to take it slowly and carefully. We must pay strict attention to the language.

When Nagel says that consciousness is what makes the mind-body problem really intractable we must take him to be contrasting the attribute of consciousness with other possible candidates for centrality. What might these be? He doesn’t say, so we must fill in the gap; these other attributes of the mind are supposedly less intractable. He doesn’t claim they are not intractable at all; he implies that they are not really intractable—not as intractable as consciousness. It is consciousness that provides the severest form of intractability. I think we may presume that what he has in mind, or would mention if pressed, are such attributes as intentionality, privacy, invisibility, privileged access, infallibility, freedom, indivisibility, and unity—anything philosophers have supposed to characterize the mind distinctively. It is the fact that the mind is conscious that really puts the cat among the pigeons—makes the mind-body problem deep and hard. Presumably, then, the unconscious is not among the things that make the problem especially intractable. There are, according to Nagel, a number of aspects of the mind that pose mind-body problems, some of them quite intractable, but none as intractable as consciousness: it is the nub, the heart, the nerve. Thus, we need to address it specifically: we need to ask what it is about consciousness that makes it so hellishly problematic. Then we will see the true magnitude of the mind-body problem, and also understand why materialists steer clear of mentioning it—for they see the extreme difficulties it poses for their program.

The first question we must ask is what Nagel means by “consciousness”. This is not at all obvious initially, because that word and its cognates have different uses and definitions. One might be forgiven for not knowing exactly what Nagel means by “consciousness” in his opening line. He does, however, quickly step in to clarify the matter by employing the phrase “what it is like”: to be conscious is for there to be something it is like for the creature in question (bat or human). As he later acknowledged, he got the phrase from Brian Farrell and Timothy Sprigge, though it is now strongly associated with him. It certainly has a ring and pungency to it (a meme waiting to happen). There are questions about whether all conscious states have the property in question,[1] but Nagel focuses on perceptual sensations in the body of the paper, so we need not be concerned about its general applicability—except to note that some instances of consciousness may not be problematic in virtue of having the designated property. At any rate, sensations have it paradigmatically. Whether what it’s likeness really serves to define consciousness in general is a moot question; what matters is that some mental states have it and raise questions discussed in the paper. It becomes the operative notion as the paper proceeds. We need not mention consciousness explicitly again; we can stick to the property picked out by the phrase in question. It turns out, then, that the issue concerns this property in relation to perceptual sensations, particularly those of bats when they echolocate. Whether this property is really necessary and sufficient for consciousness is beside the point; the problems will still arise even if it is neither (as some have contended). The arguments go through just as well without taking a firm stand on the issue, though it is heuristically helpful to link the notion to consciousness. But then the talk of consciousness is playing no integral role in the argument.
This is good in a way, because we wouldn’t want those arguments to depend on a dubious definition of consciousness (as devotees of higher-order thought theories have suggested). We can detach the two questions. This is also helpful if we wish to extend the argument to unconscious mental states: for it seems questionable that unconscious mental states are perfectly tractable, or more tractable than consciousness. The underlying problem concerns sensations, conscious or unconscious; that’s what the paper is really about (it could have been called “Echolocation Sensations and the Mind-Body Problem”). It is what it’s likeness that is really intractable for materialists—what Nagel later calls subjectivity. For this property can only be grasped by people who share the property, unlike physical properties. But then talk of consciousness drops out—and so does consciousness itself if we reject the definition Nagel offers. The whole issue can be formulated without even using the word “consciousness” or any synonym. The crux of the problem, according to Nagel, is the epistemic subjectivity (self-centeredness) of the property of what it’s likeness; what this has to do with consciousness can be left open. You can be a complete skeptic about consciousness, an eliminativist, and still accept Nagel’s argument; or you could be a higher-order thought theorist; or a self-ascriptive speech act theorist; or an electrical brain wave theorist—you would still have to contend with the point that there is something it is like to be a sensation, and hence a self-centeredness to mental concepts not matched by physical concepts. To put it bluntly, the argument really has nothing to do with consciousness; the concept is not essential to it. 
The first sentence of the paper could have been “Sentience is what makes the mind-body problem really intractable”, given that the argument will concern perceptual sensations—though we would need to define “sentience” so as to include pre- or sub-conscious perceptions if they also can only be grasped by sharing the perceptions of the other.[2]

Why did contemporary theorists avoid talk of consciousness in their defenses of materialism? Is it because they sensed trouble lurking there, as Nagel suggests, or might it be that they found it hard to give a precise meaning to the word (is being awake that much of a problem?)? Perhaps they thought it was best understood as self-ascription of psychological predicates, in which case they felt they already had that under their belt. Maybe there was no evasiveness, just confidence that they had it tamed. The problems concerned the actual features of mental states (e.g., intentionality) not whether people were conscious of them. So, I doubt intellectual dishonesty was at the root of their neglect. They might indeed have been mightily impressed with Nagel’s basic argument, but just doubtful about what it had to do with consciousness as they understood it (“self-perception” or some such). The concept of consciousness plays no real role in Nagel’s argument and invites irrelevant objections (“I don’t agree with your definition of consciousness”). What is true is that what it’s likeness is linguistically awkward, perhaps conceptually awkward, so that we have no ready term to use when mounting arguments about the denoted property. Thus, we latch onto “consciousness” as a convenient shorthand; but the dictionary provides no such definition[3] of the term and it is vulnerable to attack for obscurity and indeterminacy of extension (do memories have it?). The whole topic is a linguistic mess, frankly. It therefore thrives on buzzwords and scare quotes. The issues are real but the language is sloppy and inadequate.

I have talked about the first two lines of Nagel’s classic paper, but what about the title? It too gives a misleading impression, though it is obviously catchy. Many people seem to think it is about the problem of other minds; it isn’t. It’s about the mind-body problem and the prospects for physical reduction. But we would certainly be within our rights to observe that it is not about what it’s like to be a bat, or for any animal to be the animal it is; it is about what certain sensations are like, and are. We know what it’s like to be a bat in many respects, but that isn’t the question; the question is whether we can grasp the concept of echolocation experiences given that we can’t echolocate. It isn’t about understanding other animals—a worthwhile endeavor—it’s about our concepts of experience and their dependence on our own specific “point of view”. I’m not criticizing Nagel for giving his paper the title he did—a title is just a title—but careless readers might easily get the wrong idea. Calling it “Can We Know What Echolocation Experiences Are Like?” would be more accurate, if less memorable.

Have we been barking up the wrong tree in pursuing “consciousness studies”? It might be said that the points I have raised are merely verbal; we all know what we mean and our talk of consciousness can always be paraphrased away in terms of subjectivity and what it’s likeness. There is much truth in this sanguine assessment, but it is alarming that we have been carried away in recent years by sloppy language and lazy thinking (I include myself). The word “consciousness” has become a sexy meme, a type of profundity-signaling, a form of advertisement. Nagel’s words had more power than he knew.[4]

[1] Suppose I am conscious that I am late for a meeting: is there something it is like to be in this mental state that is common and peculiar to all instances of it? Won’t different memories and emotions go through the minds of different people in the same state? And is this what it is to be conscious you are late? How is what it’s like related to the propositional content of such a conscious state? The notion is infuriatingly vague and elusive when applied generally.

[2] Perhaps it would be better to say that there are several attributes of the mind that create difficult, even intractable, mind-body problems, drawn from the list I gave; consciousness is one of them. There are many mind-body problems not a single problem focused on consciousness (whatever we choose to mean by that word). Nagel put one neglected problem on the map (epistemic egocentricity, to give it a name), but then the map became too dominated by this problem, now grandly called the problem of consciousness.

[3] The OED gives “aware of and responding to one’s environment” for “conscious”; no mention of what it’s likeness. Clearly, there is no synonymy here.

[4] One often hears people talking about the wonderful new subject of “human consciousness”, as if psychology and philosophy have at last come to grips with what matters to all of us—our own lives and experiences. This is obviously completely wrong. It isn’t all warm and fuzzy and humanistic; it’s about whether animal experiences can be reduced to brain processes—a very old and quite dusty academic subject. The mind-body problem isn’t about getting yourself off the sofa to clean up the kitchen when you really don’t feel like it.


Conscious, Unconscious, and What It’s Like

What is the relationship between consciousness and what it’s like?  There are two questions: is the latter a necessary condition of the former, and is it sufficient? We could add a third question, which is really the central question: what is the real essence of consciousness—its intrinsic nature, its mode of being, what it is? It is sometimes said that what it’s like is not a necessary condition, since abstract thoughts don’t have it, or beliefs, or theoretical understanding; yet they are conscious. Less commonly, it has been argued that it is not sufficient, since unconscious mental states have it. The intuition is that what it’s likeness doesn’t logically entail consciousness. Here we must be careful about the word “conscious”: if it just means “feeling”, then surely what it’s likeness entails consciousness, since consciousness is a matter of how something feels. But if we mean something more like “being conscious of” or “consciously known”, then intuitions waver: why should we necessarily be conscious of or know about our feelings? Couldn’t there be something it is like to be in a certain unconscious state and yet the person not be conscious of this, not know about it? Sensations don’t logically require such higher-order states in order to exist. There seems no contradiction in the idea that unconscious mental states have subjectivity (“likeness”) but are not the objects of any conscious acts of knowing. Take the case of seeing a bunch of eight dots but not counting them: couldn’t it be true that the sensation is distinctively of eight dots and yet the perceiver doesn’t know this and is not consciously aware of it? The concepts seem to allow this degree of daylight between them: we can separate the concept of what it’s likeness from the concept of being conscious of—the former not entailing the latter. Also: couldn’t you be in a general state of depression and not be conscious of it? 
Isn’t the mind a two-level system—primitive awareness of things and a more sophisticated awareness of that awareness? Can’t there be subjectivity without consciousness? After all, there can be degrees of consciousness but not of subjectivity. An animal may be more or less conscious, but it can’t be more or less a being there is something it’s like to be. I can be in a state of being more or less conscious of what is going on around me (or within me), but it makes no sense to say that one sensation has more what it’s likeness in it than another. The terms have a different “logic”.

These are treacherous waters; it is easy to fall into conceptual confusion. I want to advance the discussion by making a (relatively) clear point. Suppose I want to know the nature of a bat’s unconscious mental states as it echolocates; I already know I can’t know the nature of its conscious echolocation states. Can I know that? Reflection shows that I cannot know it in the unconscious case either: for the unconscious mind is not reducible to purely physical states of the bat’s brain, which I can know. Yet I might be reluctant to describe the bat’s perceptual unconscious using the phrase “what it’s like”, since I don’t think unconscious mental states are like anything. Then why do I think I can’t know the bat’s unconscious mind in this case? Because if it were conscious, I wouldn’t be able to know it. It is of a type such that conscious expressions of it are not graspable by me. I am not supposing that it has what it’s likeness built into it when unconscious; I am supposing that if it were to become conscious, I would be unable to grasp its nature—as I am of the bat’s conscious echolocation experiences. I therefore think that if my inability so to grasp is an indication of irreducibility then the unconscious states are as irreducible as the conscious states. I think they are essentially of the same nature, with one conscious and the other unconscious. Thus, the bat’s perceptual unconscious is as much an obstacle to physicalism as its perceptual consciousness, so far as my reductive abilities are concerned—because I understand its unconscious via my understanding of its consciousness, which in this case is limited.

Perhaps the point will be clearer if I talk about Freud. You are told that the child has an Oedipus complex: he sexually desires his mother, but this desire is unconscious and always will be. How do you understand what is being said? On the face of it, unconscious desires are strange and incomprehensible things—you know what conscious desire is, but what is the unconscious kind? The answer is obvious: you know what the unconscious desire would be if it were conscious. You think, “The unconscious desire is just like the conscious expression of it (except for being unconscious), and I know what that is”. You use your knowledge of consciousness to form an idea of the unconscious (but not as conscious). But this means that the limitations of your knowledge of consciousness carry over to your knowledge of the unconscious—that is, you can’t know more about the contents of the unconscious than you can about the contents of consciousness. If a psychoanalyst starts talking about the unconscious of Vulcans, you will get lost when she comes to describing unconscious mental states completely alien to you, precisely because of your own restricted consciousness. You are epistemically limited by your own (contingent) phenomenology in both cases—the conscious minds of others and their unconscious minds. This is a roundabout way of saying that you have a subjective (self-centered) conception of the unconscious. But that means that the unconscious is just as recalcitrant to physicalism as the conscious, neither more nor less. In other words, we don’t have to attribute what it’s likeness to the unconscious in order to derive the result that the unconscious is as problematic as the conscious. If the latter is a mystery, then so is the former. If the unconscious were intrinsically endowed with what it’s likeness, then certainly it would pose the same explanatory problems as consciousness; but it doesn’t have to be so endowed in order for the same conclusion to follow. 
All we need is the assumption that we understand the unconscious via the conscious, by considering what the unconscious state would be like if it were conscious. And really, we have no other way—without this we would draw a complete blank. To repeat: I know what a particular unconscious state is by knowing what it would be like if it were conscious—it would be like this. If I couldn’t form this thought, I wouldn’t know what Freud was even talking about.

This is quite a strong result, because it tells us that consciousness is not uniquely problematic among the denizens of the mind. The unconscious shares the enigmatic character of consciousness as conceptually perspective-dependent: only someone mentally similar to the other can form the requisite concepts. It also underscores the point that we are conceptually impoverished with respect to the unconscious: we really have no conception of its intrinsic nature save by invoking our grasp of consciousness (ironically enough). So, our grasp of it is doubly limited: it is parochial and it is extrinsic (indirect, superficial). In the case of the bat, we can’t grasp the experiential type and we can’t grasp what kind of thing an unconscious mental state of that type (or any type) is. We are limited in the former way with respect to conscious experiences, but at least we have knowledge of the nature of consciousness itself (we can feel it inside us). We have a single mystery for the bat’s conscious experience but a double mystery for its unconscious experience (if we allow this word for unconscious perceptual states). Realism about the unconscious implies mystery as much as realism about the conscious does, indeed even more so. This stands in contrast to anti-realist positions about the unconscious—that it is simply the physical brain, or mere dispositions to conscious states, or just a useful fiction. In particular, we don’t need to claim that the unconscious has what it’s likeness in order to argue that it has the problems of what it’s likeness. The epistemology is much the same either way. We are not cognitively better off trying to understand the unconscious than trying to understand the conscious; in fact, we are worse off.[1]

[1] This paper goes with my “An Even Harder Problem”. I am aware that these are intricate and taxing questions that strain comprehension.
