Skepticism and Self-Knowledge

From the point of view of skepticism, self-knowledge is an anomaly. How is it that facts about a person’s psychology can be known with certainty when everything else is uncertain? It’s not as if first-person ascriptions of mental states are analytic or a priori; they report the same facts that give rise to the skeptical problem of other minds. We might think of this as the puzzle of first-person infallibility: how come we can’t be wrong about our own inner lives when we can be wrong about everything else? Of course, not everyone has accepted that we are thus infallible, even about whether we are in pain, but I think these critics fail to appreciate the force of the anti-skeptical position in this area; in any case, I will not attempt here to parry these assaults on introspective certainty. Instead I will offer a mixed position that accepts the standard claim of infallibility but finds one place at which the infallibility breaks down—so that such beliefs are not completely infallible, though they are largely so. General skepticism does not then run into an outright counterexample: all (empirical) knowledge is fallible one way or another.

So suppose I am in considerable pain as a result of stubbing my toe. I judge this to be so, uttering the words, “That hurts!” Could I be wrong? There are two places at which possible error might be supposed to creep in: with respect to the type of sensation I am feeling, and with respect to who has that sensation. Could I really be feeling another sensation altogether and misidentify it as pain—say, a pleasant sensation or a sensation of red? I could of course use the wrong word to describe my sensation (perhaps my English is shaky or I have some sort of language pathology), but that doesn’t mean that I misapply the concept of pain. I would agree with those who insist that no such thing is possible: I couldn’t be feeling a sensation of pleasure or a sensation of red and mistakenly suppose it to be a sensation of pain. I can’t misidentify my sensations in this way: I know with certainty what kind of sensation I am having—I feel that sensation directly and I apply the correct concept to it. What about the identity of the person feeling pain—could I misidentify him? Could I know that somebody is in pain and mistakenly take that person to be me (when actually it’s the person sitting opposite me)? That again looks like a rank impossibility: I can’t judge that someone is in pain and wrongly suppose that I am that person. If I judge that I am in pain, I must be right about the identity of the sufferer: I know for sure that it is I who is in pain. Thus I never need to be informed that I am in pain once I have judged that someone is: I know it’s me just by knowing my own pain, unlike knowing (say) that someone in this room has won the lottery and then being informed that it’s me. The pain presents itself as mine when it is mine. It is as if the pain has a little sticker on it saying, “This belongs to you”. Taking these points together, then, there seems no room for error in the self-ascription of pain: both possible points at which error might occur are blocked, so I know infallibly that I am in pain. I could never believe that I am in pain when I am really seeing red, or that I am in pain when it is really the person opposite me. By contrast, I could believe that my house is on fire when really it is merely well illuminated, or when it’s the next-door neighbor’s house that is in flames, not mine.

So far, then, the case of self-knowledge is not parallel to the case of knowledge of the external world (or other minds). But there is one point of potential fallibility that we have not explored—the time of the pain. For I do not merely judge that I am in pain at some time but at this time—I judge that I am in pain now. Could I be wrong about that? Could I misidentify the time of the pain? In particular, could a past pain be mistakenly attributed to the present time? Philosophical astronomers are fond of pointing out that the light reaching us from distant galaxies might have originated from a source that is now very different from what it was when the light departed from it, and might not now even exist. We have the impression that the object of sight is now as it seems to be, but that may not be so—and it is often not so. If we judge that that object is now thus and so, we make a mistake; rather, it was thus and so. There is a time lag between when the light started out and when it reaches our eye, and this time lag can produce erroneous temporal belief. For instance, what looks like a contemporaneous stellar explosion could have occurred long ago. Taking a cue from this case, let us imagine a mind that is spread out across the universe at a scale of billions of miles: the brain that serves this mind is distributed as widely as galaxies are. In one part of it sensations of pain are processed, while in another part beliefs are processed, specifically beliefs about pain. Suppose it takes many years for signals from one part of this distributed brain to be received by the other part. Then the person whose brain this is may judge that he is in pain now when in fact the pain ceased long ago, say two centuries ago. He is quite correct to suppose that pain occurred in him but he gets the time of its occurrence wrong (we may suppose that he doesn’t know about the time lag). He thinks he is in pain now but in fact he isn’t. He makes temporal errors in his self-ascriptions of pain (as well as beliefs, emotions, etc.).

Now the skeptic sees his opportunity: couldn’t the human brain be subject to a similar time lag? In fact it is: it takes time for nerve signals to travel from the pain centers of the brain to its cognitive centers (located in the frontal lobes). It may be that by the time you judge that you are in pain the pain has receded, though no doubt you are usually still in pain at the time that you judge you are. The skeptic will then conjure the possibility, based in neurophysiological fact, that in any case of self-ascription of pain the pain may have ceased by the time you judge it to exist. You judge that it exists now, simultaneously with the judgment, whereas in fact it existed at a previous time, possibly a few milliseconds earlier or even months ago (maybe your brain is extremely slow). Perhaps you have been hooked up by a super scientist so that there is a time lag of five minutes between having a mental state and judging that you have it, so that all your self-ascriptions are false with respect to time (assuming the mental states don’t persist till the moment you make the judgment). Thus self-ascriptions of mental states are fallible, we might now be systematically making mistakes about the time of our mental states, and skepticism extends to self-knowledge (at least in this one respect). Nothing is therefore completely immune from skepticism; error is always possible (with regard to the empirical world anyway).
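The temporal structure of the scenario is simple enough to set out explicitly. Here is a toy model (my own illustration, not anything in the original text; the five-minute lag is borrowed from the super-scientist case above, and all names are hypothetical): a judgment made at time t reports the state signaled at t minus the lag, so the judgment “I am in pain now” can be true of that earlier time while false of the present.

```python
# Toy model of the time-lag scenario: the judging center at time t receives
# the signal emitted at t - lag, so it judges "I am in pain now" iff the
# pain was occurring then -- not necessarily now.

from dataclasses import dataclass

@dataclass
class PainEpisode:
    start: float  # time (seconds) at which the pain begins
    end: float    # time (seconds) at which the pain ceases

def pain_now(t: float, episode: PainEpisode) -> bool:
    """Is the pain actually occurring at time t?"""
    return episode.start <= t <= episode.end

def judged_pain_now(t: float, episode: PainEpisode, lag: float) -> bool:
    """What the subject judges at time t, given the transmission lag."""
    return pain_now(t - lag, episode)

# The super-scientist case: a five-minute (300 s) lag between having a
# mental state and judging that one has it.
episode = PainEpisode(start=0.0, end=1.0)      # a one-second pain starting at t = 0
t = 300.5
print(judged_pain_now(t, episode, lag=300.0))  # True: he judges "I am in pain now"
print(pain_now(t, episode))                    # False: the pain ceased long since
```

The judgment is right that a pain occurred and right about whose pain it is; the only error is in the temporal indexical “now”, which is exactly the gap the skeptic exploits.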

The reason that error is always possible is that fact and belief do not necessarily coincide. It is thus possible to believe that something is happening now that is not happening at that time: I might think that a certain lecture is happening now but in fact it happened an hour ago; likewise I can think that a pain is happening now but in fact it happened a second ago. There is the fact on the one hand and there is the judgment about it on the other. This division is as true for mental states and self-ascriptions of them as it is for anything else, so it is not surprising that error is possible here too; what is surprising is that it is confined to the temporal content of such judgments, having no parallel with respect to the identity of the state ascribed and who it is ascribed to. I can’t be wrong about what state I am in and about it being I that is in it—but I can be wrong about when the state occurs. I know for sure there has been pain in my life, but exactly when is subject to skeptical doubt. Maybe all my pain happened years ago and I am only now getting round to self-ascribing it: it just takes that long for the fact to make itself known to my introspective faculty. And the same is true of it seeming to me that I am in pain: this too could precede the act of judging that it so seems to me. The neural correlates of pain actually do precede the neural correlates of judging that you are in pain, granted that the relevant signals have to travel across the cortex in a finite time, so there is always a bit of a time lag between fact and report.[1] The skeptic simply amplifies this point to construct fanciful scenarios that dramatize the gap between fact and judgment. So we should add to the usual brain-in-a-vat case what might be called the brain-in-a-time-lag case.

 

Colin

[1] It is of course equally true that there is a lag between the time a stimulus leaves an external object and the time we have a perception of that object, which is particularly noticeable in the case of sound. This implies the possibility of error about the time of external events, as when you think that thunder (the physical phenomenon) occurs at the same time you hear it. Everything outside us thus happens slightly earlier than when we perceive it and make the corresponding judgment. It turns out that the same is true of events in one’s mind and judgments about them, which opens the case up to skepticism.

Roots of Skepticism

The roots of skepticism run deep; they are not merely the invention of insecure philosophers. Perhaps the most primitive form of skepticism begins from a simple thought: belief is not the same as fact. Beliefs are in the mind; facts are in the world. Thus beliefs and facts can vary independently of each other: unknown facts, non-factual beliefs. Beliefs don’t necessarily hook onto facts, since they can be false; and facts sometimes decline to reveal themselves. So there is a gap between belief and fact, and in this gap the skeptic sees the possibility of error and hence lack of knowledge. It would be different if belief and fact were identical: then there would be no gap and fact would follow from belief. Knowledge would be guaranteed. But as it is, there is no such convergence of belief and fact, so skepticism is possible. Call this “non-convergence”: we can then say that one root of skepticism is non-convergence; and it is clearly a basic property of belief in relation to fact. Skeptical scenarios such as the brain in a vat or the evil demon are predicated upon it; it is what allows for these possibilities (as an identity theory of belief and fact would not). It is as if knowledge craves identity with fact, but cannot achieve this identity, so skepticism gains purchase. The skeptic gazes out at the world and reflects that his state of mind is not identical to the fact he takes himself to know, so the assumption of knowing that fact is not well founded.[1] Notions like acquaintance have been invoked to bridge the gap, but acquaintance is a relation between subject and object and hence presupposes non-identity; it cannot secure what knowledge apparently requires. So at least the skeptic suggests. Only identity would secure knowledge (the real McCoy), but that is an absurd theory of belief: belief and fact are distinct existences.

But there is another property of belief that is equally potent in triggering skeptical thoughts. This property is best appreciated in the wider context of the mind’s representational powers—what we might call the creativity of the mind. The mind is not a passive reflector of what comes in through the senses, faithfully mirroring what reality offers up. It creates mental representations: perceptions, dreams, illusions, hallucinations, imaginary objects, counterfactuals, nonsensical sentences, and erroneous beliefs. We might even say that it creates worlds, as with the brain in a vat or (more familiarly) dreams. It has the active power to generate representations that don’t correspond to facts—visual illusions being the most striking case. All of this has been fodder for skepticism: how can we rule out the possibility that the mind is always doing this? What is not often noted is that these skeptical thoughts trade on the fact that the mind has a certain sort of power or capacity: it is able to generate representations that fail to align with facts. Consider a mind that lacks such a power: it only stirs itself when facts heave into view. There are no dreams, no illusions, no imagination, and no false belief. Such a mind would not naturally lend itself to skeptical doubts: it would never say, “What if it’s all a dream?” or, “Maybe these perceptions are actually illusions”—for there are no such possibilities for this type of mind. This mind is not susceptible to skeptical worries simply because it never confuses fiction with fact or suffers a perceptual illusion or dreams a dream: it simply records what is actually present. It lacks the creative power necessary to produce the kinds of skeptical possibilities the skeptic is so impressed by. The human mind, by contrast, has this power in abundance; and it is this power that makes skepticism possible and natural for us. It is because the human mind is so creatively powerful that skepticism is so pressing for us. Skepticism arises from the strength of the mind not from its weakness. So we can add this property to the property of non-convergence as a source of skeptical sentiment. And clearly it too is a fundamental feature of the mind, as we know it to be. Skepticism thus has deep roots in the defining properties of the human mind (though not perhaps in all minds): we are natural skeptics because we know that our minds are such as to make skepticism possible.

I just described a possible mind in which skepticism can find no cause in creativity, which included the idea that such a mind has no false beliefs (possibly as a matter of natural law). But on reflection this thought experiment is not so obviously possible. What if I had claimed to describe a possible language in which no false sentences can be formed? Doesn’t the creativity of language guarantee that false sentences can be formed (they are well-formed and follow from the rules of grammar)? Similarly, if a person is capable of beliefs, isn’t she capable of having false beliefs? She just has to combine concepts in such a way that a false proposition results. The possibility of error is built into the ability to believe: to have beliefs about the empirical world is to have a mental attribute that makes error possible.[2] That is precisely why we have so many false beliefs. But if error is possible, then knowledge is not possible, according to the skeptic. So what makes belief possible is what makes knowledge impossible! Belief is only possible if error is possible (by virtue of creativity), but if error is possible then knowledge is not possible (by the skeptical argument): so belief can never amount to knowledge by its very nature. A pre-condition for the existence of belief is the possibility of error, but the possibility of error rules out knowledge, so belief by necessity rules out knowledge. That gives skepticism very deep roots indeed; it follows from the essential nature of belief, as a manifestation of human creativity. The creative mind is necessarily haunted by the specter of skepticism, and belief is necessarily creative.[3]

Now add to this the point that belief is a claim to know: when you believe a proposition you take yourself to know it. So thinking that you know is possible only if you can’t know! You can only think you know that p if you don’t know that p. The condition of thinking you know rules out the possibility of knowing: for thinking you know is an exercise of creativity, which implies the possibility of error, which implies the impossibility of knowledge. It is in the very nature of thinking you know that you don’t know (granted that the skeptic is right about knowledge requiring the impossibility of error). Thus belief can never amount to knowledge as a matter of its very constitution: it must contain the possibility of error, but then the skeptic seizes on that to question all claims to knowledge. I now know a priori that I cannot know, because I know that my beliefs arise from a creative faculty that permits the possibility of error, which rules out knowledge. I know that false beliefs are empirically common, and that I probably harbor many such beliefs; but I also know that there is no remedy for this situation—my belief system is so constructed that false beliefs can be generated by it. The system is a combinatorial generative faculty, like the language faculty, so the possibility of misrepresentation is built into it from the start. If (per impossibile) a belief system could restrict its outputs to those that correspond to facts, then this system would not be susceptible to the usual skeptical arguments; but that is a fantasy belief system, given that beliefs are combinatorial productions. It is like supposing that a dream system could be restricted to dreaming only what exists in reality.

Let’s assume the skeptical argument is correct: the possibility of error rules out knowledge. Then we can derive what deserves to be called a paradox of knowledge, namely that the conditions for the existence of knowledge make knowledge impossible. The conditions include the formation of beliefs (claims to know), but those conditions essentially involve the possibility of error, which is incompatible with knowledge. Knowledge is thus possible only if it is not possible! The skeptic has shown (granted his argument) not just that knowledge is impossible but also that it is in a certain way contradictory, for it requires an absence of error that is incompatible with its nature as a representational system. To know that p is for there to be no possibility of error regarding p, but that would imply that a necessary condition for being a belief system is not satisfied, which rules out the possibility of knowledge. Once a belief system exists the possibility of error exists, but that precludes those beliefs (claims to knowledge) from counting as knowledge. Skepticism is built into the very nature of belief as an instance of representational creativity. Creativity precludes knowledge (granting the connection between knowing and the impossibility of error). We could even say that concepts make knowledge impossible, because concepts are the things that make false propositions possible (just as words make false sentences possible). Concepts lead to the possibility of false beliefs, but that possibility rules out knowledge (as we are supposing). Suppose, for example, that I believe the table in front of me to be brown: I can reflect that this belief might be false, since my belief system might have constructed the wrong combination of concepts; but then I can’t be said to have knowledge that the table is brown. The point is that concepts in their nature can combine to create false beliefs, so skepticism has roots in the nature of concepts. Knowledge is therefore not possible for creatures that think in concepts—but what other way is there to think? We feel the pull of skepticism because we see that it has its roots in the nature of thought itself. The paradox of knowledge is that what allows us to seek knowledge prevents us from possessing it—the structure of our cognitive faculty. The only way to vanquish skepticism is to do away with our cognitive faculty—but then we are left with nothing to know with.

The situation can be compared with perception. Our visual system purports to give us direct access to material objects, but according to the sense datum theorist it does not; rather, it gives us direct access only to our own sense data. So we don’t really perceive what we naively think we perceive. But the point is not just that we happen not to perceive material objects directly, but rather that it’s built into the nature of perception that we don’t: for perception yields experiences and these are what is directly perceived (allegedly). The roots of the sense datum theory are thus claimed to lie in the very nature of sensory experience and are not dispensable—no one could see the world directly. Yet our experience leads us to believe that naïve realism is true. So there is something paradoxical about sensory experience: on the one hand, it presents itself as the direct perception of external objects; but on the other, its very structure contradicts that impression. It purports to do what its own nature precludes it from doing. So for the sense datum theorist naïve realism is not just a false theory of perception but also necessarily false, and reveals perception as paradoxical. Experience represents itself as doing what its own nature makes impossible. It endorses naïve realism while also being incompatible with it. Similarly, our belief system represents itself as containing knowledge, but in its very nature it makes knowledge impossible. Experience makes us think we see the world directly, but at the same time its nature rules that out (according to the sense datum theorist); belief makes us think we have knowledge, but at the same time its nature rules that out (according to the skeptic). So the problems alleged by these doctrines go deep into the nature of the mind: the mind is telling us we can have states (direct perception, knowledge) that we cannot in fact have, given the way the mind is constituted. The nature of mind excludes its own assertions about itself.

I am not saying that skepticism is true, or that the sense datum theory is true; I am merely pointing out that both doctrines, rightly understood, go deeper than is generally supposed. First, they are necessity claims; second, they detect paradox in our ordinary concepts of knowledge and perception. In the case of knowledge, our concept requires something that is manifestly impossible, namely belief that is incapable of combining concepts in ways that fail to match reality. For once we accept that the mind has the power to create false beliefs we are well on the way to skepticism. This (combined with non-convergence) is what fundamentally leads to skepticism: belief is just not cut out to constitute knowledge.[4]

 

Colin

[1] I discuss skepticism in relation to knowledge in this paper, but it would also be possible to frame the discussion using the notion of certainty or justified belief or other epistemic notions. Questions about whether knowledge really requires ruling out the possibility of error will not detain me, since I am concerned with what leads to skepticism in general; focusing on knowledge simplifies the discussion.

[2] I say “about the empirical world” so as to make a possible exception for introspective beliefs. Here it may be said that the combinatory power of concepts does not lead to the possibility of error, since I can know infallibly that I am in pain and yet my introspective belief is composed of a structured proposition. That would not contradict the claim that beliefs about the material world must be capable of error, but it might seem to limit the scope of the error thesis. This is a difficult question that I will not pursue here, but let me observe that there is a real tension between the idea of introspective certainty and acceptance of the combinatorial nature of belief: for how could a belief be incapable of falsity if its content is composed of combinable concepts? Shouldn’t skepticism carry over to so-called introspective knowledge? Not for nothing did Wittgenstein doubt that such statements are really propositional.

[3] It should be clear that by “creative” I don’t mean the kind of creativity we find in the arts and sciences but simply the kind of creativity inherent in the language faculty—what is sometimes called “productivity”.

[4] We might reasonably conclude that (some) animal knowledge is more immune to skepticism than human knowledge, on the assumption that animal knowledge is not always conceptual or belief-based. Certainly animals are not as troubled by skepticism as we are (though capable of error).

Evolution and the Self

We must assume that the self evolved, given that it exists and is not a social construct. That means it arose by mutation and natural selection, serving some biological purpose. And not just in humans but also across the animal kingdom: all those animal selves are biological adaptations, like limbs and brains and senses. The reference of “I” is a biological entity—its characteristics are biologically adaptive. An organism without a self is at a reproductive disadvantage compared to one with a self (other things being equal). The self does something valuable, survival-enhancing. One of its characteristics is unity (or at least felt unity), so this unity must have a biological function—it must make the organism better at propagating its genes (having offspring). There must be a “gene for self unity”, as there is a gene for vision or language or sexual desire (many such genes presumably). The architecture of the self must be connected to its functionality, just as in the case of the body. This doesn’t mean that the self is reducible to the genes or anything else of a material nature; it just means that the self is biologically functional.

But what job does it do—how does it aid survival? We are now accustomed to the idea of the modularity of mind, according to which the mind consists of an ensemble of separate faculties, each with its own inner structure and function. This is the orthodox ontology in the study of the mind: each module is an evolved organ, to some degree independent of the others, though interacting with them—a suite of innate capacities much like the organs of the body. This picture has fuelled some skepticism about the ontological status of the self: what need is there for a self in addition to the several components that compose the mind? Is it just a pre-scientific holdover from common sense, dispensable from the science of the human organism? What job could the self perform that is not performed by the modules we already have on board? Why not go Humean about the self and regard it as at best a misleading way to talk about psychological modules? There are the modules and there is the set of them, with nothing else needed. But this position flies in the face of a stubborn conviction that the self is real—we feel it inside us, we refer to it with “I”, and ethics is built around it. We should at least inquire if there is some adaptive purpose that it plausibly serves.

Actually I think the answer to this question is not far to seek: the self serves an integrative function. The self imposes unity on what would otherwise be disorganized plurality. The modules by themselves are a mere motley, separate departments or agencies of the mind, with distinct (and sometimes competing) voices. They need to be brought harmoniously together: but interaction is not enough—we need unification. The self is a superordinate entity that creates this unity. Let me put the point in terms of the brain: each module corresponds to a brain region (possibly widely distributed) that interacts with other brain regions; but in addition to these there is a further brain region corresponding to the unifying self—the self-center, as we may call it. It has its own identity and is not just the sum of the several brain regions corresponding to the mental modules. Just so, the self is a real entity apart from the modules it integrates, but it exists because of its integrative function: it evolved as a distinct “organ” so as to perform the job of unifying the separate modules that make up what we call the mind. For example, the visual and speech centers are connected—this is why we can report on what we see—and the self is the entity responsible for creating that connection. Hence the perceived unity of the self: its job is to create order from chaos—to unite the several voices belonging to the modules. Without it the mind would be a cacophony of voices, an unruly choir, instead of the unified entity we know it to be. If we imagine the mind evolving one module at a time, starting with a single module, we can see that there is a problem of integration to be solved: the self is the solution to that problem, and in due course it evolved. The self is the means by which the senses (etc.) come together. Thus it is a mental faculty in its own right—a distinct component of the organism. The human organism has a heart, kidneys, a brain, vision, touch, thought—and a self. These are all adaptations occasioned by the usual biological pressures. The self has an adaptive biology just like the heart, which is not to say that it doesn’t differ from the heart in important respects (there is no need to say that both are “physical”). It is not epiphenomenal or fictitious or a mere by-product of something else: it has its own biological rationale. Biology needs to expand to include it (it is one aspect of the biological adaptation we call the mind). We may assume that the self made its entrance long ago, well before humans ever came on the scene; for its unifying powers were needed just as soon as organisms developed multiple modules that needed integration. Of course, selves became more sophisticated over time, as the mind became more thickly populated with faculties, but the basic element is presumably very ancient, no doubt evolving in the seas.[1] Then organisms could sense their own unity instead of being just a collection of autonomous modules. Selves piggybacked on modules, but they are something over and above modules; they certainly did not evolve before modules, since modules are their raison d’être. They are how modules found their groove (the “groove theory” of the self).

It may be objected that there has to be something wrong with this picture, since modules contain mental states and mental states require subjects. If there is a pain module, it needs a subject for the pains to occur in—there is no subject-less pain. Experiences need bearers. But then the self could not have evolved subsequent to the modules and their contents: it was already present in the mere fact of experience. The point should be conceded, but it doesn’t refute the scheme I have presented, though it does call for a distinction. Let us agree that the modules contain states that logically require bearers; that doesn’t imply that the self as it now exists doesn’t have an integrative function. The correct position is that the self has two basic ingredients: the primitive ingredient included in all conscious states (notice that this doesn’t imply a single self for each organism), and the higher-order integrative self that constitutes the reference of “I”. The structure of the self is thus two-tiered: the first tier as ancient as the first experiences to evolve, the second coeval with the onset of module integration. Your visual experience now has a subject-place logically built into it, but it also figures in the integrative actions of the self that unites it with other mental elements. We might call these the “subject self” and the “integrative self”. Both are evolved features of organisms, and both are presumably very old. Evolution builds on pre-existing structures, exploiting and repurposing them, and the self is no exception; it is a kind of tinkered-together construction, made of primitive consciousness and higher-order integration. First, we have individual modules and the primitive subject; second, we have collections of modules and their unification into a single self. As is typical in evolution, there are no clear lines or unprecedented breakthroughs; everything is gradual accumulation and happy accident. The biological self evolved over eons, proceeding from earlier traits and driven by adventitious demands.

Animal selves are not all alike: it depends on the integrative needs of the organism in question. The bat must integrate its sonar experiences with its sense of touch, and both with its cognitive capacities. Thus we don’t know the nature of bat experiences and we don’t know what it’s like to be a bat self, since our selves don’t perform this kind of integration. Other animals don’t know what it’s like to be a human self, since our selves perform linguistic integration and theirs don’t. Integrative acts are common to different species, but not the items integrated. We know what module integration in general is like, but not the specific character of the integration an experientially alien species performs. What we would find really difficult to comprehend is a mind that lacked integration altogether—a purely modularized mind. Our minds are unified by our selves, but some possible minds may not be so unified; and this fragmentation is alien to us. We don’t know what it’s like to be a fragmented mind. Is it like having many selves or no self? Is there even a viable conception of self for such a mind? A new version of the problem of other minds would be this: How do I know that other minds are unified by a self as mine is? Maybe other people have fragmented modular minds with no unifying self to hold it all together (though their behavior may exhibit signs of unity). In some species there may be no single unitary self but several sub-selves (e.g. the octopus).

None of this is intended to solve the philosophical problem of the self, i.e. what the nature of the self is. The account is purely a theory of the biological function of the self—why it exists from an evolutionary point of view. It is easy to come to the conclusion that the self is biologically pointless—a remnant of the old notion of the soul perhaps—but the integration theory provides a rationale for the existence of the self as a product of evolution. How the integration works, what the nature of the self is, how it is realized in the brain—these are separate questions. But it may be helpful to start from a theory of what the self is designed to do, considered biologically. The existence of selves, in our species and others, is surely a biological fact, so we would expect there to be some biological function that selves serve. If consciousness helps us know the world, the conscious self helps us knit this knowledge together. It acts as a kind of counterweight to module plurality.[2]

 

[1] To some extent we are all piscine selves, originally designed to integrate information from a watery world, joined with fishy feelings. Simple selves (actually not so simple) peek out from behind aquatic eyes, the ancestors of our dry-land selves. The fish self is the prototype of us all.

[2] Notice that in a classic case of modular divergence like visual illusion it is the same self that entertains both mental representations—there are not two selves corresponding to the two representations. We don’t have one self that sees the lines as of unequal length in the Müller-Lyer illusion and another self that insists that the lines are equal; instead one self brings both representations together. I see the lines as unequal and I (the same thing) believe them to be equal. The modules pull apart, but the self holds them together. The self operates as a device of module convergence. Are there any psychopathologies in which modular divergence is experienced as a splitting of the self?

Ethics and the Self

The self is perhaps the most elusive subject in philosophy. It seems impossible to say what the self is. Doubts about its existence are perfectly understandable, if exaggerated. The self seems intensely real, but its nature remains opaque. This is why all the standard theories are wide of the mark: the body, the brain, a series of mental connections, a transcendental ego. We are convinced of its unity, but the basis for this unity is unclear. It seems bound up with consciousness, but it is not an object of conscious awareness (as Hume noted). The self is a conundrum locked in a mystery surrounded by an enigma. Yet it is what we care about most, the thing about which we are most anxious, the entity most dear to us. Our life centers around something we find baffling. Every time you go to the supermarket you are surrounded by a phalanx of these baffling beings, each focused intently on itself, peeping out from behind anxious eyes. We don’t even have adequate language for them, these secret unities: we give them proper names, we apply pronouns to them, we call them “persons” (or just “people”), we invent fancy terms for them (“selves”, “souls”, “subjects of consciousness”); but none of these labels really provides anything by way of illumination—they are just words for we-know-not-what. Selves are a magical mystery tour (John, Paul, George, and Ringo—who are you really?).

None of this uncertainty might matter much except for one thing: selves are central to ethics. Ethics is about the right way to treat selves (animal as well as human)—whatever selves may be. What we owe to each other we owe to other selves. Selves are what suffer or prosper; they are what we have duties towards; they are what we make contracts with. Yet they make no explicitly articulated appearance in standard ethical theories: no account of their nature underpins the prescriptions offered. This is most obvious in the case of utilitarianism. We are told to maximize utility, utility being a mental state: but it is a mental state of persons (selves, subjects, sentient beings)—what is maximized is the wellbeing of persons. There is no such thing as maximizing wellbeing in the absence of selves—like maximizing the amount of grain in the world. We are trying to make people happy. We are trying to create facts of which people are essential constituents. But we don’t really know what people are. So the theory rests on an epistemic abyss. If we knew what selves are, we might appreciate better the soundness of the utilitarian position—we might see why the happiness of persons is so important. Or conceivably we might come to doubt that importance. As things stand, however, we are asserting the ethical centrality of personal happiness without having much idea what it is that is happy or unhappy. If selves are particles of the divine spirit that fact might be morally relevant, while if they are actually non-existent fictions that might be relevant too (how can it matter whether persons are happy if there are no persons?). There is thus a lacuna at the heart of the utilitarian theory. That might be acceptable from a practical point of view—we are ignorant of many things yet we get on with life regardless—but theoretically it is far from satisfactory. Wouldn’t it be better if ethics were not predicated on an enigma?

Kant’s ethics is instructive because of the explicit reference to persons. We are told to have “respect for persons”—autonomous rational beings, supposedly. Persons create obligations; the principle of universalization quantifies over them. It is regarded as self-evident that persons are the source of our moral duties (though Kant has a problem with animal ethics). But elsewhere in his philosophy Kant tells us that selves are noumenal beings—their essence is unknown to us. So the categorical imperative applies to beings whose nature we do not and cannot know. We know they are rational and autonomous (allegedly), but we don’t know what grounds these traits. We are admonished to respect entities whose real nature escapes us. Wouldn’t it be better to have an ethics that rests on knowledge of its essential subject matter? Maybe knowledge of the noumenal nature of selves would make us see that these entities are of supreme moral worth—one would assume that this must be so. But we are not granted such knowledge, so our ethical convictions lack the grounding necessary to them. This is a dramatic statement of what is implicit in common sense: we have only a vague idea what a self is, our own or others’.[1] Our attitudes towards these elements of nature are solicitous, to be sure, but we don’t really know what grounds them, if anything. Maybe a better understanding of selves would convince us that we underestimate their value (that is particularly true in the case of animals), but we are not in a position to say. Also, moral skeptics might be kept at bay by a more penetrating understanding of the nature of selves: as things stand they can rightly complain that the basis of ethics rests on ignorance. The skeptic may protest, “Why do you say that selves matter so much?” and we have little to say in reply except to appeal to self-evidence. What if complete knowledge of the self were to demonstrate not only the validity of ethics but also the correctness of one ethical theory over others? What if knowledge of the noumenal self were to vindicate Kantian ethics as against utilitarian ethics? The trouble is that the ethicist is proceeding in profound ignorance; or worse, knowledge combined with ignorance. For we do know some of the attributes of the self—consciousness, rationality, will—but not its full nature; and the known attributes may bias us in a direction full knowledge would contraindicate.

We are familiar with the point that ethics depends on the persistence of the self over time: you can’t, for example, blame someone for what his earlier self did.[2] The metaphysics of the self is not irrelevant to the ethics of selves. But the point is more general: ethics cannot be independent of the nature of the self at a time either. If ethics were concerned with biological organisms as such, with no reference to psychological subjects, then it would have a clearly articulated subject matter and its prescriptions would be solidly grounded in knowable fact. But as things stand it is concerned with those mysterious entities called “persons” or “selves” or “subjects of consciousness”; and these are philosophically problematic. It’s a bit like applying ethics to mathematics without knowing what numbers are: doesn’t it matter whether numbers are abstract platonic entities or marks on paper or ideas in minds? But the self is even more elusive, to the point of being under suspicion of non-existence. A Humean about the self can hardly base her ethics on the principle of respect for selves! That would be the equivalent of a utilitarian who denies the existence of mental states. The problem of the self is the dirty little secret of ethics.[3]

 

Colin

[1] We have only indexical ideas of the self, not descriptive ideas: the self is what I am and what you are and what he is—it is this (pointing inwards). As Hume would say, we don’t have an “adequate” idea of the self, based either on experience or reason.

[2] Derek Parfit emphasized this point based on considerations about identity through time: see Reasons and Persons (1984).

[3] I don’t at all mean to assert that ethics is impossible without a resolution of the problem of the self; I just mean that it lacks solid theoretical foundations without a clearer conception of what the self is. That, at any rate, is a possibility we do well to take into account. My own feeling is that our rather glancing conception of animal and human selves leaves our ethical appreciation of selves seriously etiolated.

History and Mystery

How does the history of philosophy look to a mysterian? As follows: the history of philosophy is the history of our consciousness of mystery. Philosophy consists of a set of mysteries, possibly open-ended, and philosophers, as conscious beings, are aware of these mysteries—and aware of them as mysteries (no matter what they might say in professional moments). The philosophical state of mind is a confrontation with mystery felt as such. But this set may not be static; it may be that different mysteries are salient at different times. Given that philosophical preoccupations vary over time, the mysterian will expect to find a different set of mysteries dominating during particular periods. What is distinctive of the mysterian point of view is that it characterizes philosophical perplexity as an encounter with the mysterious—the baffling, incomprehensible, confusing, and elusive.[1] That is, philosophy is not grappling with problems that are experienced as solvable and open to available methods, but with problems experienced as peculiarly taxing and prone to rational disagreement. I won’t attempt to say more here about what this mystery consists in; my aim is rather to sketch the overall shape of (Western) philosophy as seen from this perspective. What were the mysteries that occupied thinkers at different periods of history, and is there any pattern to the succession of philosophical mysteries that gripped people over time? I am not attempting to defend the mysterian position, nor even to articulate it further, merely to describe (sketchily) the history of philosophy as seen from its perspective. I think this provides an illuminating way to think about philosophical history.

The pre-Socratics were concerned largely with mysteries of the physical world: what things are made of, whether reality is one or many, whether things change or remain the same. The empirical sciences didn’t exist at that time, so their suggestions were highly speculative (e.g. Greek atomism). To these thinkers the physical world presented many mysteries that could not be resolved by common sense or by any generally accepted method—the model of Euclidean geometry could not be applied. No doubt their intellectual attitude toward these questions resembled the attitude of later generations towards questions about the mind: physical nature will have seemed to them like a vast enigma. Their questions don’t strike us now as part of philosophy proper, though to them they may have seemed the way the mysteries of philosophy seem to us today. In any case the mysteries that concerned them pertained mainly to physical nature—to what they could see and touch. They wrestled with these questions as best they could, aware of their intractability.

Plato’s concerns were rather different: he was interested principally in mysteries of definition. What is knowledge? What is virtue? What is the just state? He was also concerned with reality and appearance, including the distinction between particulars and universals. His interests were centered on the human: human knowledge, human virtue, human nature, and human concepts. He found these questions exceptionally difficult to answer (or Socrates did), not part of existing human understanding. Socrates is forever asking people to define some concept or other and finding them irrationally overconfident; famously, he knows only that he does not know. Ignorance is standard; human knowledge is limited; real knowledge (of the forms) is difficult to acquire. There is mystery lurking in the most familiar of things. It isn’t just the world outside us that is mysterious; our own thoughts are mysterious—we are mysterious. Everyone could see that we know little about the external natural world, but it takes a philosopher like Plato to see that we know little about our own internal world. Plato’s pupil Aristotle is also concerned with matters of definition, though he prefers to speak of essence—the essence of existing things. Thus his concern with the essential nature of substances, causation, biological forms, and human virtue: he looks outward to the world not inward to our modes of representing it. He thus combines Plato and the pre-Socratics, simply put. His mysteries belong to external reality, though they emerge through the search for essences (hence “Aristotelian essentialism”). They belong to nature, but nature as viewed through Aristotelian categories. In Plato the mysteries come with a dose of mysticism, while in Aristotle no trace of mysticism can be detected—though he is still grappling with the most recalcitrant of problems.

The medieval period undergoes a shift of interest, despite its debt to Plato and Aristotle: God and the supernatural now become the field for the contemplation of mystery. No doubt this is mainly the result of the rise of Christianity: theological questions become paramount. The problem of evil, divine foreknowledge, the Holy Trinity, arguments for the existence of God, angels dancing on pinheads—all these become the mysteries of the moment. And they were apt to be characterized in just those terms, God being the ultimate mystery—thus the lucubrations of Augustine, Aquinas, and sundry others. Now the mysteries exist in the supernatural realm, not so much here on earth, in nature or human nature. Philosophers had to look upward not outward or inward. What is constant is the sense that philosophy deals with deep mysteries not merely problems solvable by the application of recognized methods. Whether these were pseudo-mysteries, born of misguided religion, is not to the point; they were conceived as grist for philosophy precisely because they were seen as genuine mysteries. Philosophy goes wherever the mysteries are perceived to be.

The modern period, in which philosophy as we now know it was largely formed, is characterized by two main problems, along with subsidiary problems: the problem of human knowledge and the problem of motion. The latter problem has now shifted to what we call “physics”, but at the time no sharp distinction was made by practitioners. Natural philosophers from Galileo to Newton tried to understand the nature and origin of motion; the problem was seen as presenting deep mysteries to the human mind in its effort to understand nature (the equivalent of today’s mind-body problem). Newton’s eventual triumph was not total, since gravity as a source of motion was agreed not to be intelligible by mechanistic standards (“occult” in Newton’s word). The mystery of motion was not fully resolved (and arguably still is not). In the case of knowledge the main question concerned the acquisition of knowledge (the “origin of ideas”) and two theories were formulated, rationalism and empiricism. Thus we have the efforts of Descartes, Leibniz, Locke, and Hume (among others). Solving this problem was not understood as routine empirical inquiry but as steeped in imponderables and controversy. Human knowledge was seen as a mystery, a puzzle, something calling for distinctively philosophical work. And it is indeed still a mystery how human knowledge is possible, despite the efforts of “learning theory”. How we come to know mathematics, for example, is still not understood at a fundamental level. The mind-body problem was also much discussed during this period—also still a mystery today. It isn’t that the dawn of science saw the conversion of philosophical mysteries into tractable scientific problems; rather, natural philosophers began to appreciate more fully the mysteries inherent in knowledge and motion. New mysteries were added to old.

Not much later the mysteries of metaphysics began to assert themselves, particularly in the writings of Kant, and later Hegel and others (e.g. Schopenhauer). Reality and appearance, space and time, necessity and contingency, the self, idealism versus realism, monism versus pluralism—all the problems of traditional metaphysics are extensively debated. Again, this was not a matter of routine science, still less common sense, but was understood as a confrontation with profound mysteries that stretch or exceed the powers of the human intellect. Philosophy changed its focus with Kant, but it did not change its preoccupation with mystery; it did not perceive itself as moving to a new phase in which mysteries gave way to mere problems. The torment of philosophy never went away. The mysterian sees in this the essential connection between philosophy and mystery. Mystery is not associated only with the earlier immature phases of the subject, nor with the medieval emphasis on religion and the supernatural, but is part of the very texture of the subject—the impenetrable, the obscure, the maddening. Anti-metaphysical positivism was the welcome antidote to this sense of oppressive mystery—the promise that all confounding mystery could be banished by appeal to the principle of verifiability. Mysteries consist of unanswerable questions, but there are no such questions because every meaningful question must be answerable. It was not so much a priori metaphysics that the positivists objected to; it was mysterious metaphysics, the kind that resists resolution. If metaphysicians had been able to reach consensus on their questions it wouldn’t matter that they eschewed methods of empirical verification; the real problem was that the questions of metaphysics presented insoluble mysteries. Why labor at questions you can never resolve? Kant’s entire system is really a rebuke to the human intellect (the noumenal world is completely impenetrable to the human mind); positivism promised to do away with all such metaphysical mystery by adopting an exclusionary theory of meaning. That was its main motivation and appeal.

Twentieth century philosophy then dedicates itself to pondering the general nature of meaning, making the “linguistic turn”; but here too mysteries abound, controversies rage, frustration descends. Frege, Russell, Wittgenstein, Carnap, Quine, Austin, Kripke, Davidson, Grice, and others: they all tried to lay the mysteries of meaning to rest, but those mysteries persisted. The state of philosophy as mystery management did not fundamentally change, just the nature of the mysteries being studied. Theory of meaning never turned into a branch of science; it remained as philosophy with its characteristic sense of puzzlement. Anti-metaphysical philosophy of this type is not mystery-free philosophy; indeed, the focus on meaning only accentuated the sense that philosophical questions resist resolution. Wittgenstein exemplifies it best: the Tractatus is a deeply mysterious (and mystical) work, but the Investigations also raises profound puzzles about meaning, despite its ostensible complacency. Philosophy of language is just philosophy as usual, complete with its own roster of puzzles and paradoxes (Chomsky has always been willing to accept that language presents genuine mysteries).

In the present day we have the mystery of consciousness, but also mysteries of free choice, imagination, creativity, dreams, and thought. The currently accepted mysteries cluster around the mind, human and animal. To some extent these are new mysteries, or new versions of old mysteries. Calling them mysteries is perhaps new, at least within the last hundred years (Hume had spoken of “mysteries of nature” in the eighteenth century). Chomsky has been using this language for many years and the term “mysterian” was introduced to describe my position on consciousness thirty years ago. Whether the existence of these mysteries shows anything about the limitations of the human mind is a separate question; for all I have said here the mysteries might stem from objective reality (we live in an inherently mysterious universe). I have only suggested that the history of philosophy can be described as an engagement with mysteries whatever their provenance may be. The point of this exercise is to draw a distinction between philosophy and other disciplines: philosophy is characteristically concerned with mysteries, while other disciplines traffic in them only incidentally. You can write a book called The Problems of Philosophy and be expected to deal with the kind of mysteries I have enumerated; a book called The Problems of Physics will not deal with the mysteries of physics but with the achievements of that field. Even psychology, an undeveloped science, does not deal in mysteries in the way philosophy does (though philosophical problems certainly arise within psychology): its problems arise from lack of data and lack of theory, not from recalcitrant conundrums or conceptual roadblocks.

So philosophy has a distinctive kind of history—a distinctive kind of psychological history. The psychology of the philosopher differs from the psychology of other seekers after truth: it might be described as ecstatic despair (if I may be allowed a degree of hyperbole). Ecstatic, because of the grandeur of its questions; despair, because of the difficulty of answering them (sometimes even formulating them). It is like being in contact with an elusive god: the object is radiant, but the access is limited. Philosophy thus produces a curious mixture of optimism and pessimism: optimism born of being able to think about these questions at all, but pessimism born of constant failure to answer them. Oh how wonderful it would be to solve these problems! But oh how lowering it is to come up with nothing! Doing philosophy is an exercise in hubris and humility: hence the pained expression on the face of philosophers (true philosophers), but also the wild glint in the eye. The philosopher longs to discover things, to write up his or her results and announce them to the world; but alas it is all controversy and rejection, doubt and neurosis. We want to see into the nature of things and take their measure, but they remain maddeningly elusive. Still, we cannot quench the feeling that this time we have it right… Thus the life of the philosopher is apt to be veering and halting, or else digging dogmatically in; the serenity of certain knowledge is not ours to be had (except in very limited ways). We live with the consciousness of mystery, while committed to unraveling it. We are like babies learning to walk—we get up on those rubbery legs and totter a few paces before collapsing on the floor (but we gamely get up again to totter a few more wobbly paces). We don’t feel merely ignorant, remediably so; we feel stupefied, nonplussed. No amount of further study can remove our bafflement. No grant is large enough to resolve our perplexities. The mystery bears down on us.[2]

Of course, there have been attempts to deny mystery: it’s all pseudo-questions, nonsense, confusion; or it will all eventually turn into regular science; or it has already been taken care of by the latest theory from Professor X. There are no real mysteries, just ordinary problems or conceptual snarl-ups. That too is part of the history of philosophy from the mysterian standpoint—the need to deny that philosophy deals with mysteries. Meta-philosophy is part of the history of philosophy, and it often takes the form of denying the mysterious character of philosophical problems. For the mysterian, however, this is predictable, since the human mind is impatient with mystery and would like to be rid of it. So the history of philosophy will be marked by attempts to deny the truth about the nature of philosophy. For philosophical perplexity is irritating—it disturbs the mind, won’t let it rest. The mind wants badly to solve mysteries (hence the appeal of detective stories) and it grows peevish when denied that satisfaction. Thus the philosopher is forever irritated and exhilarated at the same time. Acceptance of mystery is difficult, and there is always the possibility of total triumph! So it has gone in the past, and so it will continue to go in the future. I don’t expect the future of philosophy to be any different from its past, though new mysteries may rear their heads as time goes by.[3]

 

Colin

[1] See my Problems in Philosophy: The Limits of Inquiry (1993).

[2] I don’t mean to say there are no lighter moments when dawn breaks and clouds part—and there is always the thrill of refutation. But the experience of solving central, hard problems to one’s own and others’ satisfaction is not one that we can hope to enjoy. At best we can argue for positions, not announce results. (I used to be an experimental psychologist, a field in which one can at least tabulate data and perform statistical tests on them. And think what a field biologist can hope to discover!)

[3] My guess is that the brain will come to seem the mystery par excellence, much more so than today; also the mysteries of physics will become more widely acknowledged.


World and Head

 

 

When Hilary Putnam made the claim that meanings are not “in the head” he emphasized the indexical character of natural kind terms. His point was that terms like “water” have their reference fixed by demonstratives like “that liquid”, and demonstratives have their reference as a function of context, not of descriptions in the mind of the speaker. In David Kaplan’s terminology, context yields content in conjunction with character—character alone cannot determine reference. Thus two people could be internally indistinguishable and yet refer to different things with “that liquid”, depending on the actual physical context (H2O or XYZ). Since the anchoring demonstrative fixes the reference of “water”, that term too will vary its reference according to context—hence Twin Earth cases. Strictly speaking, Putnam overstated his conclusion, since all that follows is that an aspect of meaning is not in the head—the aspect Kaplan calls content; character does not vary in this way, being completely context-independent. He should have said that part of meaning is not in the head, the part that is “in” the context. The meaning of an indexical is always a two-component affair: the component that results from context and the component that is “in the head”. In the case of a demonstrative like “that liquid”, the first component corresponds to the external natural kind being demonstrated, while the second component may be identified with something like the perceptual appearance presented by the demonstrated object or kind. If someone were to claim that the meaning of a demonstrative is completely outside the head, the reply would be that an aspect of its meaning is clearly inside the head—the aspect Kaplan calls character. Indexical meaning is double-aspect.
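
The two-stage structure can be set out schematically. The following uses the possible-worlds notation standard in presentations of Kaplan’s framework; the particular symbols C, W, and D (for contexts, circumstances of evaluation, and objects) are my labels, chosen for illustration, not Kaplan’s own formalism:

\[
\mathrm{char}(\text{“that liquid”}) : C \to (W \to D), \qquad \mathrm{content}_{c} \;=\; \mathrm{char}(\text{“that liquid”})(c).
\]

Applied to the Twin Earth pair of contexts, the same character delivers different contents:

\[
\mathrm{content}_{c_{\mathrm{Earth}}} = \text{the kind } \mathrm{H_2O}, \qquad \mathrm{content}_{c_{\mathrm{Twin}}} = \text{the kind } XYZ.
\]

Internally indistinguishable speakers share the function char; the contexts then do the rest. This is just the Twin Earth result, and it also shows why character, the in-the-head component, survives Putnam’s argument untouched.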

So far, so familiar: what I want to ask is whether the natural kind itself is wholly “outside the head”. Is what we mean by “water” something completely divorced from its appearance to human minds? Is the reference of “water” purely an objective matter? Or is the reference partly constituted by what is in the mind? Water has both a hidden essence, captured by “H2O”, and a superficial appearance, captured by “transparent, tasteless liquid”: is only the first of these constitutive of water? It might be tempting to suppose so—it is necessary and sufficient for something to be water that it be H2O. But that has to be wrong: in a possible world in which something is H2O but has none of the manifest properties of water, we would not say that that stuff is water. If it had all the manifest and dispositional properties of honey, it would be wrong to classify it as water, no matter that it is composed of H2O. This might prompt the retort that nothing could be H2O and have the manifest properties of honey, because of the necessary connection between chemical composition and manifest properties; but (a) not all the manifest properties of water follow simply from its being H2O, and (b) this is to concede that manifest properties also count as necessary conditions of being water. In fact, being tasteless and colorless are parts of the nature of water, along with its chemical composition. Natural kinds really have a double nature—underlying real essence and apparent nominal essence (to use Locke’s terminology). Both are necessary to being water, neither being sufficient.
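
One way to schematize this double-nature conclusion (the predicate M, abbreviating the manifest profile, is my shorthand, not the text’s):

\[
x \text{ is water} \iff \underbrace{x \text{ is } \mathrm{H_2O}}_{\text{real essence}} \;\wedge\; \underbrace{M(x)}_{\text{nominal essence}}
\]

where M(x) says that x is a transparent, tasteless liquid (together with the rest of the manifest profile). Each conjunct is necessary and neither is sufficient on its own: the imagined honey-like H2O falsifies the sufficiency of the left conjunct, and watery XYZ falsifies the sufficiency of the right.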

Now we come to the more difficult question: is nominal essence tied essentially to the mind? Is it “in the head”? There is an obvious precedent for such a claim, namely colors and other secondary qualities: objects only have colors in relation to minds—so colors are “in the head”. That means that colored objects (qua colored) are partly in the head: an object is red only because of its relation to a mind that perceives it as red. Colored objects are partly in the world and partly in the head—they have both aspects. In the case of water we can likewise claim that being colorless and tasteless are also secondary qualities: things only have these qualities because there are minds that respond to them with certain types of sensory experience. But there is a further point: the very concept of a manifest property is tacitly mind-relative. Manifest to whom? Properties are only manifest in relation to minds that can perceive them; to beings with different senses from ours chemical composition might be manifest while color and taste are inferred. Real and nominal essence could conceivably be reversed. As things stand, however, the properties we attribute to the surface of water are mind-relative, so that this aspect of the nature of water is mind-dependent. It is a function of how our senses represent the world. Water, as we commonsensically conceive it, is thus partly “in the head”. In fact, the part of the kind that is in the head coincides with the part of meaning that is in the head, viz. the sensory appearance. Thus meaning has two aspects, one in the head and one not, and likewise natural kinds have two aspects, one outside the head (chemical composition) and one not (sensory appearance). Meanings have a worldly component and a mental component, but so do the objects we refer to. In the sense in which meaning is not wholly “in the head”, objects are not wholly “in the world”: each is only partly so. It is the fact that objects have a foot in both camps that enables meaning also to have a foot in both camps. Surface properties correlate with demonstrative modes of presentation, while hidden properties characterize the reference as it exists independently of such modes of presentation. The duality of meaning thus maps onto a duality at the level of reference: the inner component of meaning contains the manifest properties of the reference, while the outer component corresponds to the non-manifest properties. Meaning is a hybrid of internal and external, as the character-content analysis of demonstratives suggests, while objects themselves divide into an objective aspect and an aspect that is tied to perception. The distinction between the scientific image and the manifest image is mirrored in the distinction between content and character, respectively—what is fixed by context or environment or causation and what is wholly subjective or “in the head”. To put it differently, the objects of common sense have a dual-component analysis corresponding to real and nominal essence.[1] The abstract structure of objects is thus reflected in the abstract structure of meaning: a bit in the head plus a bit not in the head. Character goes with surface and content goes with hidden. There is more to meaning than what is in the head, and there is more to objects than what appears to heads—but we must not forget that the head-involving aspects are also part of meaning and of objects. Meaning is not only what lies beyond the head, and objects are not only what is independent of appearance; meanings and objects are both hybrids. Meanings are composites of two factors, as Putnam taught us (with a little help from Kaplan), but so also are objects.
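
The mapping just described can be displayed compactly; this is only a summary of the correspondence argued for above, not a further premise:

\[
\begin{array}{lcl}
\textit{meaning} & & \textit{object} \\
\text{character (in the head)} & \longleftrightarrow & \text{nominal essence (manifest, mind-relative)} \\
\text{content (fixed by context)} & \longleftrightarrow & \text{real essence (hidden, objective)}
\end{array}
\]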

 

Colin

[1] Here we might be reminded of Eddington’s two tables: the table of science and the table of common sense. The former has nothing mind-dependent about it while the latter is a projection of mind. Eddington in effect reifies the distinction of aspects or components that I am suggesting. Frege’s sense-reference distinction also finds a counterpart in the distinction between real and nominal essence—roughly, the distinction between the known and the unknown properties of objects. Senses are necessarily known by one who grasps them, but it is possible to refer to things and know little about them. Then too, we have Kant’s bifurcation into the phenomenal and the noumenal. In philosophy later distinctions often trace back to earlier distinctions. And what seems unitary often turns out to be divided.


Experience and Fact

 

 

We normally suppose that experience and fact are separate entities. Suppose I observe my cat chasing a lizard: on the one hand, there is the fact of my cat chasing a lizard; on the other, there is my experience of my cat chasing a lizard. These could exist separately—there is a dualism of experience and fact. I might report the fact in question by saying, “My cat chased a lizard”, and I might report the experience by saying, “I had a visual experience of my cat chasing a lizard”. But these sentences mix up fact and experience, because the first refers to me in stating the fact in question, while the second refers to my cat in stating what I experienced. Shouldn’t we be able to describe fact and experience each in its own terms? So let’s try again: suppose I say, “This cat chased a lizard” in an effort to keep myself out of the picture, sticking only to the fact itself. That looks better, but what does “This cat” mean? Doesn’t it mean, “The cat I am experiencing”? I am experiencing a cat and I exploit this fact to refer to the cat by employing a demonstrative: the demonstrative, as I use it, refers back to my experience.[1] So I have not succeeded in keeping myself out of the picture; the picture still contains a reference to myself in the corner. Worse, I have referred to my experience in trying to report a fact about the cat and the lizard. On the other hand, I scarcely remove the reference to the world in describing my experience by substituting “this cat” for “my cat”: now I am saying that I had an experience of this cat chasing a lizard, not my cat—but that is still a living, breathing cat. I refer to experience in reporting the fact and to the fact (or a component of it) in reporting the experience. But if there is an ontological dualism here, why am I mixing up the two categories? Can’t I report the fact as it objectively is and the experience as it subjectively is?
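
The hidden reference to experience can be made explicit with a rough Russellian paraphrase. The two-place predicate Exp (“has an experience of”) is my shorthand, and nothing here turns on the exact analysis of demonstratives:

\[
\text{“This cat chased a lizard”} \;\approx\; [\iota x : \mathrm{Cat}(x) \wedge \mathrm{Exp}(\mathrm{me}, x)]\ \mathrm{ChasedALizard}(x)
\]

The first-person element reappears inside the analysans, which is the point: the attempt to state the bare fact smuggles the experiencer back in.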

It might be thought easy to achieve that ideal: I just substitute a name for the cat in the former case and an indefinite description in the latter case. Thus “Blackie chased a lizard” and “I had an experience of a cat chasing a lizard”: in the former there is no reference to my experience, and in the latter there is no reference to a particular cat. But matters are not so simple. How does “Blackie” refer to its feline bearer? Plausibly by trading on a prior demonstrative reference—either at an initial baptism or maybe as an ongoing prop. A perfectly general description is seldom if ever available, and it is certainly not the typical case; we generally rely on a bedrock of demonstrative reference to secure the reference of names. At root, then, our names refer back to our experiences—as in “the object I am seeing”. So we are still not succeeding in describing the facts purely, as they objectively exist. In the case of descriptions of experience we have a different problem: I am not experiencing any old cat as chasing a lizard—some cat or other—but a specific cat, viz. Blackie, my cat, this cat. There is a particularity to the experience that is not captured by the indefinite description “a cat”; so we need a singular term to do justice to this aspect of the experience. The natural choice is a name (if you know it) or a demonstrative (if you don’t): “I had an experience of Blackie chasing a lizard” or “I had an experience of this cat chasing a lizard”. But then we are back to referring to an actual singular cat. We can’t keep the world out of descriptions of the experience, as we couldn’t keep experience out of descriptions of the world. Yet aren’t these separate domains?
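
The particularity point is, in effect, the familiar de dicto/de re scope contrast, rendered schematically (my notation again, with Exp as before but now taking a propositional complement; “l” names the lizard):

\[
\text{de dicto:}\quad \mathrm{Exp}\big(\mathrm{me},\ \exists x\,(\mathrm{Cat}(x) \wedge \mathrm{Chase}(x, l))\big)
\]

\[
\text{de re:}\quad \exists x\,\big(\mathrm{Cat}(x) \wedge \mathrm{Exp}(\mathrm{me},\ \mathrm{Chase}(x, l))\big)
\]

The indefinite report “I had an experience of a cat chasing a lizard” captures only the first; the experience itself answers to the second, with a particular cat bound from outside the experience operator. That is why a singular term forces its way back in.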

One response is to accept that they are not separate domains. Idealism says that facts are experiences, so it isn’t surprising that we can’t describe the one except by reference to the other: the fact of a cat chasing a lizard is just the occurrence of an experience of a cat chasing a lizard. On the other hand, externalism maintains that experiences are constituted by relations to worldly objects, so that my experience is partly composed of a particular cat (inter alia): I wouldn’t have that experience unless it was directed at a particular individual cat. The reference to a specific cat simply reflects the actual nature of the experience, according to externalism. Thus there is no dualism of the sort described earlier: facts and experiences are inextricable. I won’t try to adjudicate this issue now; I have merely indicated a possible route to the doctrines in question. Our habitual modes of description can encourage the rejection of a strict dichotomy between facts and experiences. What I want to suggest is that there is room for an alternative view: granted, we can’t find a way to exclude reference to experience from descriptions of fact, or reference to fact from descriptions of experience; but that is really a point about us, not about reality. From the fact that we can’t describe facts and experiences without importing alien elements from the other side, it doesn’t follow that these things can’t exist without each other or that they have dependent natures. Ontological dependence doesn’t even follow from the fact that no one could describe the two independently. For it may be that we are limited by our language and cognitive resources to describing things in ways that don’t reflect their real nature. A cat could be chasing a lizard even if no one ever experienced that fact, though reports of the fact inevitably introduce reference to the reporter’s experience (even if only implicitly). And it could be that someone has an experience just like mine when I see my cat chasing a lizard, even though there is no cat there and hence no reference for “this cat”. Maybe I can’t capture the singularity of the experience without employing an existentially committal singular term, though the experience itself has a nature that is not dependent on the reference of such terms. So the dualism of fact and experience is not compromised by constraints on description, assuming such constraints to exist. Language is designed to fulfill certain practical purposes, and these may not include capturing the objective nature of things (completely, purely). We talk about facts by relating them to our experiences, and we talk about experiences by relating them to facts (objects, properties), but that need not imply that reality itself is similarly structured. Perhaps we could invent new ways of talking that capture things more accurately (objectively, intrinsically), but as things stand we are apt to describe the world in something short of a logically perfect language.[2]

 

Colin

[1] There is much to be said about the relation between demonstratives and perception, but it is clear that in standard cases one refers to the object that one is perceptually attending to, as if with an act of inner pointing. Without sense perception demonstratives would have little use—the audience needs to perceive what the speaker is referring to in order to understand the utterance. The mode of presentation associated with a demonstrative is perceptual, i.e. involves an experience of a certain object. But then demonstratives embed in their meaning acts of perceptual acquaintance—while the fact being reported is not itself a fact of perception.

[2] Physics might be regarded as the right kind of language to use to describe physical reality as it is in itself, using no indexical language to do so; but there doesn’t appear to be anything comparable that rids psychology of all reference to things outside the mind. It is hard to reconstruct commonsense psychology in purely non-externalist terms. Could there be a pure phenomenology that described experience without employing any terms applicable to physical objects? This would include even general terms for types of physical object such as “cat” or “square”.
