Inner and Outer


 

 

We have the concepts of the inner and outer, but how are they to be defined? Is one more basic than the other? The following seems true: the outer is not the inner and the inner is not the outer. Alternatively, we could say the private is not the public and the public is not the private. But this doesn’t help as a definition, owing to circularity: we can’t define the inner as what is not the outer and then go on to define the outer as what is not inner. I propose that the concept of the inner is basic and not to be defined as the negation of the outer, while the concept of the outer is derivative and can be defined as the negation of the inner. We know what we mean by the inner before we get to the concept of the outer, and we can use the former concept to define the latter. The outer or public is simply that which is not inner or private; but it is not true that the inner or private is to be understood as what is not outer or public. The inner comes first conceptually.

            It is not easy to prove this, though it has strong intuitive appeal. We really do have a robust conception of the inner derived from our acquaintance with our own consciousness. Our concept of the outer is acquired by way of contrast: it is what does not lie within the realm of the inner. The outer is what lies beyond consciousness, outside of it. But the inner is not what lies beyond outer objects—say, objects of perception. We don’t understand the inner simply as what is not an outside object. We don’t conceive of a sensory experience, say, as simply what is not the outer object of that experience; we have a direct grasp of the concept. Our concept of the inner is not a negative concept. But our concept of the outer is a negative concept—the concept of what is not included in the inner. The outer is what exists on the other side of the inner—yonder, over there. The public is what is not private.

            Let us accept this conceptual primacy thesis: the inner is fundamental in the order of definition. Then the consequence to which I would like to draw attention is that the inner cannot be reduced to the outer, nor eliminated from our ontology. We need the concept of the inner in order to ground the concept of the outer—we can’t have the latter concept without the former. This doesn’t yet entail that the inner has to exist, since it is possible to have the concept of Fs without there being any Fs (consider “unicorn”). But the best explanation of why we have the concept of the inner is surely that we are acquainted with things that are inner: we are acquainted with inner things and hence we have the concept of the inner. To deny this one would have to maintain that we have the concept of the inner, and apply it liberally, yet have never encountered anything satisfying that concept. That would seem an unlikely state of affairs. Far more plausible is the simple thought that we have the concept of the inner because we have encountered things that are inner—namely, our states of consciousness. So (a) we need the concept of the inner to give sense to the concept of the outer, and (b) we can’t have the concept of the inner without there being instances of the inner. Therefore we cannot reduce the inner to the outer (say, behavior), nor eliminate it. It cannot be that only the outer exists.

            This refutes behaviorism. Granted that behavior is something outer, the doctrine of behaviorism maintains that there is only the outer—nothing is inherently inner. But the concept of outer behavior is defined by reference to the concept of the inner, and the concept of the inner rests upon acquaintance with inner things. So behaviorism entails the existence of the inner! There cannot be only public things because the concept of the public is defined by contrast with the concept of the private, and the concept of the private can exist only if private things exist. Put simply, I can only formulate behaviorism because I know that I have inner states that contradict it. I arrive at the idea of behaviorism by deploying the concept of the outer, in contrast to the concept of the inner, but then I need the concept of the inner, which I would not have but for the existence of the inner. So I cannot consistently assert that there is nothing but the outer: for the only way I can assert this is if it is false. Since I come to grasp the concept of the outer or public via my grasp of the concept of the inner or private, and since that latter concept is grounded in acquaintance with my own inner private consciousness, it cannot be that there is nothing in the world except what is outer and public. The simple fact is that our grasp of the distinction between inner and outer depends on our awareness of our own states of consciousness: we apprehend our consciousness as inner.

 

Colin McGinn

 


Speech

                                                           

 

 

 


 

 

When a person speaks he or she enunciates words one after the other, producing a temporal sequence of words. Normally these words are used, not mentioned. It is natural to assume that nothing meta-linguistic is going on: one word is used, then another, then another, until the utterance is complete. A theorist may mention the words uttered by the speaker, but normally the speaker doesn’t. Speech normally consists of sequenced word use, with no mentioning in the vicinity.

            But is this right? Isn’t it true to say that the speaker combines words to form phrases and sentences? There is an operation called “combination” that the speaker (or his brain) applies to words. This operation permits a synthesis to be formed—there is an act of synthesizing words. But combination and synthesis are precisely operations on words. They take words as arguments and produce wholes composed of words as values. These operations are psychological in nature—indeed, they are intentional. The speaker intends to combine words into a sentence, or to produce a particular synthesis of words. The operations thus incorporate reference to words: “Combine ‘John’ with ‘ran’ to get ‘John ran’”. They are meta-linguistic; they involve the mention of words. But then speech inherently involves mention as well as use: the words uttered are used and mentioned simultaneously. We use words in grammatical combinations, and those combinations are the result of meta-linguistic acts or operations. The same goes for writing, perhaps even more clearly: writing a sentence is performing a combinatory operation on words. We use words in sentences as we write, but we do so by representing (denoting) them in acts of mental combination. When I wrote that last sentence I intentionally selected certain words and combined them in a certain way, referring to them mentally. Words are thus objects of intentional mental acts in both speech and writing: they are referred to as well as referring. The hearer or reader must apprehend words, taking them as objects of intentionality, but the speaker or writer is no different—he too must take words as intentional objects. The mind must mention the words it uses.

            We can think of the speaker as following instructions (rules) for the production of sentences, and these instructions refer to words. That is essentially what a grammar is. So there is mention wherever there is use; use depends upon mention. Logicians and linguists talk about the concatenation operator, which connects quoted expressions—as in “John”^“ran”, to be read “the word ‘John’ concatenated with the word ‘ran’”. Grammatical rules can be written by invoking the concatenation operator. If we think of the act of linguistic combination or synthesis as (rule-governed) concatenation, we can say that speakers perform an act of concatenation on words whenever they utter a sentence—they in effect join one quoted expression with another. They are quoting themselves as they speak—mentioning what they are using. There is a double act of reference going on: “John” is being used to refer to John, but it is also itself being referred to in the act of concatenation. The total speech act consists of using words and also mentioning them. This is because speech is not just uttering words in temporal sequence but also selecting and combining them according to rules. In other words, there is a plan behind the utterance and this plan involves arranging words in a certain order. You choose words and you choose the order in which to arrange them: both of these are meta-linguistic intentional acts. Speech is therefore always about words as well as about things. Speech is always conscious of itself as speech. [1]
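To make the point about concatenation concrete, here is a toy sketch of my own (the rule and the helper function are illustrative assumptions, not part of any particular grammar formalism): a rule invoking the concatenation operator mentions words by quoting them, and its output is the sentence in which those same words are then used.

```python
# Toy sketch: a grammatical rule as an operation on mentioned (quoted) words.
# The quoted strings "John" and "ran" are words referred to as objects; the
# value returned is the sentence in which those same words get used.

def concatenate(*words: str) -> str:
    """The '^' operator of the text: join quoted words into a phrase."""
    return " ".join(words)

# The rule (schematically, S -> NP ^ VP) applied to the mentioned words:
sentence = concatenate("John", "ran")
print(sentence)  # John ran
```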

 

Colin McGinn  

             

  [1] Or unconscious of itself: we are not generally conscious of the mental operations that generate speech. But the operations necessarily take words as objects, so the mind-brain mentions them—the internal computations are defined over words. Of course, sometimes we quite self-consciously select what words to utter and we assemble them with care: then we are clearly consciously thinking of words as a preliminary to using them. In speech we focus on referring to things in the world by using words, but we also direct our minds to the words we use.


Raindrops and Persons


 

 

Consider a single raindrop: it exists for a certain period of time, eventually meeting its end through evaporation. But suppose instead that it divides into two before that happens—by wind or human intervention. Now we have two raindrops where before we had one: what is the relationship between the original raindrop and the resulting raindrops? The first thing to say is that the case is not like evaporation: the raindrop is in some sense still around after division, unlike with evaporation. It survives division, though now existing as two separate drops. It could easily be reconstituted, simply by joining the two drops together again. It has not simply gone out of existence or been totally destroyed. Yet we cannot say that the original drop is identical to the resulting drops, since they are not identical to each other; nor can we choose one of them as identical to the original and not the other. We might try saying that the original is identical to both drops together, but then we have to admit that one raindrop can exist as two.  [1]

Evidently there is a bit of a puzzle about division and identity—identity seems too simple to handle the case. So we might be inclined to say that raindrop survival does not depend on raindrop identity: a raindrop can survive even though no future raindrop (or collection of raindrops) is identical to the original. Still, we should note that a weaker identity claim is true: each of the resulting raindrops is identical to a part of the original raindrop. It had parts and they became separated, but they are the very same parts that existed earlier. And the survival of the original raindrop seems bound up with the fact that parts of it are identical with raindrops that exist in the future; the raindrop still exists because certain future raindrops are its parts. So identity is playing a role in survival, though a slightly more complex role than when the raindrop continues to exist in an undivided state.

            Now consider cell division. A cell can either divide or perish: if the latter occurs it has not survived, but if the former occurs we are inclined to say that it has survived—as two cells. Division is not the same as death. Again, we have a bit of a puzzle about identity, since the future cells are not simply identical with the original, but one thing is clear: the future cells are identical to parts of the original cell. So identity is playing a role in survival, even if it is not the role it plays in cases of undivided survival. We can admit that in cases of cell division there is survival without identity between the original and any of its progeny, but still we have identity at the level of parts. Certainly, there is no reason to abandon identity altogether in accounting for survival in such cases, replacing it with some such notion as causal continuity or mere similarity. Future cells are literally identical with parts of earlier cells, and that is why we distinguish division from sheer destruction. The general principle seems to be that if the parts of an object are identical with any future objects then that object can be said to survive in those future objects. No doubt this principle would need to be qualified, but it roughly fits our intuitions about certain cases. [2] If I disassemble a car engine and scatter its parts, we may suppose that the engine survives, even though it is not identical to any of these parts; still they are identical with parts of the original. Or if I saw an eight-legged table into two, thereby creating two tables, it may be said that the original table survives, despite not being identical with either table (and possibly not with the sum of both). The table certainly has more of a claim to continued existence than a table that is burnt to ashes.

            Then we come to brain division. Here again it is awkward to say that the brain survives in virtue of identity with the future brains, but we can still say that the future brains are identical with parts of the original brain—its two hemispheres. If the brain is a mouse brain and I put the two halves in separate mouse bodies, then we have two mice as a result–neither of which is identical to the original mouse. But it is still true that each mouse has a brain that is identical to a part of the original mouse brain. This is why we are inclined to say that the original mouse survives—certainly more than if it had been incinerated. An intelligent mouse would have a reason to favor brain division over brain incineration. We have survival without identity of mice—no future mouse is identical to any past mouse—but not survival without identity to any past part of a mouse. The part lives on, identically, though now separated from the other part, which also lives on, identically. It is not that identity is playing no role in survival, so that we must resort to some other notion, such as causal continuity. So far as these cases show, survival is still conceptually demanding—it doesn’t collapse into some weak notion of causal connection between successive states. It is not that these entities survive merely by causing future things to have certain states; they survive by virtue of there being future objects that are identical to original objects—but parts not wholes.

            It is the same with human persons. Fission cases show that survival doesn’t depend on identity with any future person, but they don’t show that survival is possible without identity between person parts. The cases are convincing precisely because half the brain of the original person sits in another body—and that half brain is identical to part of a past whole brain. The person survives because his two brain hemispheres are identical to the brains of two descendants—that, at any rate, is the intuition tapped into by brain fission cases. I would survive in one hemisphere because it is my hemisphere, and doubling up doesn’t undermine that; but if the future person did not possess a brain literally identical to half of my brain, I would not be so sanguine. I would certainly not be so sanguine if I was assured merely that the future person would be causally continuous with me in certain ways. Survival, intuitively, does require identity, but it can be identity at the level of parts not wholes.

            Does any of this conflict with our commonsense view of persons? No, it is all part of common sense. It no more conflicts with common sense than the analogous view of raindrops, cells, and mice: we quickly see that in cases of division we need to tinker with identity to explain our sense that there is survival in such cases. But we are not metaphysically committed to a mistaken view of the nature of persons—any more than with raindrops and cells. Nothing radical is demonstrated by fission cases in any of these areas. If personal fission were as common as other kinds, we would have no hesitation in describing fission cases as survival without identity (of whole persons); but we quickly see the point once an imaginary case is constructed. Our concept of a person prepares us for the possibility of fission cases, and our knowledge of the brain makes this concrete because of the facts of brain anatomy (two separate hemispheres). There is no revisionary metaphysics here, no deep error about the ontology of persons. We don’t need to replace our ordinary notion of the self with a new one that weakens it to something like mere continuity of psychological state. Brain division cases by themselves have no such dramatic consequences.

            It is sometimes claimed that we have an unduly simple view of the self and its survival because of the word “I”: the word looks simple, so we suppose that its bearer must be. This is a preposterous suggestion: who would commit such a gross non sequitur? Simple things can have complex names and complex things can have simple names. Do we have overly simple views of time and space because of the simplicity of “now” and “here”? And people often have long proper names, as well as lengthy descriptions, not just short personal pronouns. It would be absurd to infer a simple view of the self merely from the syntactic simplicity of some terms for selves.

Nor is it true that we naturally have an all-or-nothing view of personal survival, failing to recognize the possibility of degrees of survival. We don’t have an all-or-nothing view of survival in general (sand dunes, cities), and we recognize that the person of childhood may not fully survive into adulthood, as well as that dementia can weaken personal survival. We are well aware that personal survival requires psychological overlap and that survival can be a matter of degree. Just consider pushing fission cases further so that we halve each hemisphere and retain only certain aspects of the original person—we rapidly conclude that full survival is not guaranteed in such cases. There may be partial survival, which is something, but not survival of everything that matters. We don’t have an all-or-nothing view of the survival of persons (or animals). We know that in certain cases there is no survival at all, but we don’t have the naïve idea that survival is always either complete or completely absent. Maybe there are some people who do, but it is not part of general common sense. So common sense has no need of revision in this regard.

            If there is anything especially puzzling about personal fission, it is that we don’t have a clear idea of a person having parts in the way a physical object has parts. A person’s brain has literal parts and that is what guides our intuitions in fission cases, but it doesn’t follow that persons have parts: I don’t divide into two person-like hemispheres. Thus we cannot easily say that the resulting two people are identical to parts of the original person, because we don’t normally think that persons have persons as parts. But split brain cases suggest that we really do have parts that are persons—that our sense of the unity of the self is mistaken. In any case, puzzles about the divisibility of the self should not be taken to show that fission cases undermine the basic model supplied by raindrops and cells, namely that survival arises from identity as to parts. Raindrops and cells, like brains, are complex entities that have parts; these parts can be detached from each other and the results are also raindrops, cells, and brains; these results count as the survival of the original entity, in contrast to more drastic changes. We have no more reason to revise our common sense view of persons than we do of raindrops and cells in the light of fission cases. No doubt the self is philosophically puzzling, and skepticism about its existence can be pressed, but brain fission cases show nothing remarkable about personal survival. Survival without identity is commonplace, intelligible, and non-revisionary; and it involves identity anyway (as to parts). By judicious deployment of parts you can make two persons from one: in such a case the original person can be said to survive—in virtue of those parts. Similarly, you can make two raindrops from one, and you also get survival from the persistence of parts. Logically, the two cases are on a par. [3]

 

  [1] I won’t explore this possibility further in this essay, but I don’t think it is absurd. For purposes of argument I will suppose that it is ruled out; we can still maintain an identity view of survival, as I shall argue.

  [2] Not just any parts: reducing an object to its atoms looks a lot more like destruction than other sorts of division into parts. I won’t here consider what notion of part is needed to make the right distinctions.

  [3] As will be obvious to many readers, I have been discussing views associated with Derek Parfit without going into questions of precise attribution. If Parfit’s views differ from those I discuss, my criticisms don’t apply to him. However, I think the views I discuss correspond closely with what he has maintained. I have not considered whether it is possible to motivate a continuity view of the persistence of the self on the basis of considerations independent of fission cases; my point is that such cases fail to take us in that direction.


Consciousness and Sleep

                                               

 


 

 

Why do we say that a person is unconscious when asleep but conscious when awake? During sleep we often dream, and dreaming is a conscious activity of mind, so why don’t we say that we are often conscious while asleep? Why do we speak as if sleeping and consciousness are opposed? Similarly, why do we suppose that being awake rules out being unconscious? It doesn’t rule out the existence of unconscious mental activity, so why should it rule out being simply unconscious—no conscious mental activity at all? Couldn’t a person be awake in the normal sense—mobile, eyes open, and alert—and yet undergo a break in consciousness, a gap in the stream? Couldn’t wakefulness be punctuated by periods of unconsciousness, as sleep is punctuated by periods of consciousness? Couldn’t someone go into blindsight mode every once in a while and still be said to be awake?

            Part of the problem is defining what it is to be awake or asleep. The OED defines “sleep” as follows: “a regularly occurring condition of body and mind in which the nervous system is inactive, the eyes closed, the postural muscles relaxed, and consciousness practically suspended”; while “awake” is defined simply as “not asleep”.  [1] But couldn’t these conditions for sleep be met and yet the person (or animal) is still awake? What if I relax on my bed with my eyes closed and think only of sheep—aren’t I still awake? What if I take a drug that cuts off sense perception, paralyzes me, and leaves me with only the most minimal consciousness—could I not still be awake?  And what about the clause about consciousness being “practically suspended”? It doesn’t seem at all suspended during an intense nightmare or an ardent erotic dream. I suspect the editors mean “consciousness of the environment”, but that is neither necessary nor sufficient for being asleep: you can be asleep and aware of outside stimuli to some degree (hence the qualification “practically”), and you can be wide awake while having no awareness of the environment but only of yourself. Clearly, we have poor definitions of what sleep is, and likewise wakefulness. We don’t have a good analysis of what these states involve.

            The point remains that the logical connections between “asleep” and “unconscious” and “awake” and “conscious” are opaque at best. There seem in fact to be no logical connections here: we can experience states of consciousness during sleep, i.e. dreams, and we could go consciously blank while awake, i.e. not asleep. Being asleep is roughly a state involving being relatively unreceptive to the environment, but that doesn’t preclude a vivid inner consciousness such as occurs in dreams. Being awake is roughly being alert to the environment, immersed in it, but that doesn’t preclude moments of conscious blankness, as in blindsight. We say of simple organisms such as insects that they are awake, i.e. not asleep, but we don’t take this to entail that they are sentient beings. So there appears to be no analytic or necessary link between the awake/asleep distinction and the conscious/unconscious distinction. Indeed we could invert our usage and incur no charge of logical infelicity: we could say that we are conscious while asleep (though not all the time) and that we are unconscious while awake (though not all the time)—that is, we could say the latter if we actually had periods of conscious blankness while awake. Just as we are sometimes conscious during sleep, so that sleep is not necessarily a time of unconsciousness, so we could sometimes be mere zombies while awake, so that being awake is not necessarily a time of (uninterrupted) consciousness. You can be lying down in the dark fast asleep and have conscious experiences inwardly, and you could be moving around the world completely awake (“not asleep”) and be experiencing nothing consciously at all—you are in your robot phase. As things are, we are (apparently) conscious during the entirety of wakefulness—though there might be brief unconscious interstices—but we could be only intermittently conscious, perhaps so as to relieve the consciousness centers of our brain. Thus it would be arbitrary to say that we are unconscious while asleep and conscious while awake; rather, we are conscious and unconscious in both conditions (though not at the same time). I therefore recommend that we stop speaking as we normally do: we can be asleep or awake (whatever exactly these conditions amount to) and we can be conscious or unconscious, but there is no logical connection between these two dichotomies. Someone could in principle be continuously conscious during sleep by dreaming all the while, and could be continuously unconscious while awake by going into total zombie mode. Thus you could be completely conscious while asleep at night and completely unconscious while awake during the day—you could invert the normal human pattern. You could wake up to unconsciousness and go to sleep to consciousness. Your consciousness could be completely devoted to dream consciousness, while the banalities of waking life are handled by an unconscious brain system. For a possible species this might be an efficient way to apportion consciousness. Why waste your consciousness resources on fact when fantasy is so much more interesting? Sleeping is the time to let your consciousness roam free; being awake can be consigned to unconscious brain mechanisms.  [2]    

 

  [1] It is an interesting question whether there is any viable definition of “awake” that defines it more positively. The dictionary editors seem to be of the opinion that asleep is the more basic concept, with awake defined simply as its negation. This is to treat awake as what I elsewhere call an essentially negative concept. I suspect they are right to do so: our concept of wakefulness just is the concept of not being asleep.

  [2] It certainly seems as if we have more imaginative consciousness during sleep than during wakefulness, so why not more of all kinds of consciousness?


Evolution of Language

 


 

 

Consider a hypothetical species with the following profile: they have evolved by mutation and natural selection a language of thought, an internal symbolic system of infinite scope and finite base. This they use as the medium of their thought. They have not yet, however, evolved a public language of communication, so their use of language is wholly internal. Let us suppose that this language, call it IL (internal language), is fully conscious to members of the species: the words, phrases, and sentences that comprise it pass through the consciousness of its users. It is not an unconscious language, employed by the brain, but a language that can be introspected in all its glory—rather as we can introspect the language we speak when we employ it in inner speech. Let us suppose that it is innate and universal. In addition to IL the species has a vocal signaling system V that they use to warn each other of predators or to express their emotions, but V is not a real language in the sense that IL is—just a few unstructured sounds. We can suppose that V evolved well before IL in some ancestor species and has been inherited from those ancestors. The signaling system V is a separate faculty from the language IL, both in its evolutionary origin and inner nature. V cannot express the full semantic content of IL and members of our hypothetical species don’t expect it to. The two faculties merely coexist.

            Now suppose that at a later time something novel happens: the species develops an external communicative language. This language EL is a sign language not a vocal language, and it recruits the earlier internal language of thought. It is, in fact, the externalization of IL, though it serves a different purpose—communication not cognition. The external language EL is capable of expressing all that is expressed in IL—the two languages are inter-translatable. EL is rather like a human natural language, except that it is paired with an internal language that is as accessible in its structure and lexicon as a natural language. It may be that EL will gradually diversify over time, so that there will come to be many versions of it, though all derive from IL. Notice that EL does not derive from the old signaling system V and isn’t even a vocal language; in no way does it share in the neural basis of the signaling system. This system predated both IL and EL, but those languages evolved without any reliance on it. We can say that IL was a preadaptation for EL and essential for its appearance, but the system V played no role in the origin of either. Exactly why and how IL evolved is not known, though it certainly greatly expanded cognitive power; and it is also a question why EL evolved, given that the species did perfectly well without it for thousands of years. In any case, IL came first and EL built upon it, without input from V.

We can imagine that speakers of EL might wonder whether this new capacity deserves to be called a language, since they originally applied this term to IL and take that to be the paradigm case of a language. The fact that it is public and embodied might for them count as reasons to withhold the name “language” from it, because for them a genuine language should be something interior and hidden. For them, a language is by definition a mental language not a public physical language, though they can appreciate the motivation for extending the notion to the external language. Some cautious souls might insist on putting the word in scare quotes when speaking of the external means of communication. And there may be bolder types who write books with titles like The Language of Communication or External Syntactic Structures, well aware that they are flouting linguistic convention and received opinion—for it is generally held that there is no real language but the language of thought and no syntax of anything outside the head. After all, they can introspect the language of thought within their own consciousness, and there is no doubt that a language is what it is (some skeptics maintain that we can never be certain that an external language exists, though it is apodictic that an internal language does).

This hypothetical species appears perfectly logically possible. It contrasts with another hypothetical species, which may not be logically possible, that first develops a public language and only later internalizes that language to produce inner speech; and that public language evolved from a prior signaling system like V. The former hypothetical species first develops a language of thought and then develops an external language of communication, with no contribution from its inherited signaling system; that system need never have existed in order for language to evolve. The latter hypothetical species models what many have believed about the origin of actual human language, namely that primitive vocal signaling came first and formed the basis for the evolution of sophisticated human language. But the former is also a coherent story that should be evaluated on its merits; it may, in fact, be the true theory. The question is an empirical one (though issues of logical possibility also arise); certainly we cannot just assume that the other theory is correct. It is not easy to see how we could set about answering the question, what with the remote origins of language and the difficulty of understanding thought, but there are some facts about human spoken language that are suggestive.  [1]

First, natural languages mirror thought, but they do not mirror animal signaling systems: thought has the complexity and structure of language, but signaling systems don’t. If we maintain that human languages somehow derive from primitive signaling systems, we have the problem of the poverty of the precursor: those systems just don’t have the internal structure that is present in a normal human language. But a system of inner thought, especially when coded in an internal language, has exactly the right kind and degree of structure to provide a platform for external language to develop. People tend to suppose that just because signaling systems and human languages are both vocal the one must have evolved from the other, but this is a superficial point of view—in my hypothetical species the external language is stipulated to be a sign language (visual) not a vocal language (auditory). It is not physical form that matters but constitutive structure—the formal object not its contingent physical medium.

There is a lot to say, and a lot that has been said, about these matters, but I don’t propose to delve into the evidence and arguments now; my point has been to set the issue up in a perspicuous manner by describing a stipulated hypothetical species. The question is whether that species models how things actually are (were) with humans. Is spoken language externalized symbolic thought or is it elaborated vocal signaling? Once we have accepted the prior existence of a language of thought, isn’t this the obvious place to look for the origin of spoken language? I would venture that the more advanced mammals all have fairly sophisticated thought but that their signaling systems fail to do justice to their thought processes—they can’t properly express what they think (this is why we always have to guess what dogs and cats want and think from their rather limited sounds and gestures). They thus lack what humans manifestly possess—a full-blown articulate external language. Why this should be is hard to say, but it is clearly a fact. We have IL, EL, and signaling; they have (primitive) IL and signaling. The idea that both thought and language evolved from signaling by some process of augmentation is hard to believe—like thinking that eyes might have evolved from fingernails. Of course, the linguistic behavior we observe in humans today incorporates vocal signaling, alongside the linguistic competence that derives from the internal language; but that doesn’t mean these have the same evolutionary origin or intrinsic structure. Natural languages as we find them are really hybrids of distinct systems with distinct evolutionary origins: they result from a combination of the initial language of thought, contingent embodiment in a specific sensory-motor system, and the ancient system of calls and cries that we inherited from our ancestors. These three systems are now interwoven in the phenomenon of human communication, but that doesn’t mean they don’t retain their separate identities. If I shout out the sentence “Your hair is on fire!” I exploit my vocal apparatus, my instinct to warn, and my internal competence in an abstract computational structure—all in one. But these are separate psychological systems with complex interrelations. Thus language as we use it can be both “cognitive” and “expressive”—reflecting its origins in inner thought as well as in more primitive forms of communication.

The naïve view of thought and language is that thought comes first, in the species and the child, and that we then go on to express it in spoken words. That view has been challenged, particularly by twentieth century thinkers, who invert the order of explanation: spoken words come first and from these thought develops. Thought is language internalized, instead of language being thought externalized. The naïve view seems to me to have more going for it, and my hypothetical species agrees. To them it is quite self-evident that a language of thought precedes and explains a language of communication and not vice versa.

 

Colin McGinn

  [1] This is complex contested territory; I intend only to skim over the subject here. For those familiar with modern linguistics, I am siding with Chomsky on these matters: my hypothetical species closely follows the view of language he has defended, most recently in Robert C. Berwick and Noam Chomsky, Why Only Us: Language and Evolution (MIT Press, 2016).


Semantic Levels

                                               

 

 


 

 

Anyone interested in language and perception will recognize that there are different levels of analysis of the phenomenon in question. In linguistics we will distinguish phonetic, syntactic, and semantic levels (possibly others). In studies of visual perception we will distinguish the conscious percept, internal computations, the image on the retina, the proximal and distal stimulus, and the object of perception. If we compare the levels that concern the external object in the two areas, we notice a striking discrepancy: we speak of a single object of reference but several objects of perception. What did I refer to with “London”? I referred to a certain city and nothing else. What did I see when I flew over London? I saw the city, but I also saw a part of the city, the surface of the city, a facet of the city. In perception we readily speak of direct and indirect objects of perception—as when we say that I directly saw an elephant’s head but only indirectly saw the whole elephant, given that only the head was visible. We have the idea that you can see one thing in virtue of seeing another (one of its parts), as you can touch an object in virtue of touching a part of it (or the surface of part of it). Thus there are multiple perceptual objects in any visual encounter, but we suppose there to be only a single object in acts of reference. When I refer to London I don’t refer to its parts, surface, or facets. In the case of vision we might start out naively speaking of “seeing London”, but we quickly recognize that there is complexity here and that the objects of seeing are many—hence the talk of direct and indirect objects of perception. But we don’t think this way about reference: reference singles out a whole object, not its parts or facets—hence there is no talk of direct and indirect reference analogous to the case of perception.

            We do suppose that the semantic level can be broken up into parts. In addition to the reference there is the sense, and sense itself can be broken into parts (character and content, say, or narrow and wide meaning). But we are not supposed to refer to sense—we express it. There is no dual reference, though the semantic level is composed of two sublevels. We might wonder whether the level of sense has all the complexity of the corresponding level in perception, i.e. the visual mode of presentation of the object. The latter divides into a central focal part and a peripheral part, but no one ever says that senses can be clearer at the center than at the periphery or that senses present many objects simultaneously. Senses seem simpler than percepts, as references seem simpler than perceptual objects. There is more structure in the one case than in the other. In particular, the semantic level concerned with reference to external objects is conceived as one-dimensional: we only refer to one thing at a time. Thus a theory of reference need only assign to words a single reference: the name “Saul Kripke” refers to Saul Kripke and to nothing else—not to his parts, surface, or facets. But the same is not true of seeing the man—here there are many objects of perception to be reckoned with. The obvious question is why the difference.

            Frege invoked the concept of an aspect in his classic discussion of sense and reference. The sense is said to contain an aspect–it presents or reveals or encodes an aspect. An aspect is an objective feature of the reference—something like the ensemble of properties apparent from a particular perspective. Objects can be seen from different perspectives and thus different aspects of the object are presented. This is what accounts for differences of sense (in central cases). A given object can present many aspects and sense can incorporate any of these. Now if we ask what relation a speaker has to an aspect, the answer will be that the aspect is presented to him, or perhaps that he apprehends the aspect. Maybe we can say that the name connotes an aspect—it somehow alludes to one. The aspect could be directly referred to, as in “the way the moon looks right now”: we can refer to the properties presented to us. The aspect belongs on the side of the world, along with the referent, not in the speaker’s mind. It is some kind of intentional object, in Brentano’s sense. So why not say that the aspect is referred to by the name? The name’s meaning identifies the aspect, alludes to it, specifies it—so doesn’t it refer to it? The sense of the name presents an object, but it also presents an aspect of the object; indeed, it presents the former by presenting the latter. I can be said to see an object and also to see an aspect of it, so why can’t I be said to refer to an object and also refer to an aspect of it? There is a kind of double denotation at work: the object and an aspect of the object are made objects of reference (intentionality). Even if the “folk theory” of reference doesn’t speak explicitly of this double denotation, closer analysis has revealed that there is more than just name and object; there is the aspect-presenting sense. So shouldn’t we revise the folk theory to take into account this further level of semantic structure? Wouldn’t that be good science?

            We can call the object itself the “secondary reference” and the aspect the “primary reference”. I choose “primary” for the aspect because it is in virtue of referring to the aspect that the object is referred to, just as in the case of perception. If we think of the aspect as captured by a definite description, then the aspect is clearly primary, since the description contains it en route to picking out an object. There is nothing to prevent us talking this way and it clarifies the structure of the referential semantic level. The aspect exists objectively alongside the object, not in the speaker’s mind, and our words (according to Frege) pick the aspect out; saying there is “reference” to the aspect is a small step. Thus “the Morning Star” denotes both Venus and the aspect of it presented in the morning. The proposition expressed by sentences containing the name will thus include both the object and the aspect. This allows us to explain the difference between “the Morning Star” and “the Evening Star” at the level of reference: different aspects referred to. A direct reference theory therefore permits a solution to Frege’s puzzle. No individual concepts need to be introduced or anything of a psychological nature; we just need to allow that reference can function like perception. We can invoke the apparatus of direct and indirect intentionality: I see an object by seeing an aspect of it, and I refer to an object by referring to an aspect of it. Whether we talk that way in our folk theory of reference is beside the point; the structure is there and needs to be articulated.  There may be possible perceivers who speak of perception in the simple way, as if there is nothing involved but the perceiver and the object, forgetting about perspective; but they would be wrong to insist that the object is the only thing that is perceived. We see objects by seeing aspects of them (surfaces of parts, roughly). It is the same with reference: there are nested levels of reference. Frege convinced us to accept that words can mean two kinds of thing—reference or sense—and now we should accept that words can refer to two kinds of thing—primary reference and secondary reference. This is scientific progress, though it may seem counterintuitive at first, as Frege’s theory of sense and reference did (e.g. to Russell).

            Suppose that a group of speakers came to the subject of semantics already equipped with Frege’s apparatus—they know all about aspects and objects. They refer explicitly to aspects all the time and are well aware of their role in determining the reference of names. They might introduce names by linking them to expressions that denote aspects: “Let ‘Hesperus’ be the name of the planet with that aspect”. They might even stipulate that “Hesperus” is to mean “the planet with that aspect”. Then the sense will include a specific reference to an aspect, so that we can readily speak of the primary and secondary reference of the name. These speakers have never entertained a single-reference folk semantic theory but have always allowed for double denotation. Shouldn’t we follow their example now that we have clearly discerned the semantic structure of object and aspect? Given Frege’s analysis, it is simply true that sentences containing names pick out both objects and aspects, and what point is there in denying that this “picking out” is the same as reference? When we speak of “the” denotation of a name we should really mean the pair of object and aspect. A theory of reference for names will accordingly assign both sorts of entity to a name. A Millian about names can agree with this double assignment, because the object itself enters the meaning of the name; but he can also solve Frege’s puzzle by appealing to the (implicit) reference to an aspect. We don’t need to bring in individual concepts or some such psychological thing; all the work can be done at the level of objective reference. It is just that we refer to more things than we realized. A name can have sense and references.
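As a rough sketch of how double denotation might be regimented (the data structure and the particular aspect descriptions below are my own illustrative assumptions, not Frege’s apparatus or any established formal semantics), a theory of reference on this view assigns each name a pair of references, and Frege’s puzzle is handled at the level of reference itself:

```python
# Illustrative sketch: each name is assigned a primary reference (an aspect)
# and a secondary reference (the object that presents that aspect).
from dataclasses import dataclass

@dataclass(frozen=True)
class Denotation:
    primary: str    # the aspect referred to
    secondary: str  # the object referred to

reference = {
    "Hesperus":   Denotation("the aspect Venus presents in the evening", "Venus"),
    "Phosphorus": Denotation("the aspect Venus presents in the morning", "Venus"),
}

# Co-referring names share a secondary reference but differ in primary
# reference -- which is where the cognitive difference between them lives.
assert reference["Hesperus"].secondary == reference["Phosphorus"].secondary
assert reference["Hesperus"].primary != reference["Phosphorus"].primary
```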

            The theory of reference therefore divides into two theories: object reference and aspect reference. How do these types of reference work? Should we have a causal theory for both, and how will the theories be related? Or should we have a description theory for both? We can see how aspects might figure in a description theory of object reference, but what about reference to aspects—is this mediated by description too? Can aspects have further aspects that figure in a theory of reference for them? Won’t that lead to an infinite regress? Is it possible that the basic theory will apply to aspect reference, with object reference carried by a description like “the object with aspect A”? Will the two theories be independent in the sense that neither determines the other, because different objects can instantiate the same aspect and different aspects can belong to the same object? Are aspects primarily referred to by means of demonstratives, so that object reference is carried by something of the form “the object with that aspect”? When a baby is baptized is the reference of the name fixed by “the human being with the babyish aspect now before us”? Should we say that we know aspects by acquaintance and objects only by description? What about the radical idea that all real reference is to aspects, with objects not strictly referred to at all? Frege spoke of the sense as “illuminating only a single aspect of the reference”, but should we infer that the sense doesn’t illuminate the reference as such? Is sense a conduit for aspects not objects, with the latter coming along for the ride? Is aspect reference where all the semantic action is? Maybe the object comes into the picture because we need it to account for truth conditions, but the basic semantic work goes into aspect reference. Senses tell us a lot about aspects, but relatively little about the objects that have the aspects. If we knew all about reference to aspects, what further task would the theory of reference have? All we would need to add is that the object of reference is simply the one that has the aspect already referred to. The theory of reference might be mainly a theory of aspect reference—as with the theory of perception. In any case, double denotation calls for a doubling of theory.

 

Colin McGinn 

 

 

 


Philosophy and Form

 

 


 

 

Philosophy is versatile as to form. We have the long-form book or monograph, the medium length article, the note or comment, and the epigram. There is also the dialogue form (Plato, Hume, Berkeley). Philosophy can be written as poetry (Lucretius, Eliot, Donne) or performed in a play (Shaw, Beckett, Stoppard). There are philosophical novels (Sartre, Camus, Murdoch). There are films with philosophical themes (The Matrix, Inception, Woody Allen). There is philosophical painting (Magritte, de Chirico, Escher). Music, architecture, and dance can have philosophical themes.  There is even philosophical comedy (Monty Python, Beyond the Fringe). I can’t think of another subject that is represented in so many different forms. Physics isn’t, despite its prestige and importance; nor is history or economics. Philosophical ideas appear in great expressive variety, whereas the ideas of other disciplines are more expressively limited. Why is that?

            It’s not because philosophy is easier or more accessible. It’s not because it has greater practical impact. It’s not because there are more philosophers than other types of savant. I think it’s because everyone is a philosopher and philosophy gets into everything. We just can’t help it: philosophy is in our blood. Not everyone studies it formally, but philosophical thoughts exist in everyone. They keep cropping up, inescapably. So they get expressed in many forms. Why it is that we are so philosophical a species is an interesting question: it is hardly practically pressing or even much fun. Couldn’t there be a species very like us but without a philosophical bone in their body? But with us it is natural and ubiquitous, so it spills out all over the place.

            In addition, philosophy lends itself to a multiplicity of forms: it is essentially malleable. It is not that when someone has a philosophical thought the form of its expression is immediately evident—it might lead to a poem or a painting or a story or discursive prose. This is partly because philosophy is an emotional subject. It’s hard to see how a thought in physics could be so open-ended in its form of expression. Conversation is a natural mode of philosophical expression, which is why the dialogue form is appropriate; but the same is not true of other disciplines. Conversations express our concerns and reveal our uncertainties—they are a human activity. Philosophy is a humanistic discipline in that sense, an expression of our human nature. Hence it takes the form of human modes of expression. Perhaps, too, there is a need to express oneself philosophically—in conversation, writing, art, and so on. The variety of forms reflects the desire to find expression for one’s philosophical self. One doesn’t express one’s “physical self” in acts of expressive physics—one simply aims to get the point across in equations and verbal formulations. But philosophical expression is far more a form of self-expression in its deep origins: this is why we speak of “my philosophy” (but not “my physics” or “my economics”). Philosophy is personal. And it seeks expression where it can.

            This suggests that the current style of academic philosophy is distorting and limiting. Academic philosophy today is almost exclusively confined to a small number of forms—chiefly the article and the book. These are written in a certain professional style (which I need not characterize, but not exactly scintillating). But philosophy itself, as a human passion or obsession, is not inherently tied to this type of form. Institutional norms have determined the dominant philosophical forms today, not the living essence of the subject. I am not suggesting that we abandon well-reasoned philosophical prose in favor of poetry and prancing around, but I do think we should recognize the variety of forms that are natural to philosophy. We shouldn’t suppose that the way we write philosophy within the academy today is the only acceptable way to do it. That diminishes our ability to appeal to a wider audience and doesn’t do justice to what philosophy deeply is. In trying to compete with other disciplines within the academy on their terms, we have forgotten that philosophy can be clothed in many forms.

 

Colin McGinn


Knowledge By Necessity

                                               

 

 

 


 

 

We can know that a proposition is true and we can know that a proposition is necessary, but can we know that a proposition is true by knowing that it is necessary? Consider a simple tautology like “Hesperus = Hesperus”: don’t you know this is true by seeing that it is necessary? If someone asks you why you think it is true, you will answer, “It couldn’t be otherwise, so it has to be true” or words to that effect. The sentence is clearly necessary, so you can infer that it must be true. You treat the modal proposition as a premise to derive the non-modal proposition. The former proposition acts as the ground of your knowledge of the latter proposition. You can tell just from the form of the proposition that it must be true, and thus it is true. You derive an “is” from a “must”. You really can’t help seeing that the sentence expresses a necessity, given that you grasp its meaning, and truth trivially follows. We can call this “necessity-based” knowledge: knowledge that results from, or is bound up with, modal knowledge. How else could you know the proposition to be true—not by empirical observation, surely? You know it by analysis of meaning: the meaning is such as to make the sentence necessary. The sentence has to be true in all possible worlds, given its meaning, and so it is true in the actual world—truth is a consequence of necessity. It is immediately obvious to you that the sentence is necessary—and so it must also be true. If someone couldn’t see that “Hesperus = Hesperus” is necessary, you would wonder whether he had understood it right. Maybe someone could fail to see that necessity entails truth and hence not draw the inference; but how could he fail to see that “Hesperus = Hesperus” is a trivial tautology, in contrast to “Hesperus = Phosphorus”? The sentence is self-evidently a necessary truth.
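The inference just described can be put formally; the following is a minimal rendering, assuming the standard T axiom of modal logic as the bridge from “must” to “is”:

```latex
% Axiom T: whatever is necessary is true.
\[ \Box p \rightarrow p \]
% Applied to the tautology above: from knowledge of the necessity,
% knowledge of the truth follows.
\[ \Box(\mathrm{Hesperus} = \mathrm{Hesperus}) \;\therefore\; \mathrm{Hesperus} = \mathrm{Hesperus} \]
```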

            It thus appears that some knowledge of truth is necessity-based: knowledge of the truth involves knowledge of the necessity, with the latter acting as a premise. Sometimes people believe things to be true because they perceive them to be necessary. You know very well that Hesperus is necessarily identical to Hesperus—how could anybody not?—and so you are entitled to believe that “Hesperus is Hesperus” is true. For analytic truth generally the same epistemic situation obtains: you can see the sentence has to be true given what it means, so it follows that it is true. Even if the move from necessity to truth is not valid in every case (e.g. ethical sentences), it is in some cases. We can thus derive non-modal knowledge from modal knowledge. But clearly not all knowledge is like this—mostly you can’t come to know truths by perceiving necessities. You can’t come to know the truth of “Hesperus = Phosphorus” that way: here you have to investigate the empirical world. The sentence is necessary, but you can’t use this necessity to decide that the sentence is true. You may know that if it is true then it is a necessary truth, but you don’t know that it is true just by understanding it, so you can’t use its necessity as a premise in arguing that it is true. You need to appeal to observation to show that the sentence is true—as you do for any other empirical proposition. Here your knowledge is observation-based not necessity-based—observable facts about planetary motions not the analysis of meaning. You won’t cite tautology as a reason for truth in this case, but you will in the other case. You won’t argue that there is no alternative to being true for “Hesperus = Phosphorus”. Clearly you can’t argue that “Hesperus = Phosphorus” follows from “Possibly Hesperus = Phosphorus”, but that is the only modal truth you have at your disposal in your current state of knowledge, unlike the case of “Hesperus is Hesperus”. So you can’t take a short cut to knowledge of truth by relying on an evident necessity—you have to resort to arduous empirical investigation. You may wish you knew that the sentence is necessary, so as to spare yourself the epistemic effort, but that is precisely the knowledge you lack in this case, since the expressed proposition in question refuses to disclose this fact. We resort to observation when our modal sense cannot detect necessity, which is most of the time. Necessity-based knowledge is quick and easy, unlike the other kind.

            I have been leading up to the following thesis: a priori knowledge is knowledge by necessity while a posteriori knowledge is not knowledge by necessity. Here we define the a priori positively and the a posteriori negatively, unlike the traditional definition in terms of knowledge by experience versus knowledge not by experience. This gives us a result for which we have pined: a positive account of the nature of a priori knowledge. The two definitions map onto each other in an obvious way: knowledge by necessity is not knowledge by experience, and knowledge by experience is not knowledge by necessity. That is, we don’t come to know necessities by experiencing them, and necessities are no use to us in the acquisition of empirical knowledge. Necessity plays a role in acquiring a priori knowledge, but it plays no role in acquiring a posteriori knowledge. To have a crisp formulation, I shall say that a priori knowledge is “by necessity” and a posteriori knowledge is “by causality”—assuming a broadly causal account of perception and empirical knowledge. We can also say that a priori knowledge is knowledge grounded in our modal faculty, while a posteriori knowledge is knowledge grounded in perception and inference—thus comparing different epistemic faculties. But I think it is illuminating to keep the simpler formulation in mind, because it directs our attention to the metaphysics of the matter: modality in one case and causality in the other. The world causally impinges on us and we thereby form knowledge of it, and it also presents us with necessities that don’t act as causes—thus we obtain two very different kinds of knowledge. The mechanism is quite different in the two cases—the process, the structure.
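
            As a rough schema (offered only as a gloss on the two formulations, not as a full analysis):

\[
S \text{ knows } p \text{ a priori} \iff \exists q\,\big(\Box q \ \wedge\ S \text{ recognizes that } \Box q \ \wedge\ q \vdash p\big)
\]
\[
S \text{ knows } p \text{ a posteriori} \iff S\text{'s belief that } p \text{ is grounded causally, via perception and inference}
\]

Here q may simply be p itself, as in the tautology case, or a distinct necessary truth, as in the cases of the contingent a priori discussed below.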

            Is the thesis true? This is a big question and I shall have to be brief and dogmatic. There are two sorts of case to consider: a priori necessities and a priori contingencies. I started with an example of a simple tautology because here the necessity is inescapable—you can’t help recognizing it. Hence knowledge of necessity is guaranteed, part of elementary linguistic understanding. But not all a priori knowledge is like that, though tautology has some claim to be a paradigm of the a priori. What about arithmetical knowledge? If it is synthetic a priori, then we can’t say that knowledge of mathematical necessity results from linguistic analysis alone. Nevertheless, it is plausible that we do appreciate that all mathematical truths are necessary; we know that this is how mathematical reality is generally. When we come to know that a mathematical proposition is true we thereby grasp its necessity: a proof demonstrates this necessity. Mathematics is arguably more about necessity than about truth: we can doubt that mathematical sentences express truths (we might be mathematical fictionalists), but we don’t doubt that mathematics cannot be otherwise—it has some sort of inevitability. We might decide that mathematical sentences have only assertion conditions, never truth conditions, but we won’t abandon the idea that some sort of necessity clings to them (though we may be deflationary about that necessity). Modal intuition suffuses our understanding of mathematics, and this can function in the production of mathematical knowledge. I see that 3 plus 5 has to be 8, so I accept that 3 plus 5 is 8. Mathematical facts are inescapable, fixed for all time, so mathematical truths are bound to be true: I appreciate the necessity, so I accept the truth. The epistemology of mathematics is essentially modal and this plays a role in the formation of mathematical beliefs: in knowing necessities we know truths—and that is the mark of the a priori.  [1]
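
            The arithmetical example fits the same schema (a sketch):

\[
\frac{\Box(3 + 5 = 8)}{3 + 5 = 8}
\]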

            Much the same can be said of logic, narrowly or widely construed. You cannot fail to register the necessity of a logical law, and you believe the law because you grasp its necessity. Nothing could be both F and not-F, and so nothing is. The necessity stares you in the face, as clear as daylight, and because of this you come to know the law—the knowledge is by necessity. Accordingly, it is a priori. It isn’t that you can believe in the truth of the law and remain agnostic about its modal status (“I believe that nothing is both F and not-F, but I’ve never thought about whether this is necessary or contingent”). Your belief in the law is bound up with your belief in its necessity; thus logical knowledge is a priori according to the proposed definition. The same goes for a priori truths about colors: “Red is closer to orange than to blue”, “There is no transparent white”, etc. Here again the necessity is what stands out: we know these propositions to be true because we perceive their necessity—not because we have conducted an empirical investigation of colors. Accordingly, they are a priori. In all the cases of the a priori in which the proposition is necessary, this necessity plays an epistemic role in accepting the proposition; it is not something that lies outside of the epistemic process. It is not something that is irrelevant to why we accept the proposition. We recognize the necessity and that recognition is what leads us to accept the proposition. If we accepted the proposition for other reasons (testimony, overall fit with empirical science), then our knowledge would be a posteriori; but granted that our acceptance is necessity-based, the knowledge is a priori. Being known a priori is being known by necessity: the involvement of modal judgment is what defines the category.  [2] By contrast, a posteriori knowledge does not involve modal judgment—you could achieve it and have no modal faculty at all. The basis of your knowledge is not any kind of modal insight, but observation and inference (induction, hypothesis formation, inference to the best explanation). You don’t have modal reasons for believing that the earth revolves around the sun, but you do have modal reasons for believing that red is closer to orange than to blue—viz. that it couldn’t be otherwise. Since things couldn’t be otherwise, they must be as stated, and so what is stated must be true. The modal reasoning is not a mere add-on to knowledge of a priori truths but integral to it.
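
            Displayed in the same way for the logical law (again only a sketch):

\[
\frac{\Box\,\neg\exists x\,(Fx \wedge \neg Fx)}{\neg\exists x\,(Fx \wedge \neg Fx)}
\]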

            It may be thought that the contingent a priori will scuttle the necessity theory, since the proposition known is not even a necessary truth; but actually it is not difficult to accommodate these cases with a little ingenuity. One line we could take is just to deny the contingent a priori, and good grounds can be given for that; but it is more illuminating to see how we could extend the necessity theory to cover such cases. Three examples may be given: reference fixing, the Cogito, and certain indexical statements. What we need to know is whether there are necessities that figure as premises in these cases, even if these necessities are not identical to the conclusion drawn from them. Thus in the case of fixing the reference of a name by means of a description (e.g. the meter rod case) we can say the following: “No matter what the length of this rod may be, the name ‘one meter’ will designate it”. If I fix the reference of a name “a” by “the F”, then no matter which object is denoted by that description it will be named “a”. This doesn’t imply that the object named is necessarily F; it says merely that the name I introduce is necessarily tied in its reference to the description I link it to. Because we recognize this necessity we can infer that a is the F (no matter who or what the F is). We don’t need to undertake any empirical investigation to know that a is the F, since it follows merely from the act of linguistic stipulation—and that act embodies a necessary truth (“the person designated by ‘a’ is necessarily the person designated by ‘the F’”).
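
            The structure of the reference-fixing case might be displayed as follows (a sketch; the quotational formulation is only one way of putting it):

\[
\frac{\Box\,\big(\text{the referent of ``}a\text{''} = \text{the referent of ``the }F\text{''}\big)}{a \text{ is the } F}
\qquad \text{but not} \qquad \Box\,(a \text{ is the } F)
\]

The modal premise created by the stipulation is distinct from the contingent conclusion, which is why the knowledge counts as a priori even though the proposition known is not itself necessary.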

In the case of the Cogito it is true that the conclusion is not a necessary truth (since I don’t necessarily exist), but there is a necessary truth lurking behind this proposition, namely “Necessarily anyone who thinks exists”. It is a necessary truth that thinking implies existence (according to the Cogito), but it is not a necessary truth that the individual thinker exists—he might not have existed. I know that I exist because I know that I think and I know that anything that thinks necessarily exists. Thus I use a modal premise to infer a non-modal conclusion: from “Necessarily anything that thinks exists” to “I exist”. That is my ground for believing in my existence, according to the Cogito, and it is a necessary truth. Thus the knowledge derived is a priori, according to the definition. I don’t make empirical observations of myself to determine whether I exist; I rely on a necessary truth about thought and existence, namely that you can’t think without existing. I know that I exist (contingent truth) based on the premise that anything that thinks exists (necessary truth), so my knowledge essentially involves the recognition of a necessity.
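
            A minimal formalization of the Cogito inference (a sketch):

\[
\frac{\Box\,\forall x\,(Tx \to Ex) \qquad T\mathrm{i}}{E\mathrm{i}}
\]

Here Tx abbreviates “x thinks”, Ex abbreviates “x exists”, and i denotes me; the first premise is necessary, the second contingent, and the conclusion contingent yet reached by recognizing a necessity.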

Thirdly, we have “I am here now”: this expresses a contingent truth whenever uttered but is generally held to be a priori. I know a priori that I am here now, but it is contingent that I am here now. But again there is a necessary truth in the offing, namely: “Anyone who utters the words ‘I am here now’ says something true”. By knowing this necessary truth I know that I must be speaking the truth when I utter those words, but my utterance expresses a contingent truth. So I rely on a necessary truth to ground my belief in a contingent truth. Without that necessary truth I would not know what I know, i.e. that my current utterance of “I am here now” is true. Again, the case comes out as a priori according to the definition; we just have to recognize that the modal premise need not coincide with the conclusion. We can have a priori knowledge of a contingent truth by inferring it from a distinct necessary truth. So we have found no counterexamples to the thesis that all a priori knowledge is knowledge by necessity.
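
            Schematically, with the metalinguistic premise spelled out (a sketch):

\[
\frac{\Box\,\forall u\,\big(u \text{ is an utterance of ``I am here now''} \to u \text{ is true}\big) \qquad u_0 \text{ is my present utterance of ``I am here now''}}{u_0 \text{ is true}}
\]

From the truth of my present utterance I recover the contingent conclusion that I am here now; the necessary premise, not the contingent conclusion, carries the a priori warrant.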

            I have assumed so far that the type of necessity at issue is metaphysical necessity, not epistemic necessity. This is the kind of necessity we recognize when we come to know something a priori. But we could formulate the main claims of this essay using the concept of epistemic necessity. For simplicity, just think of this as certainty, construed as a normative not a psychological concept—not what people are actually certain of but what they ought to be certain of. Then we could say that when I am presented with a tautology I recognize that it is certain and infer from this that it must be true, and similarly for other cases of a priori knowledge. This approach converges with the account based on metaphysical necessity, because certainty and necessity correlate (more or less) in cases of a priori truth. But I prefer the metaphysical formulation because it connects an epistemic notion with a metaphysical notion—a priori knowledge with objective necessity. When I know something a priori I know it by recognizing the objective trait of necessity not a psychological trait of certainty (however normatively grounded). Thus the epistemological distinction has a metaphysical correlate or counterpart. To know something a priori is to know it by detecting an objective fact of necessity, though we may also be certain of what we thereby know. In contrast, to know something a posteriori is not to know it by necessity detection but by perception and inference (by causality). This is a deep and sharp distinction, and it at no point relies on a purely negative characterization of what we are trying to define. We really do know things in two radically different ways: by apprehending necessity or by registering causality.            

 

 

  [1] Perhaps part of the attraction of the view that mathematics consists of tautologies is that it comports with the idea that our knowledge of mathematics involves knowledge of necessities. The necessities occupy the epistemic foreground.

  [2] Given this account of a priori knowledge, it is doubtful that animals have it, because they lack modal sensitivity—they don’t perceive that propositions are necessary. If you present an animal with a tautology, it will stare at you blankly. They may have innate knowledge, but they don’t have a priori knowledge. Not even the most intelligent ape has ever thought that water is necessarily H2O or that the origin of an ape is an essential property of it. Animals have no knowledge of metaphysical necessity. This explains their lack of a priori knowledge.
