Consciousness and Sleep

                                               

 


Why do we say that a person is unconscious when asleep but conscious when awake? During sleep we often dream and dreaming is a conscious activity of mind, so why don’t we say that we are often conscious while asleep? Why do we speak as if sleeping and consciousness are opposed? Similarly, why do we suppose that being awake rules out being unconscious? It doesn’t rule out the existence of unconscious mental activity, so why should it rule out being simply unconscious—no conscious mental activity at all? Couldn’t a person be awake in the normal sense—mobile, eyes open, and alert—and yet undergo a break in consciousness, a gap in the stream? Couldn’t wakefulness be punctuated by periods of unconsciousness, as sleep is punctuated by periods of consciousness? Couldn’t someone go into blindsight mode every once in a while and still be said to be awake?

            Part of the problem is defining what it is to be awake or asleep. The OED defines “sleep” as follows: “a regularly occurring condition of body and mind in which the nervous system is inactive, the eyes closed, the postural muscles relaxed, and consciousness practically suspended”; while “awake” is defined simply as “not asleep”.  [1] But couldn’t these conditions for sleep be met and yet the person (or animal) is still awake? What if I relax on my bed with my eyes closed and think only of sheep—aren’t I still awake? What if I take a drug that cuts off sense perception, paralyzes me, and leaves me with only the most minimal consciousness—could I not still be awake?  And what about the clause about consciousness being “practically suspended”? It doesn’t seem at all suspended during an intense nightmare or an ardent erotic dream. I suspect the editors mean “consciousness of the environment”, but that is neither necessary nor sufficient for being asleep: you can be asleep and aware of outside stimuli to some degree (hence the qualification “practically”), and you can be wide awake while having no awareness of the environment but only of yourself. Clearly, we have poor definitions of what sleep is, and likewise wakefulness. We don’t have a good analysis of what these states involve.

            The point remains that the logical connections between “asleep” and “unconscious” and “awake” and “conscious” are opaque at best. There seem in fact to be no logical connections here: we can experience states of consciousness during sleep, i.e. dreams, and we could go consciously blank while awake, i.e. not asleep. Being asleep is roughly a state involving being relatively unreceptive to the environment, but that doesn’t preclude a vivid inner consciousness such as occurs in dreams. Being awake is roughly being alert to the environment, immersed in it, but that doesn’t preclude moments of conscious blankness, as in blindsight. We say of simple organisms such as insects that they are awake, i.e. not asleep, but we don’t take this to entail that they are sentient beings. So there appears to be no analytic or necessary link between the awake/asleep distinction and the conscious/unconscious distinction. Indeed we could invert our usage and incur no charge of logical infelicity: we could say that we are conscious while asleep (though not all the time) and that we are unconscious while awake (though not all the time)—that is, we could say the latter if we actually had periods of conscious blankness while awake. Just as we are sometimes conscious during sleep, so that sleep is not necessarily a time of unconsciousness, so we could sometimes be mere zombies while awake, so that being awake is not necessarily a time of (uninterrupted) consciousness. You can be lying down in the dark fast asleep and have conscious experiences inwardly, and you could be moving around the world completely awake (“not asleep”) and be experiencing nothing consciously at all—you are in your robot phase. As things are, we are (apparently) conscious during the entirety of wakefulness—though there might be brief unconscious interstices—but we could be only intermittently conscious, perhaps so as to relieve the consciousness centers of our brain. Thus it would be arbitrary to say that we are unconscious while asleep and conscious while awake; rather, we are conscious and unconscious in both conditions (though not at the same time). I therefore recommend that we stop speaking as we normally do: we can be asleep or awake (whatever exactly these conditions amount to) and we can be conscious or unconscious, but there is no logical connection between these two dichotomies. Someone could in principle be continuously conscious during sleep by dreaming all the while, and could be continuously unconscious while awake by going into total zombie mode. Thus you could be completely conscious while asleep at night and completely unconscious while awake during the day—you could invert the normal human pattern. You could wake up to unconsciousness and go to sleep to consciousness. Your consciousness could be completely devoted to dream consciousness, while the banalities of waking life are handled by an unconscious brain system. For a possible species this might be an efficient way to apportion consciousness. Why waste your consciousness resources on fact when fantasy is so much more interesting? Sleeping is the time to let your consciousness roam free; being awake can be consigned to unconscious brain mechanisms.  [2]    

 

  [1] It is an interesting question whether there is any viable definition of “awake” that defines it more positively. The dictionary editors seem to be of the opinion that asleep is the more basic concept, with awake defined simply as its negation. This is to treat awake as what I elsewhere call an essentially negative concept. I suspect they are right to do so: our concept of wakefulness just is the concept of not being asleep.

  [2] It certainly seems as if we have more imaginative consciousness during sleep than during wakefulness, so why not more of all kinds of consciousness?


Evolution of Language


Consider a hypothetical species with the following profile: they have evolved by mutation and natural selection a language of thought, an internal symbolic system of infinite scope and finite base. This they use as the medium of their thought. They have not yet, however, evolved a public language of communication, so their use of language is wholly internal. Let us suppose that this language, call it IL (internal language), is fully conscious to members of the species: the words, phrases, and sentences that comprise it pass through the consciousness of its users. It is not an unconscious language, employed by the brain, but a language that can be introspected in all its glory—rather as we can introspect the language we speak when we employ it in inner speech. Let us suppose that it is innate and universal. In addition to IL the species has a vocal signaling system V that they use to warn each other of predators or to express their emotions, but V is not a real language in the sense that IL is—just a few unstructured sounds. We can suppose that V evolved well before IL in some ancestor species and has been inherited from those ancestors. The signaling system V is a separate faculty from the language IL, both in its evolutionary origin and inner nature. V cannot express the full semantic content of IL and members of our hypothetical species don’t expect it to. The two faculties merely coexist.

            Now suppose that at a later time something novel happens: the species develops an external communicative language. This language EL is a sign language not a vocal language, and it recruits the earlier internal language of thought. It is, in fact, the externalization of IL, though it serves a different purpose—communication not cognition. The external language EL is capable of expressing all that is expressed in IL—the two languages are inter-translatable. EL is rather like a human natural language, except that it is paired with an internal language that is as accessible in its structure and lexicon as a natural language. It may be that EL will gradually diversify over time, so that there will come to be many versions of it, though all derive from IL. Notice that EL does not derive from the old signaling system V and isn’t even a vocal language; in no way does it share in the neural basis of the signaling system. This system predated both IL and EL, but those languages evolved without any reliance on it. We can say that IL was a preadaptation for EL and essential for its appearance, but the system V played no role in the origin of either. Exactly why and how IL evolved is not known, though it certainly greatly expanded cognitive power; and it is also a question why EL evolved, given that the species did perfectly well without it for thousands of years. In any case, IL came first and EL built upon it, without input from V.

We can imagine that speakers of EL might wonder whether this new capacity deserves to be called a language, since they originally applied this term to IL and take that to be the paradigm case of a language. The fact that it is public and embodied might for them count as a reason to withhold the name “language” from it, because for them a genuine language should be something interior and hidden. For them, a language is by definition a mental language not a public physical language, though they can appreciate the motivation for extending the notion to the external language. Some cautious souls might insist on putting the word in scare quotes when speaking of the external means of communication. And there may be bolder types who write books with titles like The Language of Communication or External Syntactic Structures, well aware that they are flouting linguistic convention and received opinion—for it is generally held that there is no real language but the language of thought and no syntax of anything outside the head. After all, they can introspect the language of thought within their own consciousness, and there is no doubt that a language is what it is (some skeptics maintain that we can never be certain that an external language exists, though it is apodictic that an internal language does).

This hypothetical species appears perfectly logically possible. It contrasts with another hypothetical species, which may not be logically possible, that first develops a public language and only later internalizes that language to produce inner speech; and that public language evolved from a prior signaling system like V. The former hypothetical species first develops a language of thought and then develops an external language of communication, with no contribution from its inherited signaling system; that system need never have existed in order for language to evolve. The latter hypothetical species models what many have believed about the origin of actual human language, namely that primitive vocal signaling came first and formed the basis for the evolution of sophisticated human language. But the former is also a coherent story that should be evaluated on its merits; it may, in fact, be the true theory. The question is an empirical one (though issues of logical possibility also arise); certainly we cannot just assume that the other theory is correct. It is not easy to see how we could set about answering the question, what with the remote origins of language and the difficulty of understanding thought, but there are some facts about human spoken language that are suggestive.  [1]

First, natural languages mirror thought, but they do not mirror animal signaling systems: thought has the complexity and structure of language, but signaling systems don’t. If we maintain that human languages somehow derive from primitive signaling systems, we have the problem of the poverty of the precursor: those systems just don’t have the internal structure that is present in a normal human language. But a system of inner thought, especially when coded in an internal language, has exactly the right kind and degree of structure to provide a platform for external language to develop. People tend to suppose that just because signaling systems and human languages are both vocal the one must have evolved from the other, but this is a superficial point of view—in my hypothetical species the external language is stipulated to be a sign language (visual) not a vocal language (auditory). It is not physical form that matters but constitutive structure—the formal object not its contingent physical medium.

There is a lot to say, and a lot that has been said, about these matters, but I don’t propose to delve into the evidence and arguments now; my point has been to set the issue up in a perspicuous manner by describing a stipulated hypothetical species. The question is whether that species models how things actually are (were) with humans. Is spoken language externalized symbolic thought or is it elaborated vocal signaling? Once we have accepted the prior existence of a language of thought, isn’t this the obvious place to look for the origin of spoken language? I would venture that the more advanced mammals all have fairly sophisticated thought but that their signaling systems fail to do justice to their thought processes—they can’t properly express what they think (this is why we always have to guess what dogs and cats want and think from their rather limited sounds and gestures). They thus lack what humans manifestly possess—a full-blown articulate external language. Why this should be is hard to say, but it is clearly a fact. We have IL, EL, and signaling; they have (primitive) IL and signaling. The idea that both thought and language evolved from signaling by some process of augmentation is hard to believe—like thinking that eyes might have evolved from fingernails. Of course, the linguistic behavior we observe in humans today incorporates vocal signaling, alongside the linguistic competence that derives from the internal language; but that doesn’t mean these have the same evolutionary origin or intrinsic structure. Natural languages as we find them are really hybrids of distinct systems with distinct evolutionary origins: they result from a combination of the initial language of thought, contingent embodiment in a specific sensory-motor system, and the ancient system of calls and cries that we inherited from our ancestors. These three systems are now interwoven in the phenomenon of human communication, but that doesn’t mean they don’t retain their separate identities. If I shout out the sentence “Your hair is on fire!” I exploit my vocal apparatus, my instinct to warn, and my internal competence in an abstract computational structure—all in one. But these are separate psychological systems with complex interrelations. Thus language as we use it can be both “cognitive” and “expressive”—reflecting its origins in inner thought as well as in more primitive forms of communication.

The naïve view of thought and language is that thought comes first, in the species and the child, and that we then go on to express it in spoken words. That view has been challenged, particularly by twentieth century thinkers, who invert the order of explanation: spoken words come first and from these thought develops. Thought is language internalized, instead of language being thought externalized. The naïve view seems to me to have more going for it, and my hypothetical species agrees. To them it is quite self-evident that a language of thought precedes and explains a language of communication and not vice versa.

 

Colin McGinn

  [1] This is complex contested territory; I intend only to skim over the subject here. For those familiar with modern linguistics, I am siding with Chomsky on these matters: my hypothetical species closely follows the view of language he has defended, most recently in Robert C. Berwick and Noam Chomsky, Why Only Us: Language and Evolution (MIT Press, 2016).


Semantic Levels

                                               

 

 


Anyone interested in language and perception will recognize that there are different levels of analysis of the phenomenon in question. In linguistics we will distinguish phonetic, syntactic, and semantic levels (possibly others). In studies of visual perception we will distinguish the conscious percept, internal computations, the image on the retina, the proximal and distal stimulus, and the object of perception. If we compare the levels that concern the external object in the two areas, we notice a striking discrepancy: we speak of a single object of reference but several objects of perception. What did I refer to with “London”? I referred to a certain city and nothing else. What did I see when I flew over London? I saw the city, but I also saw a part of the city, the surface of the city, a facet of the city. In perception we readily speak of direct and indirect objects of perception—as when we say that I directly saw an elephant’s head but only indirectly saw the whole elephant, given that only the head was visible. We have the idea that you can see one thing in virtue of seeing another (one of its parts), as you can touch an object in virtue of touching a part of it (or the surface of part of it). Thus there are multiple perceptual objects in any visual encounter, but we suppose there to be only a single object in acts of reference. When I refer to London I don’t refer to its parts, surface, or facets. In the case of vision we might start out naively speaking of “seeing London”, but we quickly recognize that there is complexity here and that the objects of seeing are many—hence the talk of direct and indirect objects of perception. But we don’t think this way about reference: reference singles out a whole object not its parts or facets—hence there is no talk of direct and indirect reference analogous to the case of perception.

            We do suppose that the semantic level can be broken up into parts. In addition to the reference there is the sense, and sense itself can be broken into parts (character and content, say, or narrow and wide meaning). But we are not supposed to refer to sense—we express it. There is no dual reference, though the semantic level is composed of two sublevels. We might wonder whether the level of sense has all the complexity of the corresponding level in perception, i.e. the visual mode of presentation of the object. The latter divides into a central focal part and a peripheral part, but no one ever says that senses can be clearer at the center than at the periphery or that senses present many objects simultaneously. Senses seem simpler than percepts, as references seem simpler than perceptual objects. There is more structure in the one case than in the other. In particular, the semantic level concerned with reference to external objects is conceived as one-dimensional: we only refer to one thing at a time. Thus a theory of reference need only assign to words a single reference: the name “Saul Kripke” refers to Saul Kripke and to nothing else—not to his parts, surface, or facets. But the same is not true of seeing the man—here there are many objects of perception to be reckoned with. The obvious question is why the difference.

            Frege invoked the concept of an aspect in his classic discussion of sense and reference. The sense is said to contain an aspect—it presents or reveals or encodes an aspect. An aspect is an objective feature of the reference—something like the ensemble of properties apparent from a particular perspective. Objects can be seen from different perspectives and thus different aspects of the object are presented. This is what accounts for differences of sense (in central cases). A given object can present many aspects and sense can incorporate any of these. Now if we ask what relation a speaker has to an aspect, the answer will be that the aspect is presented to him, or perhaps that he apprehends the aspect. Maybe we can say that the name connotes an aspect—it somehow alludes to one. The aspect could be directly referred to, as in “the way the moon looks right now”: we can refer to the properties presented to us. The aspect belongs on the side of the world, along with the referent, not in the speaker’s mind. It is some kind of intentional object, in Brentano’s sense. So why not say that the aspect is referred to by the name? The name’s meaning identifies the aspect, alludes to it, specifies it—so doesn’t it refer to it? The sense of the name presents an object, but it also presents an aspect of the object; indeed, it presents the former by presenting the latter. I can be said to see an object and also to see an aspect of it, so why can’t I be said to refer to an object and also refer to an aspect of it? There is a kind of double denotation at work: the object and an aspect of the object are made objects of reference (intentionality). Even if the “folk theory” of reference doesn’t speak explicitly of this double denotation, closer analysis has revealed that there is more than just name and object; there is the aspect-presenting sense. So shouldn’t we revise the folk theory to take into account this further level of semantic structure? Wouldn’t that be good science?

            We can call the object itself the “secondary reference” and the aspect the “primary reference”. I choose “primary” for the aspect because it is in virtue of referring to the aspect that the object is referred to, just as in the case of perception. If we think of the aspect as captured by a definite description, then the aspect is clearly primary, since the description contains it en route to picking out an object. There is nothing to prevent us talking this way and it clarifies the structure of the referential semantic level. The aspect exists objectively alongside the object, not in the speaker’s mind, and our words (according to Frege) pick the aspect out; saying there is “reference” to the aspect is a small step. Thus “the Morning Star” denotes both Venus and the aspect of it presented in the morning. The proposition expressed by sentences containing the name will thus include both the object and the aspect. This allows us to explain the difference between “the Morning Star” and “the Evening Star” at the level of reference: different aspects referred to. A direct reference theory therefore permits a solution to Frege’s puzzle. No individual concepts need to be introduced or anything of a psychological nature; we just need to allow that reference can function like perception. We can invoke the apparatus of direct and indirect intentionality: I see an object by seeing an aspect of it, and I refer to an object by referring to an aspect of it. Whether we talk that way in our folk theory of reference is beside the point; the structure is there and needs to be articulated.  There may be possible perceivers who speak of perception in the simple way, as if there is nothing involved but the perceiver and the object, forgetting about perspective; but they would be wrong to insist that the object is the only thing that is perceived. We see objects by seeing aspects of them (surfaces of parts, roughly). It is the same with reference: there are nested levels of reference. Frege convinced us to accept that words can mean two kinds of thing—reference or sense—and now we should accept that words can refer to two kinds of thing—primary reference and secondary reference. This is scientific progress, though it may seem counterintuitive at first, as Frege’s theory of sense and reference did (e.g. to Russell).

            Suppose that a group of speakers came to the subject of semantics already equipped with Frege’s apparatus—they know all about aspects and objects. They refer explicitly to aspects all the time and are well aware of their role in determining the reference of names. They might introduce names by linking them to expressions that denote aspects: “Let ‘Hesperus’ be the name of the planet with that aspect”. They might even stipulate that “Hesperus” is to mean “the planet with that aspect”. Then the sense will include a specific reference to an aspect, so that we can readily speak of the primary and secondary reference of the name. These speakers have never entertained a single-reference folk semantic theory but have always allowed for double denotation. Shouldn’t we follow their example now that we have clearly discerned the semantic structure of object and aspect? Given Frege’s analysis, it is simply true that sentences containing names pick out both objects and aspects, and what point is there in denying that this “picking out” is the same as reference? When we speak of “the” denotation of a name we should really mean the pair of object and aspect. A theory of reference for names will accordingly assign both sorts of entity to a name. A Millian about names can agree with this double assignment, because the object itself enters the meaning of the name; but he can also solve Frege’s puzzle by appealing to the (implicit) reference to an aspect. We don’t need to bring in individual concepts or some such psychological thing; all the work can be done at the level of objective reference. It is just that we refer to more things than we realized. A name can have sense and references.
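            Merely by way of regimentation (the notation is stipulative, a sketch rather than a finished semantics), the double-denotation proposal might be put as follows:

```latex
% A sketch of double denotation: a name n is assigned a pair of referents,
% the primary reference (an aspect A) and the secondary reference (an object o).
\[
\mathrm{Ref}(n) = \langle A,\ o \rangle
\]
% Frege's puzzle handled at the level of reference: the two names share
% their secondary reference but differ in their primary reference.
\[
\mathrm{Ref}(\text{``the Morning Star''}) = \langle A_{\mathrm{morning}},\ \mathrm{Venus} \rangle,
\qquad
\mathrm{Ref}(\text{``the Evening Star''}) = \langle A_{\mathrm{evening}},\ \mathrm{Venus} \rangle
\]
```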

            The theory of reference therefore divides into two theories: object reference and aspect reference. How do these types of reference work? Should we have a causal theory for both, and how will the theories be related? Or should we have a description theory for both? We can see how aspects might figure in a description theory of object reference, but what about reference to aspects—is this mediated by description too? Can aspects have further aspects that figure in a theory of reference for them? Won’t that lead to an infinite regress? Is it possible that the basic theory will apply to aspect reference, with object reference carried by a description like “the object with aspect A”? Will the two theories be independent in the sense that neither determines the other, because different objects can instantiate the same aspect and different aspects can belong to the same object? Are aspects primarily referred to by means of demonstratives, so that object reference is carried by something of the form “the object with that aspect”? When a baby is baptized is the reference of the name fixed by “the human being with the babyish aspect now before us”? Should we say that we know aspects by acquaintance and objects only by description? What about the radical idea that all real reference is to aspects, with objects not strictly referred to at all? Frege spoke of the sense as “illuminating only a single aspect of the reference”, but should we infer that the sense doesn’t illuminate the reference as such? Is sense a conduit for aspects not objects, with the latter coming along for the ride? Is aspect reference where all the semantic action is? Maybe the object comes into the picture because we need it to account for truth conditions, but the basic semantic work goes into aspect reference. Senses tell us a lot about aspects, but relatively little about the objects that have the aspects. If we knew all about reference to aspects, what further task would the theory of reference have? All we would need to add is that the object of reference is simply the one that has the aspect already referred to. The theory of reference might be mainly a theory of aspect reference—as with the theory of perception. In any case, double denotation calls for a doubling of theory.

 

Colin McGinn 

 

 

 


Philosophy and Form

 

 


Philosophy is versatile as to form. We have the long-form book or monograph, the medium length article, the note or comment, and the epigram. There is also the dialogue form (Plato, Hume, Berkeley). Philosophy can be written as poetry (Lucretius, Eliot, Donne) or performed in a play (Shaw, Beckett, Stoppard). There are philosophical novels (Sartre, Camus, Murdoch). There are films with philosophical themes (The Matrix, Inception, Woody Allen). There is philosophical painting (Magritte, de Chirico, Escher). Music, architecture, and dance can have philosophical themes.  There is even philosophical comedy (Monty Python, Beyond the Fringe). I can’t think of another subject that is represented in so many different forms. Physics isn’t, despite its prestige and importance; nor is history or economics. Philosophical ideas appear in great expressive variety, whereas the ideas of other disciplines are more expressively limited. Why is that?

            It’s not because philosophy is easier or more accessible. It’s not because it has greater practical impact. It’s not because there are more philosophers than other types of savant. I think it’s because everyone is a philosopher and philosophy gets into everything. We just can’t help it: philosophy is in our blood. Not everyone studies it formally, but philosophical thoughts exist in everyone. They keep cropping up, inescapably. So they get expressed in many forms. Why it is that we are so philosophical a species is an interesting question: it is hardly practically pressing or even much fun. Couldn’t there be a species very like us but without a philosophical bone in their body? But with us it is natural and ubiquitous, so it spills out all over the place.

            In addition philosophy lends itself to a multiplicity of forms: it is essentially malleable. It is not that when someone has a philosophical thought the form of its expression is immediately evident—it might lead to a poem or a painting or a story or discursive prose. This is partly because philosophy is an emotional subject. It’s hard to see how a thought in physics could be so open-ended in its form of expression. Conversation is a natural mode of philosophical expression, which is why the dialogue form is appropriate; but the same is not true of other disciplines. Conversations express our concerns and reveal our uncertainties—they are a human activity. Philosophy is a humanistic discipline in that sense, an expression of our human nature. Hence it takes the form of human modes of expression. Perhaps, too, there is a need to express oneself philosophically—in conversation, writing, art, and so on. The variety of forms reflects the desire to find expression for one’s philosophical self. One doesn’t express one’s “physical self” in acts of expressive physics—one simply aims to get the point across in equations and verbal formulations. But philosophical expression is far more a form of self-expression in its deep origins: this is why we speak of “my philosophy” (but not “my physics” or “my economics”). Philosophy is personal. And it seeks expression where it can.

            This suggests that the current style of academic philosophy is distorting and limiting. Academic philosophy today is almost exclusively confined to a small number of forms—chiefly the article and the book. These are written in a certain professional style (which I need not characterize, but not exactly scintillating). But philosophy itself, as a human passion or obsession, is not inherently tied to this type of form. Institutional norms have determined the dominant philosophical forms today, not the living essence of the subject. I am not suggesting that we abandon well-reasoned philosophical prose in favor of poetry and prancing around, but I do think we should recognize the variety of forms that are natural to philosophy. We shouldn’t suppose that the way we write philosophy within the academy today is the only acceptable way to do it. That diminishes our ability to appeal to a wider audience and doesn’t do justice to what philosophy deeply is. In trying to compete with other disciplines within the academy on their terms, we have forgotten that philosophy can be clothed in many forms.

 

Colin McGinn


Knowledge By Necessity

                                               

 

 

 


We can know that a proposition is true and we can know that a proposition is necessary, but can we know that a proposition is true by knowing that it is necessary? Consider a simple tautology like “Hesperus = Hesperus”: don’t you know this is true by seeing that it is necessary? If someone asks you why you think it is true, you will answer, “It couldn’t be otherwise, so it has to be true” or words to that effect. The sentence is clearly necessary, so you can infer that it must be true. You treat the modal proposition as a premise to derive the non-modal proposition. The former proposition acts as the ground of your knowledge of the latter proposition. You can tell just from the form of the proposition that it must be true, and thus it is true. You derive an “is” from a “must”. You really can’t help seeing that the sentence expresses a necessity, given that you grasp its meaning, and truth trivially follows. We can call this “necessity-based” knowledge: knowledge that results from, or is bound up with, modal knowledge. How else could you know the proposition to be true—not by empirical observation, surely? You know it by analysis of meaning: the meaning is such as to make the sentence necessary. The sentence has to be true in all possible worlds, given its meaning, and so it is true in the actual world—truth is a consequence of necessity. It is immediately obvious to you that the sentence is necessary—and so it must also be true. If someone couldn’t see that “Hesperus = Hesperus” is necessary, you would wonder whether he had understood it right. Maybe someone could fail to see that necessity entails truth and hence not draw the inference; but how could he fail to see that “Hesperus = Hesperus” is a trivial tautology, in contrast to “Hesperus = Phosphorus”? The sentence is self-evidently a necessary truth.
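In modal-logic notation the inference is simply an instance of the familiar T axiom, valid in any standard alethic modal logic; schematically:

```latex
% The "is" from the "must": the T axiom of alethic modal logic.
\[
\Box p \rightarrow p
\]
% Applied to the tautology: grasp of meaning alone delivers the modal
% premise, and truth follows.
\[
\Box(\mathrm{Hesperus} = \mathrm{Hesperus})
\ \therefore\ \mathrm{Hesperus} = \mathrm{Hesperus}
\]
```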

            It thus appears that some knowledge of truth is necessity-based: knowledge of the truth involves knowledge of the necessity, with the latter acting as a premise. Sometimes people believe things to be true because they perceive them to be necessary. You know very well that Hesperus is necessarily identical to Hesperus—how could anybody not?—and so you are entitled to believe that “Hesperus is Hesperus” is true. For analytic truth generally the same epistemic situation obtains: you can see the sentence has to be true given what it means, so it follows that it is true. Even if the move from necessity to truth is not valid in every case (e.g. ethical sentences), it is in some cases. We can thus derive non-modal knowledge from modal knowledge. But clearly not all knowledge is like this—mostly you can’t come to know truths by perceiving necessities. You can’t come to know the truth of “Hesperus = Phosphorus” that way: here you have to investigate the empirical world. The sentence is necessary, but you can’t use this necessity to decide that the sentence is true. You may know that if it is true then it is a necessary truth, but you don’t know that it is true just by understanding it, so you can’t use its necessity as a premise in arguing that it is true. You need to appeal to observation to show that the sentence is true—as you do for any other empirical proposition. Here your knowledge is observation-based not necessity-based—observable facts about planetary motions not the analysis of meaning. You won’t cite tautology as a reason for truth in this case, but you will in the other case. You won’t argue that there is no alternative to being true for “Hesperus = Phosphorus”. Clearly you can’t argue that “Hesperus = Phosphorus” follows from “Possibly Hesperus = Phosphorus”, but that is the only modal truth you have at your disposal in your current state of knowledge, unlike the case of “Hesperus is Hesperus”. So you can’t take a short cut to knowledge of truth by relying on an evident necessity—you have to resort to arduous empirical investigation. You may wish you knew that the sentence is necessary, so as to spare yourself the epistemic effort, but that is precisely the knowledge you lack in this case, since the expressed proposition in question refuses to disclose this fact. We resort to observation when our modal sense cannot detect necessity, which is most of the time. Necessity-based knowledge is quick and easy, unlike the other kind.

            I have been leading up to the following thesis: a priori knowledge is knowledge by necessity while a posteriori knowledge is not knowledge by necessity. Here we define the a priori positively and the a posteriori negatively, unlike the traditional definition in terms of knowledge by experience versus knowledge not by experience. This gives us a result for which we have pined: a positive account of the nature of a priori knowledge. The two definitions map onto each other in an obvious way: knowledge by necessity is not knowledge by experience, and knowledge by experience is not knowledge by necessity. That is, we don’t come to know necessities by experiencing them, and necessities are no use to us in the acquisition of empirical knowledge. Necessity plays a role in acquiring a priori knowledge, but it plays no role in acquiring a posteriori knowledge. To have a crisp formulation, I shall say that a priori knowledge is “by necessity” and a posteriori knowledge is “by causality”—assuming a broadly causal account of perception and empirical knowledge. We can also say that a priori knowledge is knowledge grounded in our modal faculty, while a posteriori knowledge is knowledge grounded in perception and inference—thus comparing different epistemic faculties. But I think it is illuminating to keep the simpler formulation in mind, because it directs our attention to the metaphysics of the matter: modality in one case and causality in the other. The world causally impinges on us and we thereby form knowledge of it, and it also presents us with necessities that don’t act as causes—thus we obtain two very different kinds of knowledge. The mechanism is quite different in the two cases—the process, the structure.

            Is the thesis true? This is a big question and I shall have to be brief and dogmatic. There are two sorts of case to consider: a priori necessities and a priori contingencies. I started with an example of a simple tautology because here the necessity is inescapable—you can’t help recognizing it. Hence knowledge of necessity is guaranteed, part of elementary linguistic understanding. But not all a priori knowledge is like that, though tautology has some claim to be a paradigm of the a priori. What about arithmetical knowledge? If it is synthetic a priori, then we can’t say that knowledge of mathematical necessity results from linguistic analysis alone. Nevertheless, it is plausible that we do appreciate that all mathematical truths are necessary; we know that this is how mathematical reality is generally. When we come to know that a mathematical proposition is true we thereby grasp its necessity: a proof demonstrates this necessity. Mathematics is arguably more about necessity than about truth: we can doubt that mathematical sentences express truths (we might be mathematical fictionalists), but we don’t doubt that mathematics cannot be otherwise—it has some sort of inevitability. We might decide that mathematical sentences have only assertion conditions, never truth conditions, but we won’t abandon the idea that some sort of necessity clings to them (though we may be deflationary about that necessity). Modal intuition suffuses our understanding of mathematics, and this can function in the production of mathematical knowledge. I see that 3 plus 5 has to be 8, so I accept that 3 plus 5 is 8. Mathematical facts are inescapable, fixed for all time, so mathematical truths are bound to be true: I appreciate the necessity, so I accept the truth. The epistemology of mathematics is essentially modal and this plays a role in the formation of mathematical beliefs: in knowing necessities we know truths—and that is the mark of the a priori.  [1]

            Much the same can be said of logic, narrowly or widely construed. You cannot fail to register the necessity of a logical law, and you believe the law because you grasp its necessity. Nothing could be both F and not-F, and so nothing is. The necessity stares you in the face, as clear as daylight, and because of this you come to know the law—the knowledge is by necessity. Accordingly, it is a priori. It isn’t that you can believe in the truth of the law and remain agnostic about its modal status (“I believe that nothing is both F and not-F, but I’ve never thought about whether this is necessary or contingent”). Your belief in the law is bound up with your belief in its necessity; thus logical knowledge is a priori according to the proposed definition. The same goes for such propositions as a priori truths about colors: “Red is closer to orange than blue”, “There is no transparent white”, etc. Here again the necessity is what stands out: we know these propositions to be true because we perceive their necessity—not because we have conducted an empirical investigation of colors. Accordingly, they are a priori. In all the cases of the a priori in which the proposition is necessary this necessity plays an epistemic role in accepting the proposition; it is not something that lies outside of the epistemic process. It is not something that is irrelevant to why we accept the proposition. We recognize the necessity and that recognition is what leads us to accept the proposition. If we accepted the proposition for other reasons (testimony, overall fit with empirical science), then our knowledge would be a posteriori; but granted that our acceptance is necessity-based the knowledge is a priori. Being known a priori is being known by necessity: the involvement of modal judgment is what defines the category.  [2] By contrast, a posteriori knowledge does not involve modal judgment—you could achieve it and have no modal faculty at all. The basis of your knowledge is not any kind of modal insight, but observation and inference (induction, hypothesis formation, inference to the best explanation). You don’t have modal reasons for believing that the earth revolves around the sun, but you do have modal reasons for believing that red is closer to orange than blue—viz. that it couldn’t be otherwise. Since things couldn’t be otherwise, they must be as stated, and so what is stated must be true. The modal reasoning is not a mere add-on to knowledge of a priori truths but integral to it.

            It may be thought that the contingent a priori will scuttle the necessity theory, since the proposition known is not even a necessary truth; but actually it is not difficult to accommodate these cases with a little ingenuity. One line we could take is just to deny the contingent a priori, and good grounds can be given for that; but it is more illuminating to see how we could extend the necessity theory to cover such cases. Three examples may be given: reference fixing, the Cogito, and certain indexical statements. What we need to know is whether there are necessities that figure as premises in these cases, even if these necessities are not identical to the conclusion drawn from them. Thus in the case of fixing the reference of a name by means of a description (e.g. the meter rod case) we can say the following: “No matter what the length of this rod may be the name ‘one meter’ will designate it”. If I fix the reference of a name “a” by “the F”, then no matter which object is denoted by that description it will be named “a”. This doesn’t imply that the object named is necessarily F; it says merely that the name I introduce is necessarily tied in its reference to the description I link it to. Because we recognize this necessity we can infer that a is the F (no matter who or what the F is). We don’t need to undertake any empirical investigation to know that a is the F since it follows merely from the act of linguistic stipulation—and that act embodies a necessary truth (“the person designated by ‘a’ is necessarily the person designated by ‘the F’”).

In the case of the Cogito it is true that the conclusion is not a necessary truth (since I don’t necessarily exist), but there is a necessary truth lurking behind this proposition, namely “Necessarily anyone who thinks exists”. It is a necessary truth that thinking implies existence (according to the Cogito), but it is not a necessary truth that the individual thinker exists—he might not have existed. I know that I exist because I know that I think and I know that anything that thinks necessarily exists. Thus I use a modal premise to infer a non-modal conclusion: from “Necessarily anything that thinks exists” to “I exist”. That is my ground for believing in my existence, according to the Cogito, and it is a necessary truth. Thus the knowledge derived is a priori, according to the definition. I don’t make empirical observations of myself to determine whether I exist; I rely on a necessary truth about thought and existence, namely that you can’t think without existing. I know that I exist (contingent truth) based on the premise that anything that thinks exists (necessary truth), so my knowledge essentially involves the recognition of a necessity.
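Schematically, with the modal premise made explicit (a rough formalization: existence is treated as a predicate, waiving the familiar scruples about doing so, with Th for “thinks”, E for “exists”, and i for the thinker):

```latex
% The Cogito as an inference from a necessary premise to a contingent conclusion.
\[
\Box \forall x\, (\mathit{Th}(x) \rightarrow E(x)) \quad \text{(necessary premise)}
\]
\[
\mathit{Th}(i) \quad \text{(I think)}
\]
\[
\therefore\ E(i) \quad \text{(I exist: a contingent conclusion)}
\]
```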

Thirdly, we have “I am here now”: this expresses a contingent truth whenever uttered but is generally held to be a priori. I know a priori that I am here now, but it is contingent that I am here now. But again there is a necessary truth in the offing, namely: “Anyone who utters the words ‘I am here now’ says something true”. By knowing this necessary truth I know that I must be speaking the truth when I utter those words, but my utterance expresses a contingent truth. So I rely on a necessary truth to ground my belief in a contingent truth. Without that necessary truth I would not know what I know, i.e. that my current utterance of “I am here now” is true. Again, the case comes out as a priori according to the definition; we just have to recognize that the modal premise need not coincide with the conclusion. We can have a priori knowledge of a contingent truth by inferring it from a distinct necessary truth. So we have found no counterexamples to the thesis that all a priori knowledge is knowledge by necessity.

            I have assumed so far that the type of necessity at issue is metaphysical necessity, not epistemic necessity. This is the kind of necessity we recognize when we come to know something a priori. But we could formulate the main claims of this essay using the concept of epistemic necessity. For simplicity, just think of this as certainty, construed as a normative not a psychological concept—not what people are actually certain of but what they ought to be certain of. Then we could say that when I am presented with a tautology I recognize that it is certain and infer from this that it must be true, and similarly for other cases of a priori knowledge. This approach converges with the account based on metaphysical necessity, because certainty and necessity correlate (more or less) in cases of a priori truth. But I prefer the metaphysical formulation because it connects an epistemic notion with a metaphysical notion—a priori knowledge with objective necessity. When I know something a priori I know it by recognizing the objective trait of necessity not a psychological trait of certainty (however normatively grounded). Thus the epistemological distinction has a metaphysical correlate or counterpart. To know something a priori is to know it by detecting an objective fact of necessity, though we may also be certain of what we thereby know. In contrast, to know something a posteriori is not to know it by necessity detection but by perception and inference (by causality). This is a deep and sharp distinction, and it at no point relies on a purely negative characterization of what we are trying to define. We really do know things in two radically different ways: by apprehending necessity or by registering causality.            

 

 

  [1] Perhaps part of the attraction of the view that mathematics consists of tautologies is that it comports with the idea that our knowledge of mathematics involves knowledge of necessities. The necessities occupy the epistemic foreground.

  [2] Given this account of a priori knowledge, it is doubtful that animals have it, because they lack modal sensitivity—they don’t perceive that propositions are necessary. If you present an animal with a tautology, it will stare at you blankly. They may have innate knowledge, but they don’t have a priori knowledge. Not even the most intelligent ape has ever thought that water is necessarily H2O or that the origin of an ape is an essential property of it. Animals have no knowledge of metaphysical necessity. This explains their lack of a priori knowledge.


The Ultra-Selfish Gene

                                               

 

 

 


In David Attenborough’s nature documentary Frozen Planet there is some remarkable and rare footage of polar bears mating. The male begins a twenty-mile trek through deep snow lured by the scent of a distant female. He catches up with her and engages in courting behavior, which is not guaranteed to have a positive outcome. He meets with success, however, and there is some rather touching footage of the act of intercourse, which both seem to enjoy. Does the male then peel off and return to his solitary ways, confident that he has done his reproductive job? No, he continues to accompany the female in order to fend off potential rivals intent on impregnating her. Rivals indeed duly appear, determined individuals by the look of them, and there is distressing footage of bloody and prolonged fighting between the males. The original male succeeds in repelling the suitors, but he is wounded and exhausted from the effort. After a few days he deems it proper to leave the female in the belief that his sperm will not be displaced by anyone else’s. The two bears part company in a way that doesn’t seem particularly wistful and we learn from Attenborough that they will not meet again, the cubs to be raised by the mother. He remarks that the male is probably relieved to have the whole thing over with so that he can return to his peaceful solitary life. He ambles off into the sunset, bloody and worn out, but with mission accomplished.

            The question is why the male is prepared to go to so much trouble and take such risks. He could easily have been killed in one of the fights and might yet die from the wounds already inflicted. It can’t be because of the satisfaction he knows he will derive from his offspring or the prospect of future copulations with the female, since none of that will happen. Can you imagine any human male who would behave in such a selfless manner? First you copulate with a female and then you wait around to engage in possibly fatal fights with a series of nasty new suitors? Surely a male human would depart the scene long before having to face such rivals, even if that meant his sperm might be displaced by a fresh batch. It seems remarkably contrary to the bear’s individual self-interest: what does he get out of this? Don’t say he gets offspring—that is not a point about his desires and interests. From his selfish point of view it would be better to cut and run—his life would go better without all the waiting and fighting. So why does he do it? The answer, of course, lies in the genes: the genes program him to act in this way—to ignore his own best interests and engage in acts of self-sacrifice. They program him to act unselfishly, even to the point of potential suicide (presumably many bears do die in such fights).  [1] They do this because their sole concern is to make it into the next generation—their survival is at stake. It doesn’t matter if the animal that carries them dies in the process, so long as they get passed on. They are concerned about their own survival not the survival of their bodily vehicle. They would program an animal to kill itself if that achieved their need for immortality: that is, genes for suicide would survive better than genes for self-preservation if the former method led to more effective gene transmission.  [2] Their interests do not coincide with the interests of the animal that harbors them, though there may be overlap. 

            Thus I wish to say that the genes are ultra-selfish. They never program their host animal in a way that respects the interests of that animal. They don’t have an altruistic bone in their body. Sometimes people run away with the idea that the selfish gene is a gene for selfishness—genes act to make animals selfish. But this is a complete misunderstanding of the theory: it is the genes that are selfish, not the animal that contains them, and they can make the animal act in ways that violate its own self-interest, as with the persistent polar bear. Someone might reply that the genes can’t be completely selfish because they allow for the unselfish behavior of parenting and kin altruism. But the genes are not acting to benefit animals other than the one in which they reside; they are ensuring that their own survival is maximized—since they also reside in the bodies of genetically related animals. They program their carrier to help others for the same reason they program the bear to fight off rivals—to maximize their chances of surviving into the next generation. Whether any individual animal benefits is beside the point, at best a by-product of their selfish action.

            But surely the genes program animals to act in their own best interests most of the time—to be generally selfish. Don’t they implant a selfishness gene in the host animal? The reason for this is that the animal must survive if they are to survive, so the interests of the two coincide. Isn’t an animal a “survival machine” in the sense that its prime directive is individual survival? But if we look more closely even this is a distortion of the underlying truth. The animal isn’t aiming to survive but to reproduce—the former is just a means to the latter. Survival matters to the genes only because reproduction does, since that is what enables their immortality. The animal is less a survival machine than a sex machine (with apologies to James Brown): it is a machine for ensuring that reproduction occurs. If it were logically possible for an animal to reproduce without surviving to that point, that is how things would work (posthumous coition). Once reproductive life ends the animal is no use to the genes (except for extended family duties). From this perspective the selfishness of the genes should be apparent: they build and program an animal that will be an effective reproducer (gene transmitter), not one that will take its own survival and satisfaction as primary. They will make a body and mind geared to reproduction whether that suits the animal or not. This is why there are no contented and long-lived animals around that don’t reproduce—their genes don’t get passed on. Such an animal would be a genuinely selfish individual, caring only for itself and its own interests. But the animals that actually exist are not ideally selfish; in fact, they are slaves to the genes. The genes act always to serve their own interests, never the interests of their host—or any other animal. They are ultra-selfish.

            It might look as if the polar bear has an altruistic concern for the interests of future unborn generations, since he sticks around to make sure that his offspring will come to exist. But of course he has no such thoughts; and anyway they are dubiously coherent, since no such individuals exist at the time of the bear’s protective actions. The genes exist and are passed on (copies of them), but this has nothing to do with concerns about future generations and their happiness. The genes simply program the animal to blindly follow the directive of maximizing their presence in future animals. The animal will act unselfishly in order to obey this directive, even to the point of self-destruction. The genes program the animal to be unselfish because of their ultra-selfishness. So we must rid ourselves of the idea that the basic rule of life, seen from a gene’s point of view, is the production of selfish organisms: selfish genes are not genes for selfishness. Whether an animal is selfish or unselfish is neither here nor there; it all depends on what strategy best enables the genes to survive. Unselfish organisms are a good way in certain circumstances to further the interests of the ultra-selfish genes. If the selfish genes could achieve their desired immortality by building organisms that are entirely unselfish, they would; as it is they make them partially unselfish. Unselfish organisms are certainly what the genes need in certain situations—like the fighting polar bear. And the same is true for kin altruism, as well as for the basic design (physical and mental) of the organism. Reproduction is costly and dangerous in the state of nature; it isn’t what a determined egoist or hedonist would recommend. The genes make reproduction worth our while to some degree, but it isn’t the most prudent and self-serving of possible types of life. Animals are driven by their instincts (genes) in this direction, rather than deciding upon it as the most satisfying way to live (of course, it is possible to detach sex from reproduction). Selfish genes don’t make selfish organisms as a matter of course, and conceptually these are entirely separate matters. To repeat: selfish genes are not genes for selfishness. I would even say that, at a deep level, animals never act selfishly, precisely because they are controlled by ultra-selfish genes. They never put their interests above the interests of others.

  [1] Then there is the question of the motivation of the mother: pregnancy and childrearing are not in her interests either, but they are in the interests of her genes. Motherhood has some claim to be the most diabolical invention of the genes—like carrying around a bomb. Motherhood has killed many a mother.

  [2] It’s hard to imagine how this could be so given the facts of biological life, but the conceptual point still holds: anything that enhances gene transmission will be selected for, no matter how unselfish it may be from the animal’s perspective. The wellbeing and survival of the individual generally lead to gene transmission, but this is not a conceptual necessity, more like a lucky accident.


Philosophical Originality

What produces philosophical originality? One answer is genius: from time to time a genius crops up and from his or her fertile brain originality flows. Then we have a golden age. No doubt the greatest philosophers were geniuses, so it is natural to suppose that this is what brings originality about. The trouble with this answer is that originality is too sporadic for this explanation to be plausible: geniuses will crop up in the population at a constant rate (assuming a genetic basis), but philosophical originality does not historically occur in this way—it comes in waves separated by arbitrary intervals of time. The best way to answer our question is to survey the history of philosophy (Western philosophy) and try to discern patterns and possible causes. Are there historical conditions that conduce to bursts of creativity?

            There are two possible types of explanation: internal to philosophy and external to it. Internal explanations locate the causes within the subject of philosophy itself; external explanations locate them outside it. Thus either something about philosophy itself leads to innovation or something outside it does—or possibly both. I have come to the conclusion that the causes are principally external, and indeed that one type of cause is typical (which is not to say necessary). Obviously these are large historical and psychological questions, inherently difficult to assess, but a broad picture emerges when we examine the history of the subject. Not to keep the reader in suspense: it appears that the prime cause of original thought in philosophy has been advances in mathematics. (I will restrict myself here to philosophy apart from ethics, aesthetics, and political philosophy—that is, to metaphysics, epistemology, logic, and related fields.)

Plato must be counted as a great original, and it is well known that he was much influenced by Pythagoras and his school. Greek geometry, later assembled by Euclid, formed the intellectual environment in which Plato forged his philosophical ideas. Thus we have the idea of a changeless perfect world of forms to be contrasted with what the senses reveal, where truths about this world can be established by rigorous proof. Geometry can be described as the mathematics of space, so it was the mathematical treatment of space that acted as a trigger to Plato’s originality. The objects of geometry supply the ontology and the method of proof supplies the epistemology—this is what a serious subject looks like. Aristotle continues in the same vein (substance and form) but reacts against it to some degree: he is less mesmerized by mathematics than Plato—but it forms the background to his thought. An intellectual stimulus can have either a mimetic or an antipathetic response. One can be creatively against something. Aristotle was against Plato’s excessively mathematical outlook and shaped his philosophy accordingly.

            There then followed a rather unoriginal period—the Middle Ages. During this time nothing comparable to Greek geometry occurred in mathematics and philosophy took no major steps forward (I am speaking broadly). Then we reach the early modern period, in which there was a great flowering: Descartes, Leibniz, Locke, Berkeley, Hume, and others. What happened? Physics is what happened—mathematical physics (Newton’s book is entitled Philosophiæ Naturalis Principia Mathematica). Calculus was invented and the mathematics of motion formulated. The physical world was conceived quantitatively, with mass, force, and motion mathematically measured. This new paradigm of knowledge led to a reinvigoration of philosophy—with adherents and dissenters (notably Berkeley). It provided a framework for metaphysics (matter in motion) and an epistemology (observation and calculation), as well as a model of what a real science should look like. The question of materialism took on new life now that physics was in the ascendant. Thus a good deal of original philosophy was stimulated by the new mathematical physics—not by the insights of philosophers working on their own internal problems (worthy as that may be). The agenda was set, the map laid out—by a development in mathematics. Just as the major influence on Plato was a non-philosopher (Pythagoras), so the major influence on the early modern philosophers was a non-philosopher (Newton—also Descartes in his capacity as physicist and mathematician).

            Again, there followed a relatively static period in philosophy (though stirred somewhat by Darwin  [1]) until the dawn of the twentieth century. Then we have the spectacular rise of mathematical logic—the application of mathematics to logical reasoning. Frege, Russell, and Wittgenstein were philosopher-mathematicians impressed by the power of symbolic logic, with its formulas, proofs, and theorems. Russell and Whitehead’s Principia Mathematica was a mathematical treatise on the subject of valid reasoning (among other things), and it formed the shiny new object onto which philosophers could latch. Some saw it as the bright future of philosophy, others as its death knell. Again there is adherence and reaction: analytical philosophy versus continental philosophy (roughly), or the Tractatus and the Investigations. Mathematical logic played the historical role previously played by geometry and mathematical physics—a model and inspiration, or a threat to all that is holy. It was not the achievement of a professional philosopher qua philosopher that caused this ferment, but the achievement of mathematicians; the trigger was external to philosophy.

This stimulus received a boost later in the century, particularly from Turing, with the idea of formal computation. This idea led not only to the computer but also to developments such as cybernetics, automata theory, and mathematical information theory. A new branch of mathematics supplied new tools with which to think about the mind and knowledge. The doctrine known as “functionalism” arose from these developments—a kind of mathematical theory of the mind (mental processes as functions from inputs to outputs, formally implemented). We are still living with Turing’s contribution in today’s cognitive science (including linguistics). And once again, there are followers and rebels—some who think we now have the key to understanding the mind, others who think the mind is quite other. It is the mathematical conception that sets the agenda and captures the imagination.  [2] Philosophy responded to computation theory as it did to the rise of mathematical logic. Nothing else has had this kind of impact on the field—not chemistry, biology, psychology, history, or whatever. Philosophy seems uniquely susceptible to the charms of mathematics. Not its slave, to be sure, but its keen observer, its ardent pupil—or its stern critic. You either love mathematical philosophy or you hate it.
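The functionalist picture can be made concrete with a toy model. What follows is only an illustrative sketch of my own, not anything from Turing or the functionalist literature: the states and stimuli are invented for the example. On this view a “mind” is exhausted by a transition table mapping current state and input to output and successor state, that is, a finite-state machine.

```python
# A toy finite-state machine illustrating the functionalist idea that a mental
# state is defined by its causal role: what behavior and successor state it
# yields for each input. (Hypothetical states and stimuli, purely illustrative.)

# Each entry: (current_state, stimulus) -> (behavior, next_state)
TRANSITIONS = {
    ("thirsty", "sees water"): ("drinks", "sated"),
    ("thirsty", "sees sand"):  ("keeps searching", "thirsty"),
    ("sated",   "sees water"): ("ignores it", "sated"),
}

def step(state, stimulus):
    """Look up the behavior and successor state for a given stimulus."""
    return TRANSITIONS.get((state, stimulus), ("does nothing", state))

state = "thirsty"
for stimulus in ["sees sand", "sees water", "sees water"]:
    behavior, state = step(state, stimulus)
    print(f"{stimulus}: {behavior} (now {state})")
```

On the functionalist view, nothing more is written into the state “thirsty” than its row in such a table, which is precisely what the rebels deny.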

            So now we have an interesting question: what is the next wave of mathematics that will drive the agenda of philosophy, shaking it up, reshaping the subject? We have had the mathematics of space, of motion, of logical reasoning, and of computation—what will it be next? I don’t think anything that now exists in mathematics can play the role played by these earlier innovations, so we need something new to get the ball rolling (whether we can achieve it or not). I suggest that what we need is a new mathematical theory of mind, especially of consciousness: we need a mathematical theory that does for the conscious mind what earlier mathematical theories did for space, motion, logical reasoning, and computation. I have no idea what such a theory might look like; my point is just that it would be likely to trigger a new wave of philosophical originality—perhaps greater than any seen heretofore. Think about it: a mathematical treatment of what lies at the center of human existence and human knowledge—what connects us to the world and to each other. Surely that would be an impressive body of mathematical thought with enormous implications. How would philosophy respond to it? What would it do to traditional philosophical problems? It would change the contours of the subject. Maybe we will have to wait a long time for a mathematical theory of consciousness to be constructed (look how long it took for the previous developments to come about), in which case we won’t see the degree of originality in philosophy that we saw in the earlier periods any time soon. Of course, I am speculating wildly and claim nothing more—it is an interesting idea to think about. There does seem to be an historical pattern here and a mathematical theory of consciousness would surely set the cat among the pigeons. It would set a standard of intelligibility and precision that isn’t even dreamed of today—a psychological Principia. The properties of consciousness would be as clear and exact as geometrical forms, motion through space, logical reasoning, and formal computation.

Mathematics crystallizes things, converts them into rigorous abstract patterns, and analyzes their structure, thus rendering them transparent to the intellect. This is why mathematical innovation impresses philosophers so much—it represents a distant ideal seldom if ever achieved in philosophy itself. We dearly wish that philosophy could achieve such clarity and precision—or we fear (some of us) that it would remove the charm of philosophical obscurity. Mathematics is like philosophy’s successful elder sibling, an inspiration and a rebuke. The affinity between mathematics and philosophy has often been remarked; it is no surprise, then, if philosophers keep a watchful eye on mathematics. When Spinoza wrote his Ethics in the style of Euclid’s Elements he was acknowledging the force of Euclid’s example. Empirical science can never exercise this kind of hold on the philosophical imagination because it is too caught up in the passing concrete empirical world; mathematics by contrast shares the abstract necessity of philosophy. Mathematics provides the kind of vision of things that philosophers (many of them) resonate to, so they are apt to derive inspiration from it. Philosophers are mathematicians manqués.  [3]

 


  [1] Darwin’s theory has a mathematical aspect: random variation plus differential selection leads organisms or traits to increase in frequency in a population. It is abstract and quantitative, a kind of algorithm; it is also statistical.

  [2] I should mention Gödel’s results here—also mathematics with a large philosophical impact.

  [3] Philosophers who model their subject more on literature or history (such as Collingwood) recoil from mathematical philosophy; they cannot, then, use mathematics as a source of new ideas. Their philosophical tradition will be independent of mathematical innovations. But they are in the minority.


Combining Concepts

We possess concepts and we combine them into thoughts. Those are deceptively easy words to say. What is this “possessing” of concepts? Somehow concepts are stored in the mind, unconsciously, but not in the form of use or mention: we are not using all of our concepts at any given moment, and we don’t store them by mentioning them in mental quotation marks (which itself would involve using a meta-conceptual concept analogous to using a quotation name of a word). Possessing a concept is not like possessing fingers or frontal lobes: concepts are not possessed in the way bodily parts are. They are more like memories (though not exactly so), but the nature of memory is puzzling too: how do memories exist in the mind? But the question I want to probe here concerns not possession but combination: What is involved in combining concepts?

            We don’t even have any bad theories to refute and ridicule. You might point to other uses of “combine” and compare the case of concepts to them. We combine ingredients to make a cake: but that operation is nothing like combining concepts to make a thought—it is not performed with the hands and there is no mixing. What about combining words into a sentence? Here we must tread carefully. If we just mean uttering words in temporal succession, then we know what that is, but it clearly isn’t what happens when we think by combining concepts. There is no uttering and we don’t just string concepts into a temporal sequence—they have to be properly combined to form something meaningful. If we mean combining words in the language of thought, then we have a special case of the problem: what is this combining? The pieces have to fit together, constitute a whole, and produce a proposition: how does the mind achieve this—by what process or mechanism? How, for example, are simple mental acts of predication generated? An individual concept is somehow hoisted into consciousness at the same time as a general concept and the two are somehow brought into juxtaposition. But what is this juxtaposition? It can’t be just that they exist side by side, spatially or temporally; they have to be combined. What is the mental glue? What is the mode of connection? A whole is assembled from parts, but what kind of assembly is it?

            We can imagine dualist or materialist theories of conceptual combination. The dualist theory is apt to be mainly negative: conceptual combination is not any kind of physical combination. It is not the joining together of extended things into a more extended thing, like pieces of Lego. Rather, the immaterial mind enables concepts to link up in a quasi-magical way, as only an immaterial mind can. The trouble with this is that it is not an explanation; and surely we don’t want the puzzle of conceptual combination to require dualism for its solution. The materialist view will maintain that combinations in the brain underlie conceptual combination—as it might be, the co-excitation of distinct neural networks. No doubt there exist underlying physical complexes in the brain, but it is hard to see how they could constitute and explain the combination of concepts. They exist at the wrong level of analysis; we should be able to say something about concepts as such that articulates what is involved in their combination. What is it about a concept that enables it to slide so smoothly into a linkage with another concept? What properties does it have that explain its combinatorial powers? There are theories about the referential powers of concepts (such as causal theories), but what theories are there about the power of concepts to hook up with each other? Concepts can combine with certain other concepts but not with others: what is the difference? You can combine the concept John with the concept house to get the concept John’s house, but you can’t combine John with Mary to get John Mary or house with planet to get house planet. Concepts can accept or reject potential partner concepts depending on their inner nature. They can repel or attract other concepts.

            We might now try to take a leaf out of Frege’s book: he said that some concepts are saturated and some are unsaturated.  [1] An unsaturated concept contains a gap for a saturated concept, which, by filling that gap, saturates it. This is no doubt an obscure doctrine, though not without some intuitive pull, but the question is how to apply it to psychological processes. Concepts (senses), for Frege, are abstract non-psychological entities, so his notion of saturation applies at that level: but how does it apply at the level of concepts in the psychological sense (“ideas” in Frege’s terminology)? In what sense is a psychological entity like my concept house “unsaturated”? This seems like metaphor or mumbo-jumbo (choose your poison).
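The metaphor can at least be given a concrete shape. Here is a toy rendering of my own (purely illustrative; the classes and examples are invented, not Fregean doctrine): treat an unsaturated concept as a function with an empty argument place and a saturated concept as a completed value, so that combination is function application and ill-matched pairings are simply rejected.

```python
# A toy model of saturated vs. unsaturated concepts (my own illustration).
# An unsaturated concept has a gap ("___"); combining is filling the gap.

from dataclasses import dataclass

@dataclass
class Saturated:
    """A complete, object-like concept, e.g. the concept John."""
    content: str

@dataclass
class Unsaturated:
    """A predicate-like concept with one empty argument place."""
    frame: str  # contains "___" marking the gap

    def saturate(self, arg):
        """Fill the gap; only a saturated concept is accepted."""
        if not isinstance(arg, Saturated):
            raise TypeError("the gap accepts only a saturated concept")
        return Saturated(self.frame.replace("___", arg.content))

john = Saturated("John")
mary = Saturated("Mary")
is_wise = Unsaturated("___ is wise")

print(is_wise.saturate(john).content)  # "John is wise": a complete thought
# Note: there is no operation for combining john with mary directly; two
# saturated concepts, having no gaps, cannot join. Hence no "John Mary".
```

The model at least displays the combinatorial asymmetry (gaps seek fillers; fillers cannot join fillers), though it says nothing about what, psychologically, the gap-filling consists in; that is exactly the problem.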

We don’t experience the mode of joining that concepts undergo or engineer, so we can’t observe how the combination works. It is this secret joining that allows for the famed productivity and infinity of possible thoughts (and meaningful sentences), but it is quite opaque to introspection or any other mode of observation. If concepts didn’t combine, thought would be impossible, even quite limited thought. If concepts lost their ability to combine, through some sort of brain ailment, thought would stop dead in its tracks. The glue is at least as important as the items glued. But the glue doesn’t reveal itself—it is hardly as if concepts have sticky ends! Even metaphors are thin on the ground here; no possible theory suggests itself. One’s feeling is that joined concepts are a bit like people holding hands—there is a part that is designed as a gripping or hooking device. But this is absurd fantasy or pointless poetry, not the beginnings of a theory. Alternatively, one speaks of synthesis: in conceptual combination a synthesis of concepts is formed. That sounds right enough, but again it is hardly a theory, more like a reformulation of the problem. For what is it to synthesize concepts? Complex concepts have parts that are brought together, but how are they brought together? What is this “bringing together”?

We know what combining physical objects is—spatial aggregation—but what is combining the units of thought? In Frege’s terms, what is the combination of senses (now construed psychologically)? Senses look outward to references, but they also look sideways to other senses—those that they can join with. It is written into a sense what it refers to, but it must also be written into it what it can combine with—with this but not with that. And it must be possible for senses to lock together into complex senses for the duration of a thought and then dissolve apart when the thought is over. Some operation splices one sense or concept to another, but then separation reasserts itself. There is a concept-combining device that moves concepts from where they are stored in the mind and forms strings of them displaying internal unity, and then disassembles these strings into their dormant isolated constituents. They are not combined in their stored form, being isolated units, though they are quick to enter into combinations; the combinatorial device imposes on them a kind of brief marriage with other concepts, quickly leading to divorce. Concepts thus flow in and out of combinations with other concepts; the puzzle is how they get cemented together for the duration. What is the composition of the conceptual glue? How do concepts find each other?

            Let me try to make the problem vivid by adapting Brentano. He introduced the idea of intentionality as a (non-physical) relation between a mental entity and something that exists outside of it but which is somehow its object: the mental entity is “directed towards” the object, intrinsically connected to it. The relation is somewhat obscure but it seems real enough—thoughts are obviously about things. Let’s introduce the idea of concept-to-concept intentionality, whereby a concept “refers” to any concept with which it can combine. It is written into a concept what kinds of combination it accepts and what it rejects. Furthermore, when a concept is acting as part of a combination it has this kind of horizontal intentionality vis-à-vis the concepts combined with it. There is a relation R such that the concept has R to the concepts combined with it. The concept thus both points outward beyond concepts and also inward to other concepts: it is combinatorial as well as referential. It has a kind of double intentionality. And it needs both aspects if it is to do its job as a constituent of thought: it needs two sorts of relation—inter-conceptual and extra-conceptual. Both are admittedly puzzling and evidently sui generis, but I find the inter-conceptual relation even more elusive and perplexing than the extra-conceptual. In virtue of what do concepts combine? Hume spoke of causation as the “cement of the universe” and found it puzzling; concepts have their “cement of the mind” and it too is puzzling. We don’t even have inadequate theories of it. Indeed, it is far from easy to make the problem visible.

 

  [1] Strictly, for Frege some concepts (“senses”) stand for saturated entities (“objects”) and some stand for unsaturated entities (“concepts” in Frege’s technical sense): but I am not concerned with the details of Fregean exegesis here.
