Philosophy as Biology

In the 1960s linguistics took a biological turn with the work of Lenneberg and Chomsky.  [1] Language was held to be genetically fixed, a species universal, just like the anatomy of the body. It is a biological aspect of human beings, not something cultural or learned, more like digestion than chess. Language evolved, became encoded in the genes, and is present in the brain at birth. Since linguistics is properly viewed as a branch of psychology, according to these theorists, this means that part of psychology is also biological, not something separate from and additional to biology. But then it is reasonable to ask whether more of psychology might fall under biological categories; and succeeding years saw psychology as a whole taking a biological turn. Many of our mental faculties turn out to have biological origins and forms of realization in the organism. Indeed, learning itself must be genetically based and qualifies as a biological phenomenon: what an animal learns is part of its biological nature, not something set apart from biology. True, what is learned is not innate, but many things are not innate that are part of the natural life of the organism (e.g. a bee’s knowledge of the whereabouts of nectar). Dying by predation is not innate but it is certainly biological. Biology is the science of living things, and living things learn as part of their natural way of life. In any case, psychology turned from cultural conditioning to biological naturalism; it became evolutionary. How could it not given that minds evolved along with bodies? The mind of an organism is part of its nature as a living thing; it doesn’t exist outside the sphere of biology (as the soul was supposed to). The organism is a psychophysical package.

            The basic architecture of language is thus a biological architecture. Syntax is an organic structure; the lexicon is a biological system too. When we study these things we are studying the properties of an organism, just like its other biological properties. They had an evolutionary origin in mutation and natural selection, and they have a biological function (probably to enhance thought, as well as serve in communication). One of the organs of the body, the brain, serves as the organic basis for language, as the heart serves as the basis for blood circulation. So linguistics (descriptive grammar) is not discontinuous with biology but part of biology.  [2] It had conceived itself differently, perhaps out of a feeling that language raises us above the level of the beasts, but in these post-Darwinian times it should be relegated to biological science. Freud had made similar moves in affective psychology; the biological school in linguistics was moving in the same general direction. This broke down the old dualism and established the study of language as a department of biology, even when it came to the fine structure of grammar.

            This is an oft-told tale (though still not without its detractors), but it has not yet colonized the entire intellectual landscape. Recently there has been a movement to classify consciousness as a biological phenomenon: it too is innately determined and biologically functional. Organisms have consciousness the way they have blood and bile—as a result of biological evolution and bodily mechanisms. It is not something supernatural, an immaterial infusion. That certainly seems of a piece with the biological naturalism that has dominated psychology in recent decades, but does it go far enough? Can’t we also announce that phenomenology is a branch of biology? That is, the systematic phenomenology of Husserl is really a form of biology: the very structures of consciousness are biological facts. Husserl doesn’t suspend the natural sciences (the epoche); he promotes one of them. Phenomenology is the study of a biological aspect of the human mind (and bats have their phenomenology too), just as linguistics is the study of a biological aspect of the human mind (and bees have their language too). When Sartre characterizes consciousness as nothingness and explores its modalities he is doing biology, because consciousness is a biological phenomenon—evolved, innately programmed, functional, and rooted in tissues of the body. To be sure, it is not reducible to other biological facts (such as brain structures); it is a biological fact in its own right. But it is a biological fact nevertheless—part of the life of a living thing. Its essence is nothingness, as the essence of the heart is pumping and the essence of the kidneys is filtering. It has a certain natural architecture, established by the genes, in both humans and animals. We certainly don’t choose its essence. In so far as consciousness exhibits universals (intentionality, qualia, transparency), those are biological universals, like the universals of human grammar. Phenomenology thus belongs with psychology as a branch of biology. Biology deals with living things–as opposed to physics, which deals with non-living things—and the mind is an aspect of life. Husserl could have cited Darwin (correctly understood): The Origin of Species of Consciousness. This is not biological reductionism, simply the acknowledgment that biology extends beyond the body. It is not that religion takes up where biology leaves off.

            I take it I am not shocking the reader unduly. Isn’t this all part of our current secular scientific worldview? Biology by definition encompasses the life sciences, and linguistics, psychology, and phenomenology are all parts of the life sciences. Speaking, thinking, and experiencing are all modes of living—what living things do (some of them). They are, as Wittgenstein would say, aspects of our “form of life”, part of our “natural history”. Maybe we need to expand our conception of biology beyond the typical curriculum, but it is not difficult to see that these aspects of our nature properly belong to biology, broadly conceived (certainly not to the physical sciences). However, I now wish to assert something that may strike readers as pushing it just a bit too far: philosophy too is a branch of biology. I don’t say this because I think philosophical questions reduce to biological questions; I say it because of the methodology of philosophy. We hear about the “linguistic turn” in philosophy—using the study of language as a means of arriving at philosophical conclusions about ground-floor questions. But given the biological turn in linguistics this implies that philosophy has already turned into a branch of biology. Language is a biological phenomenon and it is held to be the foundation of philosophy, so philosophy is based on a sub-discipline of biology. If the logical form of sentences is deemed central to philosophy, then it is the form of a biological entity that is in question. Logical form, like syntax, is an aspect of an evolved and biologically based entity—the architecture of a biological trait of humans. If speech acts are deemed central, then this aspect of living things will assume methodological importance—as opposed to acts of reproduction or respiration or excretion. The combinatorial power of language has rightly received considerable attention, but this too is an evolved biological trait. The biological turn in linguistics combined with the linguistic turn in philosophy together imply the biological turn in philosophy.

            But what if we reject the linguistic turn? What was it a turn from? Mainly it was a turn from a more direct investigation of concepts. But investigating concepts is also investigating a biological phenomenon. Let me put it bluntly: a concept is a living thing. A concept is like a cell of the mind (and note that biological cells were so called because of their resemblance to the living quarters occupied by monks). Concepts are the units that make up thoughts and other mental states, as words make up sentences. Concepts have functions, they evolved, and they are rooted in organic structures of the brain. So when we study concepts philosophically we are studying entities as biological as blood cells or enzymes. We scrutinize these things for their philosophical yield, not for their contributions to biology as such, but they are still biological entities. To be sure, we are interested in their content, not their physiology, but having content is just another biologically fixed fact about them. Even if you think concepts are acquired by abstraction, they are still entities that exist in the context of a living organism (like big muscles or manicured nails). Conceptual analysis is the dissection of a biological entity; it is not the examination of a disembodied abstract form. There might be such forms, but they must be reflected in the natural traits of organisms at some level. We have no trouble recognizing that an animal’s concepts are biological forms; human concepts are not different in kind. Bee philosophers can reflect on their bee concepts (or turn their attention to bee language), and human philosophers are in the same case—reflecting on their biologically given traits.  [3] How they do that must also be rooted in biology, but the important point is that thinking is a biological fact; and in so far as philosophy concerns itself with “the structure of thought” it is a biological enterprise. The results don’t concern biological matters, as opposed to matters in the world at large, but the method involves surveying a certain class of biological entities. Analyzing a concept is analyzing a living thing—as much a living thing as any organ of the body. Our intellectual faculties are indisputably aspects of our life as organic beings, and concepts are just their basic components—as cells are the basic components of bodily organs. It follows that philosophy is (a branch of) biology. Philosophy could be called conceptual biology.

            I want to emphasize how biological concepts are. First, they arise through the evolutionary process (though we have little understanding of how this happened). Second, they are manufactured during embryonic development as a result of genetic realization (or if you think they are acquired later, it is by biological means, e.g. abstraction). Third, they have a biological function—to enable thought, which enables rational action. Fourth and crucially, they must be realized in some neural mechanism that enables them to have their characteristic features, chief among which are their combinatorial powers. The neurons must be able to hook up with other neurons so as to produce complex thoughts; and this hooking up must respect the logical relations inherent in thought (it’s not just a matter of brute aggregation). There must be a physiology of thinking, and specific to thinking. So concepts cannot somehow float above the biological substructure; they depend upon it. Presumably this implies some sort of hidden structure to concepts analogous to the hidden structure of the cell (nucleus, mitochondria, etc.). Concepts are biological through and through. So if they are what philosophy investigates, philosophy is up to its ears in biology. It would be different if philosophy could pursue its interests without recourse to concepts, say by simply looking at the extra-conceptual world, but that idea is hopelessly wide of the mark. And even if you think that some parts of philosophy require no reference to concepts, much of it clearly does (the parts that expressly analyze concepts, in particular). Philosophy is thus one of the life sciences and should be understood as such. There are the sciences of the inorganic world—physics, chemistry, astronomy, geology—and there are the sciences of the organic world—zoology, biology, genetics, biochemistry: and within this broad grouping linguistics, psychology, phenomenology, and philosophy fall into the latter category. As I say, this is no form of biological reductionism or determinism, simply a taxonomic observation. It is making explicit what has been implicit since the time of Darwin.  [4]

            I want to end with a point about mathematics. The kinship between mathematics and philosophy has long been recognized; in particular, the status of mathematics as a non-empirical conceptual inquiry makes it similar to philosophy. So is mathematics also a department of biology? Well, if we view it as investigating the implications of basic mathematical concepts it presumably is, for the same reasons philosophy is. Mathematical concepts are products of evolution too, and they must have an underlying physiology. They too are living things. To the extent that mathematical concepts are part of the subject matter or method of mathematics, that subject is also fundamentally biological. Suppose mathematical ideas are innate, just as the classical rationalists supposed; then they must have evolved by mutation and natural selection, become genetically encoded, and matured in the individual organism’s brain to become the conscious entities we now know. Investigating these concepts is thus an exercise in biological exploration—discovering what these evolved traits have hidden in them. How they evolved we don’t know, but if they did evolve then mathematics is another kind of life science, mathematics being part of human life. The concept of number, say, is part of our evolved form of life (quite literally). Counting is like speaking—a human universal. Mathematical theory is the spelling out of the mathematical concepts we inherited from our ancestors.

 

  [1] See Eric Lenneberg, Biological Foundations of Language (1967) and Noam Chomsky, Language and Mind (1968), and many other works.

  [2] The work of Ruth Millikan is also an instance of the biological turn in linguistics and psychology, to be set beside Lenneberg and Chomsky. The biological concept she emphasizes is function as distinct from innateness. 

  [3] No one would doubt that the study of bee language belongs to biology (zoology to be precise), but it took some persuasion to get people to accept that human language is part of human biology (zoology). If bees had philosophers it would be clear enough that these philosophers are studying a biological phenomenon—bee language or bee thought. Is it that there is resistance to the very idea of human biology?

  [4] In retrospect we can see the work of Locke and Hume (among others) as a form of human biology: they undertook a naturalistic study of the human mind, turning away from scholastic essences and the like. If they had known about Darwin, they might have welcomed the biological naturalism inherent in his work.

Philosophical Economics

Economics tells us that an economic transaction involves the sale (or exchange) of “goods and services”. This phrase invites conceptual scrutiny. It is notable that an evaluative term is used to describe the commodities sold: goods are good.  [1] Services also are inherently valuable: you don’t do someone the service of executing or robbing him (disservice, yes). What kind of good do goods possess? Not a moral good, evidently, since you can’t sell someone a moral act or benefit—that would nullify the morality of it. There are no shops where you can go and purchase a moral favor or pay for a moral obligation to be met. Moral goods are trans-economic. Altruism is not a commodity to be bought and sold, on pain of not being altruism. No, goods in the economic sense are goods for someone: an economic good is good for its recipient—it does the recipient some good. Thus food, furniture, flowers, and phones: they are purchased because of the personal benefits they afford. But are they really distinct from services? Aren’t all goods imbued with service? Typically, they are manufactured, or at least harvested or mined—they involve skilled human labor. They are not independent of human work, but an expression of it. So they are shaped by “service”, unlike volcanoes or seas or trees (unless cultivated). I would say that all economic goods involve service in this sense; they are not just lying around for anyone to pick up (why pay for them then?). Goods imply services. But what about vice versa? A service is a type of human action with a certain result deemed desirable—say, a massage or a waiter bringing food. But don’t these involve goods? The goods would be muscular therapy and relaxation via pressure, or food being on the table. A teacher supplies the good of information while performing the service of teaching. A lawyer provides the good of contracts. A doctor provides the good of health by prescribing medicine. Something worthwhile results from the service provided: these are the goods we purchase. A service is no use unless it produces a tangible good. So services involve goods. If someone provides you the service of fixing your fence, he has at the same time given you something good—an intact fence. There is no separating goods and services; there is no essential duality here. There are not two types of entity in an economic transaction, but a mixture of the human act and natural raw material. Conceptually, we could unite goods and services under a single heading—say, “products”. People sell products of different types, which might be called “goods and services”. Goods come modified by service, and services are mingled with goods.

            What is it that we are ultimately purchasing when we engage in an economic transaction? What is it that we desire when we hand over money? Is it other people’s actions and the physical things they produce? No, what we are purchasing are states of mind—that is the point of the whole transaction. If actions and things had no impact on our state of mind, we would have no interest in buying them. We buy things for pleasure, security, pride, company, joy, excitement, comfort, satisfaction, etc.  [2] That is what we are ultimately purchasing—goods and services are just the means to achieve these desirable states of mind. Economic value is psychological value. We could summarize this list by saying that we are buying happiness (though this concept is obscure and includes many disparate psychological states). And what do we offer in return? We hand over money obviously, but why does the vendor want our money? To buy happiness, of course: money is what we use to buy goods and services that produce (we hope) happiness. So we buy happiness by offering happiness in return (this is even more obvious when bartering is involved). An economic transaction is thus an exchange of (hoped for) happiness, i.e. psychological states deemed desirable. An economic system, such as capitalism, is a means of generating and exchanging happiness. Goods and services are happiness-vehicles, external means to an internal end. The price of a product is ultimately determined by the happiness it can produce in the purchaser. The entire material substructure is just a means to allocate happiness through economic activity. The basic commodities are states of mind. The speech act that defines economic exchange is: “I will give you this happiness if you will give me that happiness”. The thing about goods that is good is precisely their effect on mental states—they improve psychological wellbeing. For example, if I exchange with you a guitar for a surfboard, I have traded one sort of happiness for another, giving you happiness in return—the happiness associated with a guitar or a surfboard. There would be no point in the exchange otherwise. Thus economics is psychology in a very direct sense—it is trading in mental states.

            Interestingly, no animals operate with an economic system, despite having quite sophisticated mental abilities (such as mind-reading). Animals don’t buy goods and services from each other, though they may exchange goods and expect favors in return. The concept of money is alien to the animal mind. Presumably humans developed economies at some specific period of history, where before there were none. How that happened is shrouded in mystery, but clearly it requires sophisticated social consciousness. When do children begin to understand economic exchange? Economies are biological adaptations with biological payoffs and must have arisen by mutation and natural selection. Perhaps we have an innate economics module in our brain (“the economic gene”). It sits next to our theory of mind module. It requires a tacit understanding of how minds work and how they relate to the material environment, as well as an appreciation of evaluative concepts. Economic transactions now constitute a large part of human interactions, and they shape the way we think of others (perhaps too much). We are always thinking of how to improve our state of mind by entering into economic exchanges with other people, which requires thinking about their state of mind too. We monetize the mind—put a price on it.

            It is useful to keep these points in mind when running a business. We need to remind ourselves of what we are really buying and selling—what the true meaning of the phrase “goods and services” is. It isn’t the object as such that is important but its effect on the consumer—what good it will do for the consumer, psychologically speaking. It is the meaning of the product that matters. And it isn’t just what is really good for the customer that counts but what the customer believes is good: if the customer doesn’t think that something genuinely good is really good, he or she will not buy it. This is why persuasion is always part of a functioning economy. Savvy advertisers know this very well, so they draw attention to the psychological benefits to be derived from a particular product—not its physical characteristics. A successful business must therefore be psychologically astute and psychologically attuned. It should also have a clear philosophical understanding of what it is up to.

 

  [1] We speak of “dry goods” but we don’t speak of “wet goods”. Why?

  [2] We also buy things to protect our lives, but we only value our lives because of the states of mind they make possible.

Phenomenological Ignorance

We can’t know what it’s like to be a bat. This is an instance of a more general truth: no one can grasp the nature of experiences that are radically different from their own. We can grasp the nature of experiences similar to our own, but we can’t grasp experiences that are qualitatively different from ours. We are ignorant of phenomenological facts that diverge from our own. Bats can know what it’s like to be a bat, and so presumably can dolphins, which employ a similar echolocation sense; but beings that have no such sense are in the dark about the experiences involved. It is the same story for the congenitally blind: they can’t know what it’s like to see—as the deaf can’t know what it’s like to hear, or the pain-free to understand what pain is, or the nasally challenged to appreciate smells, or the emotionless to know what anger is. In the realm of the phenomenological there are sharp constraints on what is knowable and by whom. You can’t even know what it’s like to experience red if you have only experienced blue. This is an epistemic limitation—a limitation on what can be known, understood, or grasped. It is not an absolute limitation—a reflection of the intrinsic nature of the fact in question—since it can be overcome by creatures that happen to participate in that fact; it is a relative limitation—X can’t be known by Y (though it can be known by Z). It isn’t universal ignorance but creature-relative ignorance.

            The question I am concerned with is why such ignorance exists: what is its explanation? We have a kind of extrapolation problem: how do I move from knowledge of my own phenomenology to knowledge of the phenomenology of others? It appears that I can do this when there is similarity, but not when there is (radical) difference. My ability to extrapolate is blocked by dissimilarity. The question is why such extrapolation limitations exist. To see the problem let us review some cases in which there are no such extrapolation restrictions. Consider geometry: are we limited only to knowledge of shapes we have encountered? Are alien geometries incomprehensible to us? We have certainly not experienced all possible polygons, so what about those that lie beyond our geometrical experience? Here the answer is obvious: we are not so limited. To simplify, suppose a person never to have experienced rectilinear figures but only curvilinear ones, so that he has never seen a triangle (say). Does that mean he can’t understand what a triangle is? No, it can be explained to him perfectly well and he will thereby understand the word “triangle”. So while we can’t grasp a type of experience we have never encountered in ourselves, we can grasp a type of geometrical figure we have never encountered in the perceptible world. We can extrapolate in the latter case but not in the former. We don’t have acquaintance-restricted knowledge in geometry, but we do in phenomenology. There are gaps in our understanding where experience is concerned, but not where shapes are concerned. It is the same for animal species: you don’t need to have seen an elephant to know what an elephant is (or a bat). Elephants can be described to you, pictured, and imagined; and they don’t need to be similar to animals you have seen with your own eyes. You know what an animal is and you understand what kind of animal an elephant is by description. But you don’t know what kind of experience a bat has even though it has been described to you (based on echoes, having such and such brain correlates, etc.). Also: suppose you had never heard of odd numbers, having been brought up only to deal with even numbers. That would not prevent you grasping the concept of an odd number once someone explained it to you. There are no irremediable gaps in our grasp of numbers analogous to the gaps in our grasp of phenomenology. Likewise, our knowledge of astronomy is not limited by the extent of our acquaintance: we grasp the concept of remote and alien galaxies without ever experiencing them. But our general concept of experience doesn’t enable us to fill in the gaps in our acquaintance with experience: we can’t say, “Oh, bat experience is simply this” and feel that we know what we are talking about. Our knowledge of phenomenology is thus gappy in a way our knowledge of other things is not. We can’t use a form of induction to extrapolate to types of experience that we have not ourselves directly (introspectively) encountered. The question is why. And the question should seem pressing, because the epistemic limitation is so anomalous and local—in general, there are no such limitations on knowledge.  [1] It is surprising that we don’t know what it’s like to be a bat.

            We must canvass some putative explanations. One possible explanation is that experiences concern the mind, while the other cases I mentioned concern the non-mental world. But this explanation is inadequate because (a) some facts about the mind are not so limited and (b) there are facts about physical objects that are subject to the same limitation. I can understand what beliefs you have even though they are quite alien to me—their odd content isn’t an obstacle to my knowledge; and I can’t grasp a color that I have not seen if it is different from any I have seen. Color blindness will result in color ignorance, even though colors are perceptible properties of physical objects; but my unfamiliarity with crazy conspiracy theories isn’t an impediment to my knowing what weird belief is in question. So the epistemic limitation we are interested in isn’t just a reflection of a general truth about knowledge of the mind versus knowledge of non-mental things. But even if it were such an instance, that would not answer our question, because that question would now shift to the more general question: how come we can extrapolate about things outside the mind but not things inside the mind? What is the source of that difference?

            A more promising suggestion is that the realm of experience is simply less homogeneous than the realm of the physical (to speak loosely), so that it would involve greater cognitive leaps to extrapolate across this realm. Geometry is about essentially similar things while phenomenology includes very diverse things. But by what criterion is bat experience so different from (say) visual experience while circles and squares are deemed essentially similar? The concept of similarity will not bear this kind of weight. Some people have urged that bat experience is not really all that different from ours: it is a type of auditory experience for one thing, and for another it has many of the properties of visual experience (a distance sense used to navigate and locate objects in space). These points may be conceded while still insisting on the alien character of such experience: but then how are triangles and circles to be supposed more similar? One loses one’s grip on what notion of similarity is at issue here. There is really no objective basis for distinguishing the cases; the difference arises rather from our mode of knowledge in the two cases. The phenomenological realm is not objectively more diverse than the geometrical realm (or the mathematical realm or the zoological realm); it is rather that our method of knowing somehow differs—we find it easier to extrapolate in the one case than the other. But why is that?

            Along the same lines it might be said that we are actually just as limited in geometry as we are in phenomenology, because geometry also includes extreme knowledge-blocking diversity. Thus non-Euclidean geometry might be said to differ dramatically from Euclidean geometry—as radically as bat experience differs from human experience—so that it is impossible to extrapolate from one to the other. Accordingly, we don’t really grasp non-Euclidean geometry, just as we don’t grasp non-human phenomenology. Since there is then no epistemological asymmetry between the cases, there is nothing to explain—no epistemological anomaly to account for. Alien geometry is as incomprehensible as alien phenomenology (and the same might be said for such things as irrational numbers or alien types of animal). The weakness of this position is that it is by no means clear that there is any epistemic limitation attending the allegedly alien types of fact. We do grasp non-Euclidean geometry (and irrational numbers and the platypus). So the epistemic asymmetry still exists in undiluted form. The puzzle thus persists as to what the basis of the asymmetry might be: why is it harder to know one thing than the other? What makes alien phenomenology peculiarly recalcitrant to understanding?

            Here is a completely different approach: alien phenomenology is like alien language. Humans are born with a specifically structured language capacity that prepares them for the particular languages they will encounter, but it is not suitable for the acquisition of languages with a different kind of structure. The human language faculty will not work to produce knowledge of alien grammars—as it might be, non-discrete elements that combine according to quite different grammatical principles from those of natural human languages (no recursion, for example); or don’t combine at all. It is dedicated and differentially structured, not an all-purpose learning device. If you place a human infant in a linguistic environment that is radically alien, she will not end up with knowledge of the language in question. Suppose bats were to speak such a language: the human child would not come to know its grammar and speak it like a native upon exposure to that language. Linguistic knowledge is thus subject to epistemic limitation as part of its innate character. At best a person might laboriously decipher the grammar of a radically alien language and speak it awkwardly and unnaturally—rather as someone might develop an abstract and unintuitive conception of bat experience. So the suggestion is that the reason we don’t grasp bat phenomenology is that our innate phenomenology module isn’t designed to extend to types of phenomenology that are alien to our own. That is, our innate knowledge of phenomenology is restricted to types of mind whose phenomenological “grammar” matches our own. It is not that alien grammars are objectively more difficult or complex than human grammar; it is just that there is a bias built into the human language module that favors one type of grammar over others. From an evolutionary point of view, it is important for us to have a solid grasp of our own minds (“theory of mind”), so we are genetically equipped with such knowledge; but there is no biological reason to have a solid grasp of bat minds, so lacunae there are acceptable. It is not as if there has been natural selection operating on humans to improve their grasp of bat psychology! Human phenomenological knowledge is domain-specific and geared to our environmental niche, so it is simply not designed to cover bats and their ilk.

            This theory of phenomenological ignorance has the look of what we are seeking, but it might be wondered whether it is strong enough to deliver the epistemic limitation that apparently exists. In the language case, as noted, it is possible in principle for us to overcome our innate bias and acquire knowledge of the grammar of an alien language, albeit laboriously; but is it possible in principle to come to know what it’s like to be a bat? Isn’t that limitation a lot harder to overcome? I don’t know the answer to this: I don’t know whether intensive training, especially during the sensitive periods of child learning, could yield intuitive knowledge of bat phenomenology. Certainly, given that the experiential modality is auditory, the building blocks are there, and maybe training in echo-navigation in the first few years of life could produce a sense of the structure and operations of bat experience (hearing aids would help). So the obstacle may not be insurmountable. Also, we can surely imagine beings that can’t overcome their linguistic bias and so can’t learn an alien language even in principle. So the cases might not be as all-or-nothing as they seem at first sight. The idea of an innate phenomenology module certainly seems intelligible enough, and it delivers an explanation of the puzzling asymmetry I have noted. Just as we have an innate module for our belief-desire theory of mind, so we have an innate module for our phenomenological theory of mind. We could have been born knowing what bat experience is like, as we are born knowing what human experience is like; but actually we aren’t and that produces the epistemic gaps in question. Our knowledge of geometry, arithmetic, and zoology is different, not being based on a selective module like the language faculty (or not as selective); but our knowledge of phenomenology is sharply constrained and not easily overcome (if at all). We are not, as they say, plastic when it comes to phenomenology.

            Notice that according to this theory it is not really correct to suppose that our knowledge of phenomenology is based on our acquaintance with our own experience. That is an empiricist theory of such knowledge analogous to empiricist theories of knowledge of the external world (we possess concepts by abstraction of properties from perceived particulars). But the nativist theory of phenomenological knowledge holds instead that we have such knowledge independently of acquaintance with our experiences. We know what experiential types are without any such operation of acquaintance and associated abstraction, but innately. So the problem isn’t that what we abstract from our own experience doesn’t fit the experience of bats, but rather that we are not innately equipped with a faculty of knowledge that includes the knowledge in question. In short, we don’t know bat psychology innately. No doubt acquaintance plays a triggering role in the production of phenomenological knowledge, but it doesn’t play an originative role (as with other innate knowledge systems). Maybe even producing actual bat experience in us wouldn’t itself be sufficient to acquire knowledge of the nature of bat experience, because that requires the cooperation of an innate faculty of phenomenological knowledge geared to bat experiences, which is absent in humans. In any case, an empiricist theory of phenomenological knowledge is by no means to be assumed (if it’s false generally, why should it be true here?).  [2]

            I want to emphasize not so much the proposed solution as the problem it is designed to solve. We know there are phenomenological facts in the world with a nature we can’t grasp, though other beings can; this poses a problem of explanation. The case looks very different from other types of knowledge. It is a non-trivial question why this is so, raising some deep issues. It challenges conceptions of knowledge that have been long entrenched, notably how we know the nature of our own experiences. How exactly do I know what pain is, or experiences of red, or anger?

 

  [1] I don’t mean there are no other limits to knowledge, just that there are no limits of the kind we find with respect to phenomenology, i.e. extrapolation problems across a single domain.

  [2] It is an interesting fact that an empiricist theory of phenomenological knowledge is attractive even to people who are skeptical of an empiricist theory of knowledge of physical objects. I’m not sure why this is—is it because it is easy to conflate experience with knowledge of experience? In fact, actual bats may not know what it’s like to be a bat simply because they lack this kind of introspective knowledge, despite possessing the corresponding experiences.

Perceptual Duality

The traditional distinction between primary and secondary qualities has clear implications for the nature of perception. Primary qualities are possessed by objects independently of perceivers and do not owe their existence to perceivers: they are objective. Secondary qualities are dependent on perceivers, being projected by the mind onto objects that otherwise would lack them: they are subjective. Thus percepts have a subjective-objective duality: they are part subjective and part objective. For example, we see colors and shapes together, the former being contributed by the mind, the latter by the world outside the mind. These qualities become objects of perception by two different mechanisms: color qualities are manufactured by the mind and an operation of projection “spreads” them onto the physical world, while shape qualities impinge on the mind from outside and become perceptual contents by an operation of “imprinting” or “abstraction”. Colors we impose on the world, while shapes impose themselves on us: from inside out and from outside in. Nevertheless the qualities interlock in a visual percept: we see color and shape simultaneously and side-by-side—we see colored shapes. Thus perceptual content is hybrid, double aspect, a mixture of opposites. We see both what comes from within and what comes from without. No doubt much of this is puzzling and problematic, even mysterious: how exactly does the projection operation work, and what is the process of abstraction? But it is commonly assumed that something like this dual picture must be true, given the prior distinction between primary and secondary qualities. Part of perception derives from the mind and part derives from the world, yet the two sources are miraculously combined into a unitary percept.

            Not everyone agrees with this bipartite picture. The unreconstructed naïve realist will insist that all the qualities manifest in perception reflect external conditions of objects—including colors, sounds, tastes, smells. Nothing comes from us; everything enters the mind from outside. This theorist will resist the idea that perception involves any sort of error, such as objects seeming to have colors they don’t really have. Perception is simply openness to a world that exists whether we exist or not. Color is as objective as shape. On the other hand, a Kantian will maintain that the content of perception is wholly subjective—it’s all like color considered as a secondary quality. Shape qualities derive as much from the mind as color qualities. Thus there is no perceptual duality: subjectivity applies across the board. Both positions cleave to homogeneity of perceptual content: all objective or all subjective. But the position sketched above rejects this homogeneity, holding that perception is essentially a joining of disparate qualities. Indeed, it is typically assumed that the mind cannot of its own devices produce shape content, and also that the world cannot produce color content. We have to wait for the world to create shape percepts in us, but we can (and must) proceed on our own when it comes to color, the world being impotent to produce color percepts.

            The dual aspect picture suggests that our perceptual faculties first present a kind of bloodless sketch to us, which we then fill in by supplying qualities not present in it—like coloring in an outline. The primary qualities impinge on the mind and offer themselves for representation, but we have to complete the picture with appropriate secondary qualities. Of course, this process doesn’t involve any temporal separation between the initial sketch and the final coloring in, as if we had to wait for the colors to arrive, for we can make no sense of the idea of seeing shapes without accompanying colors (nor vice versa). But, logically speaking, there are two separate mental operations, of introjection and projection, respectively (maybe God can see objects without colors and can apply colors to objects at will). This is doubtless all very curious, but seemingly unavoidable, once we accept the picture of perception suggested by the traditional distinction. There must be some way in which subjective and objective qualities magically come together—coalesce, fuse—because we see both colors and shapes together, despite their being ontologically quite different. The inner and outer must be made to interlock in such a way that the unity of the percept is preserved.

            It seems to me that the situation is sufficiently problematic that we should seek an alternative. I take it that the subjectivity of color is undeniable, so I reject the type of naïve realism mentioned above; but the Kantian line is worth pursuing. So let’s try out the following idea: the shape qualities we perceive are also products of the mind, derived from its own inner resources, just like color qualities. There are serious problems with the notion that we obtain ideas of shape by abstraction from experience; and there are reasons to suppose that the shapes we see are not the shapes exemplified in the objective physical world. I won’t say much about these points because they are well known; my question is whether adopting the Kantian line eases the problem of perceptual content—what we might call “the combination problem”. So, by way of reminder: it has been widely contended that the geometry of the physical world does not fit the perceptual geometry we naturally bring to it (non-Euclidean versus Euclidean, roughly), so the geometrical figures we perceive are not identical to those existing in objective reality. This is easy to accept for other species: we don’t suppose that the visual experience of a mouse or a snake correctly captures the true geometry of the material world. The mind (brain) invents its own geometry, geared to its biological requirements, and the corresponding qualities may be as subjectively constituted as color qualities. Secondly, some of the primary qualities we perceive simply don’t belong to the objective material world, so they must be contributed by the mind—for example, solidity and smoothness. Objects don’t have the kind of continuous structure they appear to, and they don’t have smooth edges. The world is more granular than it appears. We attribute these qualities to things in virtue of the perceptual apparatus we possess, so the mind (brain) must be generating the corresponding qualities: we see things as solid and smooth, and hence ascribe those qualities to things. We can’t extract these qualities from objects, since they don’t have them, so we must get them from somewhere else, namely from our own resources. There are also the well-known arguments from illusion: when you see a penny as elliptical the property you see is not present in the object, so it can’t derive from that object; it must come from within. The shape you see is not the shape the penny objectively has. In addition, perceptions of color must of necessity also be perceptions of shape, and colors are endogenously derived, so it makes sense that both should have the same source (what about hallucinations of red cubes before any cubes have been seen?). There seems every reason to believe that our perception of shape issues from an internal system that contains shape representations without reliance on an outside stimulus; and every reason to suppose that the qualities involved do not coincide with (are not identical with) the actual geometry of the physical world. The color system and the shape system are essentially connected, with both projected onto the world in a single package.

            There are two things I am not saying here. One thing I am not saying is that the shape qualities we perceive are secondary qualities in the manner of color qualities, i.e. dispositions to cause experiences in perceivers. That looks like a false analysis in view of the non-relativity of attributions of shape. The qualities may be projected but they are not dispositional in this way (though they may be dispositional in other ways). The question of analysis is not the same as the question of origin or coincidence with objective traits of reality. Second, I am not saying that there is no systematic correspondence between perceived shape and objective shape. No doubt there is a close correspondence between them, given that perceivers have to live in a world of shaped objects and can’t afford to be completely wrong about their shapes. It is not that Euclidean and non-Euclidean geometry have nothing in common; in fact, they converge in many ways. The qualities we perceive act as surrogates for the objective traits of things, though the two are not identical. Just as colors correspond to wavelengths, so shapes (as perceived) correspond to the actual geometrical properties of things (which may come down to potentials in a force field). Objects certainly have edges, but the physical reality of edges need not coincide with edges as they are perceived.

            Does this position deserve to be described as idealist? Yes and no. No, in that it doesn’t reduce reality to purely mental reality (there is a non-mental external world out there); and no, in that the shape qualities are not to be conceived as themselves mental (unlike experiences of them). But yes, in that the world we immediately perceive is a world of our own devising: its constituents (sensible qualities) derive from within the mind and are not found in objective reality. The world we immediately perceive is a projected world, much as Kant supposed. There are in fact two worlds: the phenomenal world of perception and the objective world that lies on the other side of perception. We can say that objects have the qualities we perceive them to have, but this is so only by dint of the (mysterious) mental act of projection—just as they have colors. It is just that objects don’t have these qualities independently of the minds that project them (unlike the qualities attributed by physics). We can even stick to a version of naïve realism, since we see non-mental qualities that objects have (albeit derivatively), not mental sense-data that intervene between the mind and non-mental reality. The mind has access to qualities (a type of universal) that it projects onto the world, but these qualities are not aspects of the mind—they are not sensations.

            We should accordingly reject the old dichotomy of primary and secondary qualities in favor of a threefold distinction. There are the intrinsic properties of objective reality, the kind of thing talked about in physics (hopefully)—mass, charge, fields, etc. Then there are the properties that consist in dispositions to appear in certain ways—secondary qualities in the traditional sense. But third, there are properties like perceived shape that belong neither to objective reality nor to the class of dispositions to appear; these properties fit neither of the traditional categories. This third class is more closely interwoven with objective reality than the usual secondary qualities, which track nothing very significant in the external world (hence the non-relativity I mentioned earlier). The essential point is that perceived shape does not coincide with objective shape (and similarly for other so-called primary qualities like solidity, length, volume, etc.). The shapes (etc.) that we see are not the shapes that populate the objective universe. The class of properties traditionally counted as primary is thus more heterogeneous than was thought in the seventeenth century, when mechanism was still dominant and modern physics hadn’t questioned so much of common sense. It tended then to mean “any property of objects that is not secondary”. The possibility that the ontology of physics might be far removed from common perception was not really contemplated, so the assumption was that the way we see objects would figure in the correct science of those objects. It was Kant who began the movement to separate the phenomenal world from the objective world described by physics. The unexpected upshot is that we are now free to reject the double aspect theory of perceptual content, by regarding all such content as the product of the mind. We needn’t entertain the problematic idea of the subjective-objective amalgam.

            Let me try to make the position vivid by imagining a toy world. In this world the geometry of reality is some radically non-Euclidean geometry that is not even capable of being perceptually represented by the creatures living there—perhaps it has 10 dimensions. The creatures nevertheless need a way to perceive their environment, so they invent (or their genes invent) a perceptual geometry that is psychologically manageable and succeeds in tracking external reality well enough to survive. The qualities denoted in this system are not found in objective reality, though they characterize how that reality is perceived. In this world (perceived) shape and color both originate in the mind, so there is no combining of the subjective and the objective, the endogenous and the exogenous. Then that is how it is in our world: we have constructed perceptual representations that serve our purposes but which don’t coincide with, or derive from, the objective features of things.

 

Pain and Unintelligent Design

Pain is a very widespread biological adaptation. Pain receptors are everywhere in the animal world. Evidently pain serves the purposes of the genes—it enables survival. It is not just a by-product or holdover; it is specifically functional. To a first approximation we can say that pain serves the purpose of avoiding danger: it signals danger and it shapes behavior so as to avoid it. It hurts of course, and hurting is not good for the organism’s feeling of wellbeing: but that hurt is beneficial to the organism because it serves to keep it from injury and death. So the story goes: evolution equips us with the necessary evil of pain the better to enable our survival. We hurt in order to live.  If we didn’t hurt, we would die. People born without pain receptors are exceptionally prone to injury. So nature is not so cruel after all. Animals feel pain for their own good.

            But why is pain quite so bad? Why does it hurt so much? Is the degree of pain we observe really necessary for pain to perform its function? Suppose we encountered alien creatures much like ourselves except that their pain threshold is much lower and their degree of pain much higher. If they stub their toe even slightly the pain is excruciating (equivalent to us having our toe hit hard with a hammer); their headaches are epic bouts of suffering; a mere graze has them screaming in agony. True, all this pain encourages them to be especially careful not to be injured, and it certainly aids their survival, but it all seems a bit excessive. Wouldn’t a lesser amount of pain serve the purpose just as well? And note that their extremes of pain are quite debilitating: they can’t go about their daily business with so much pain all the time. If one of them stubs her toe she is off work for a week and confined to bed. Moreover, the pain tends to persist when the painful stimulus is removed: it hurts just as much after the graze has occurred. If these creatures were designed by some conscious being, we would say that the designer was an unintelligent designer. If the genes are the ones responsible, we would wonder what selective pressure could have allowed such extremes of pain. Their pain level is clearly surplus to requirements. But isn’t it much the same with us? I would be careful not to stub my toe even if I felt half the pain I feel now. The pain of a burn would make me avoid the flame even if it was much less fierce than it is now. And what precisely is the point of digestive pain or muscle pain? What do these things enable me to avoid? We get along quite well without pain receptors in the brain (or the hair, nails, and tooth enamel), so why not dispense with them for other organs too? Why does cancer cause so much pain? What good does that do? Why are we built to be susceptible to torture? Torture makes us do things against our wishes—it can be used coercively—so why build us to be susceptible to it? A warrior who can’t be tortured is a better warrior, surely. Why allow chronic pain that serves no discernible biological function? A more rational pain perception system would limit pain to those occasions on which it can serve its purpose of informing and avoiding, without overdoing it in the way it seems to. In a perfect world there would be no pain at all, just a perceptual system that alerts us non-painfully to danger; but granted that pain is a more effective deterrent, why not limit it to the real necessities? The negative side effects of severe pain surely outweigh its benefits. It seems like a case of unintelligent design.

            Yet pain evidently has a long and distinguished evolutionary history. It has been tried and tested over countless generations in millions of species. There is every reason to believe that pain receptors are as precisely calibrated as visual receptors. Just as the eye independently evolved in several lineages, so we can suppose that pain did (“convergent evolution”). It isn’t that pain only recently evolved in a single species and hasn’t yet worked out the kinks in its design (cf. bipedalism); pain is as old as flesh and bone. Plants don’t feel pain, but almost everything else does, above a certain level of biological complexity. There are no pain-free mammals. Can it be that mammalian pain is a kind of colossal biological blunder entailing much more suffering than is necessary for it to perform its function? So we have a puzzle—the puzzle of pain. On the one hand, the general level of pain seems excessive, with non-functional side effects; on the other hand, it is hard to believe that evolution would tolerate something so pointless. After all, pain uses energy, and evolution is miserly about energy. We can suppose that some organisms experience less pain than others (humans seem especially prone to it)—invertebrates less than vertebrates, say—so why not make all organisms function with a lower propensity for pain? Obviously, organisms can survive quite well without being quite so exquisitely sensitive to pain, so why not raise the threshold and reduce the intensity?

            Compare pleasure. Pleasure, like pain, is motivational, prompting organisms to engage, not avoid. Food and sex are the obvious examples (defecation too, according to Freud). But the extremes of pleasure are never so intense as the extremes of pain: pain is really motivational, while pleasure can be taken or left. No one would rather die than forfeit an orgasm, but pain can make you want to die. Why the asymmetry? Pleasure motivates effectively enough without going sky-high, while excruciating pain is always moments away. Why not regulate pain to match pleasure? There is no need to make eating berries sheer ecstasy in order to get animals to eat berries, so why make being burnt sheer agony in order to get animals to avoid being burnt? Our pleasure system seems designed sensibly, moderately, non-hyperbolically, while our pain system goes way over the top. And yet that would make it biologically anomalous, a kind of freak accident. It’s like having grotesquely enlarged eyes when smaller eyes will do. Pleasure is a good thing biologically, but there is no need to overdo it; pain is also a good thing biologically (not otherwise), but there is no need to overdo it.

            I think this is a genuine puzzle with no obvious solution. How do we reconcile the efficiency and parsimony of evolution with the apparent extravagance of pain, as it currently exists? However, I can think of a possible resolution of the puzzle, which finds in pain a unique biological function, or one that is uniquely imperative. By way of analogy consider the following imaginary scenario. The local children have a predilection for playing over by the railway tracks, which feature a live electrical line guaranteed to cause death in anyone who touches it. There have been a number of fatalities recently and the parents are up in arms. There seems no way to prevent the children from straying over there—being grounded or conventionally punished is not enough of a deterrent. The no-nonsense headmaster of the local school comes up with an extreme idea: any child caught in the vicinity of the railway tracks will be given twenty lashes! This is certainly cruel and unusual punishment, but the dangers it is meant to deter are so extreme that the community decides it is the only way to save the children’s lives. In fact, several children, perhaps skeptical of the headmaster’s threats, have already received this extreme punishment, and as a result they sure as hell aren’t going over to the railway tracks any time soon. An outsider unfamiliar with the situation might suspect a sadistic headmaster and hysterical parents, but in fact this is the only way to prevent fatalities, as experience has shown. Someone might object: “Surely twenty lashes is too much! What about reducing it to ten or even five?” The answer given is that this is just too risky, given the very real dangers faced by the children; in fact, twenty lashes is the minimum that will ensure the desired result (child psychologists have studied it, etc.). Here we might reasonably conclude that the apparently excessive punishment is justified given the facts of the case—death by electrocution versus twenty lashes. The attractions of the railway tracks are simply that strong! We might compare it to taking out an insurance policy: if the results of a catastrophic storm are severe enough we may be willing to part with a lot of money to purchase an insurance policy. It may seem irrational to purchase the policy given its steep price and the improbability of a severe storm, but actually it makes sense because of the seriousness of the storm if it happens. Now suppose that the consequences of injury for an organism are severe indeed—maiming followed by certain death. There are no doctors to patch you up, just brutal nature to bring you down. A broken forelimb can and will result in certain death. It is then imperative to avoid breaking that forelimb, so if you feel it under dangerous stress you had better relieve that stress immediately. Just in case the animal doesn’t get the message, the genes have taken out an insurance policy: make the pain so severe that the animal will always avoid the threatening stimulus. Strictly speaking, the severe pain is unnecessary to ensure the desired outcome, but just in case, the genes ramp it up to excruciating levels. This is like the homeowner who thinks he should buy the policy just in case there is a storm; otherwise he might be ruined. Similarly, the genes take no chances and deliver a jolt of pain guaranteed to get the animal’s attention. It isn’t like the case of pleasure because not getting some particular pleasure will not automatically result in death, but being wounded generally will. 
That is, if injury and death are tightly correlated it makes sense to install pain receptors that operate to the max. No lazily leaving your hand in the flame as you snooze and suffering only mild discomfort: rather, deliver a jolt of pain guaranteed to make you withdraw your hand ASAP. Call this the insurance policy theory of pain: don’t take any chances where bodily injury is concerned–ensure you are covered in case of catastrophe.  [1] If it hurts like hell, so be it—better to groan than to die. So the underlying reason for the excessiveness of pain is that biological entities are very prone to death from injury, even slight injury. If you could die from a mere graze, your genes would see to it that a graze really stings, so that you avoid grazes at all costs. Death spells non-survival for the genes, so they had better do everything in their power to keep their host organism from dying on them. The result is organisms that feel pain easily and intensely. If it turned out that those alien organisms I mentioned that suffer extreme levels of pain were also very prone to death from minor injury, we would begin to understand why things hurt so bad for them. In our own case, according to the insurance policy theory, evolution has designed our pain perception system to carefully track our risks in a perilous world. It isn’t just poor design and mindless stupidity that have made us so susceptible to pain in extreme forms; this is just the optimum way to keep us alive as bearers of those precious genes (in their eyes anyway). We inherit our pain receptors from our ancestors, and they lived in a far more dangerous world, in which even minor injuries could have fatal consequences. Those catastrophic storms came more often then.
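
            The logic of the insurance analogy can be put in rough decision-theoretic terms (a schematic illustration of my own, not part of the original argument; the symbols are not the author’s). Let c be the cost of the precaution (the premium, or the felt intensity of the pain), q the probability of catastrophe if the precaution is not taken, and L the size of the loss (financial ruin, or death and the end of the genetic line). Taking out the policy (or wiring in the pain) is worth it whenever

\[ c < q \cdot L. \]

When L is effectively unbounded, as it is when even a minor injury can prove fatal, the inequality is satisfied by very large values of c; so pain that looks wildly excessive to the sufferer can still be the cheaper option from the genes’ point of view.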

            This puts the extremes of romantic suffering in a new light. It is understandable from a biological point of view why romantic rejection would feel bad, but why so bad? Why, in some cases, does it lead to suicide? Why is romantic suffering so uniquely awful?  [2] After all, there are other people out there who could serve as the vehicle of your genes—plenty of fish in the sea, etc. The reason is that we must be hyper-motivated in the case of romantic love because that’s the only way the genes can perpetuate themselves. Sexual attraction must be extreme, and that means that the pain of sexual rejection must be extreme too. Persistence is of the essence. If people felt pretty indifferent about it, it wouldn’t get done; and where would the genes be then? They would be stuck in a body without any means of escape into future generations. Therefore they ensure that the penalty for sexual and romantic rejection is lots of emotional pain; that way people will try to avoid it. It is the same with separation: the reason lovers find separation so painful is that the genes have built them to stay together during the time of maximum reproductive potential. It may seem excessive—it is excessive—but it works as an insurance policy against reproductive failure. People don’t need to suffer that much from romantic rejection and separation, but making them suffer as they do is insurance against the catastrophe of non-reproduction. It is crucial biologically for reproduction to occur, so the genes make sure that whatever interferes with that causes a lot of suffering. This is why there is a great deal of pleasure in love, but also a great deal of pain–more than seems strictly necessary to get the job done. The pain involved in the loss of children is similar: it acts as a deterrent to neglecting one’s children and thus terminating the genetic line. Emotional excess functions as an insurance policy taken out on a biologically crucial event. Extreme pain is thus not so much maladaptive as hyper-adaptive: it works to ensure that appropriate steps are taken when the going gets tough, no matter how awful for the sufferer. It may be, then, that the amount of pain an animal suffers is precisely the right amount all things considered, even though it seems surplus to requirements (and nasty in itself). So at least the insurance policy theory maintains, and it must be admitted that accusing evolution of gratuitous pain production would be uncharitable to it.

            To the sufferer pain seems excessive, a gratuitous infliction, far beyond what is necessary to promote survival; but from the point of view of the genes it is simply an effective way to optimize performance in the game of survival. It may hurt us a lot, but it does them a favor. It keeps us on our toes. Still, it is puzzling that it hurts quite as much as it does.  [3]

 

  [1] We can compare the insurance policy theory of excessive pain to the arms race theory of excessive biological weaponry: they may seem pointless and counterproductive but they result from the inner logic of evolution as a mindless process driven by gene wars. Biological exaggeration can occur when the genes are fighting for survival and are not too concerned about the welfare of their hosts.

  [2] Romeo and Juliet are the obvious example, but the case of Marianne Dashwood in Jane Austen’s Sense and Sensibility is a study in romantic suffering—so extreme, so pointless.

  [3] In this paper I simply assume the gene-centered view of evolution and biology, with ample use of associated metaphor. I intend no biological reductionism, just biological realism.

Our Concept of Mind

How good is our concept of mind—how extensive, how accurate, how penetrating? I shall suggest that it is not very good—limited, misleading, shallow. It is much less good than our concept of body. It covers mental reality only ineptly, incompetently. There are three areas to consider: alien minds, other minds, and our own mind. In each of these areas our concept of mind runs into trouble.

            Consider minds very different from our own: not just bats and dolphins that have different senses from our own but animals in general. I don’t know what it is like to be a bat but I am also pretty clueless about what it is like to be a cat or a dog or a mouse. It isn’t that they have phenomenological types of experience I don’t have; rather, the way the different elements of their mind come together baffles me—their desires, thoughts, and emotions (their “form of life”). To be sure, we have some understanding, but we find their inner life enigmatic, just not close enough to our own for full empathy. This is why we think it would be extremely interesting to become a cat for a while and see the world through a cat’s eyes (and ears and nose). Similarly for bees and sharks, eagles and elephants. Our concept of mind fails to reveal the inner lives of other animals, even our closest relatives like apes (though we surely have more insight into their minds than we do into the minds of reptiles). And we don’t believe that further diligent inquiry will resolve the enigma. By contrast, there is no such limitation in our concept of body: the bodies of other animals are not enigmatic to us (I don’t mean their bodies as lived by the animal in question, which is an aspect of their mind). We have a perfectly clear grasp of the anatomy and physiology of the bat or cat or elephant—as clear as our grasp of the human body. Our concept of body extends smoothly to alien bodies, while our concept of mind falters when it comes to alien minds—we can’t get our minds around theirs. They present themselves to us as areas of ignorance, impenetrability. That is, our cognitive resources in conceiving of mental reality are inadequate to capture the (full) nature of alien minds—which is why we call them alien. It is not as if when confronted by other species we just cheerfully assume that we know exactly what is going on internally, as someone with more capacious conceptual resources might. We don’t look into the eyes of a cat and feel we know just what she is thinking and feeling—what her feline point of view is. Thus our concept of mind, unlike our concept of body, is anthropocentric, geared to the human, incapable of affording (full) access to the inner lives of other species. It remains partial and glancing, skewed to our specific mode of sensibility. It would not be surprising to discover that we really have no idea what is on the minds of other species of animal; we might be amazed by what we would experience if we suddenly became a mouse for a day. With other humans we think we know where we are—where they are—and so we don’t wonder what it’s like to be another human. But as soon as a mind begins to be unlike our own mind we start to lose our grip on it, as we don’t for bodies unlike our own body. Our concept of mind is thus confined and parochial, failing to capture the full extent of mental reality. There is a lot it doesn’t encompass. To put it differently, our concept of mind exhibits cognitive bias, even to the point of cognitive closure in some instances.  

            But even in the case of our fellow humans our concept of mind betrays its fragility. For we have difficulty understanding what it is for other people to have a mind, even one just like ours. What is it that I think when I think that someone not myself is in pain? Wittgenstein has a famous passage about this: “If one has to imagine someone else’s pain on the model of one’s own, this is none too easy a thing to do: for I have to imagine pain which I do not feel on the model of the pain which I do feel. That is, what I have to do is not simply to make a transition in imagination from one place of pain to another. As, from pain in the hand to pain in the arm. For I am not to imagine that I feel pain in some region of his body. (Which would also be possible.)” (Philosophical Investigations, 302) But don’t we conceive of another’s pain precisely by reference to our own? He has what I have when I am in pain except that he isn’t me. Compare: for him to have a self is for him to have what I have when I have a self except that it isn’t my self. I don’t think this way about his body—I don’t think that he has what I have when I have a body except that it isn’t my body. Instead, we have a general notion of body that we apply both to ourselves and to others, without privileging our own body. In the case of mind, however, we start from ourselves and project outward: but, as Wittgenstein observes, there is a question about why this isn’t just conceiving my mind in his body, which is not at all the same thing as his having a mind of his own. Do I really grasp what it is for him to be in pain, as opposed to myself feeling pain in another body? Do I just rely on misplaced projection, failing to grasp the full reality of another mind? Isn’t our concept of other minds irredeemably egocentric (solipsistic)? Think about it: do you really understand what it is for another person to be in pain in just the way you are in pain (but without his being you)? Aren’t you always putting yourself in the other’s place? Again, it is not like the body: here we really do understand the idea of another body, not merely a projection of one’s own body elsewhere, as if the other’s body is somehow an extension of mine. Our concept of other minds seems hazy, confused, not fully up to the job assigned to it—representing the mental reality of others. Our own mind exercises too powerful a hold over our psychological thinking—as it does for alien minds. I can’t abstract my concept of mind away from my own species, and I can’t abstract it away from my own self either—my concept keeps pulling me back to my own mind, refusing to extend to minds beyond my own. To be sure, I have a rough and ready concept of other minds, as I do alien minds, but the concept is inept, incomplete, sketchy, jejune. One might even say that it is childish (and of course young children have a notoriously impoverished understanding of the minds of others). It seems not to have escaped its roots in our own self-representation. To put it simply, we don’t really understand what it is to be another person (self, consciousness). We operate with a patched-up cobbled-together concept based loosely on what we know of ourselves.

            But then isn’t our own self-concept entirely satisfactory? Don’t we at least understand ourselves, i.e. what it is for us to have a mental state? Surely I know what it is for me to be in pain! This is tricky territory, but let me offer the following remarks. First, there is the mind-body problem: do my concepts of my own mind enable me to grasp how that mind relates intelligibly to my body? Clearly not, so we have reason to suppose that our concept does not disclose the full reality of what it covers: there is much more to my mind than my concepts reveal (or can reveal). But second, and less familiar, we don’t really have a clear conception of how mental reality in our own case fits into the broader world around us. We don’t clearly see our place in the causal nexus. On the one hand, there are physical objects around us, including our own body, and there is an objective conception of these objects (encapsulated in physics); on the other, there are our inner subjective states that we conceive in a different way entirely. But we can’t integrate these two realms, these two viewpoints. Here is a simple way to put it: while I can observe causal relations between physical objects, I cannot observe causal relations between the mental and physical. Note the word “observe” here: true, I know of such causal relations, but I don’t observe them with my senses. I never see a wound causing me to feel pain, simply because I don’t see my pains. What I have is a kind of mongrel conception of the psychophysical nexus—a bringing together of the objective and the subjective. But the real world must be a unified world, i.e. one in which both mental and physical seamlessly coexist. This means that it must be possible in principle (if not for us in practice) to attain an objective conception of mental reality, so that our limited perspective on our own minds gives way to something more universal (an “absolute conception” in the well-worn phrase). Our present mental concepts fail to provide this detached point of view, because they depict our minds from the point of view of those minds—I describe my mental states as they strike me from the first-person point of view, not as they fit into the broader reality of which they are a part.  [1] Indeed, it is hard for me to think of them as part of reality at all (and easy for me to think of reality as part of my mind): I think of the world as my world, not of my mind as just an element in a far broader totality. It is a strain for me even to acknowledge that my mind is merely one ephemeral speck in a vastly more extensive reality.

            Again, I have no great trouble seeing my body in this way, distressingly so. I see its causal relations to other things and I conceive of it as part of a totality of other physical objects—just one object among others equally real. But I don’t think of my mind in this modest and self-effacing way—I don’t think of it as just another thing in a vast array of things. I think of myself as a throbbing center, not as a dot among other dots. My concept of my mind forces me to conceive of it in ways that fail to do justice to its limited and contingent place in the natural order, which is why I find it so difficult to think of myself in this way—and why whole systems of thought have arisen that deny the located and confined nature of mind. In a sense, our concept of mind exaggerates the importance of our own mind. The digestive system we can see for what it is biologically, but the mind resists this kind of demotion, this embedding in the natural biological order. We cannot view ourselves sub specie aeternitatis. And what point would there be in equipping us with such fancy conceptual equipment? Does natural selection care that we can’t represent ourselves from a God’s-eye perspective? Can other animals think of themselves thus objectively? Our concept of mind leaves us in no doubt that our minds are real, but it fails to inform us of how this reality fits into reality as a whole (materialism and idealism try to fill the gap). We are aware that we fit into the natural order, but we have no clear conception of how this fitting occurs, beyond some sketchy ideas of causal connection.

            Mental reality is distributed quite widely in the universe—in oneself, in other humans, and in other species—but our concept of mind fails to encompass this reality. Moreover, the concept fails to locate one’s own mental reality in a broader reality. By contrast, our concept of body doesn’t suffer from these limitations. Thus our concept of mind is limited and defective in important respects. It is not a very good concept. Maybe it could be improved, but as it stands it is rather crude.  [2]

 

  [1] Anyone familiar with the work of Thomas Nagel will recognize these kinds of considerations.

  [2] If we inquire what the biological purpose of the concept of mind is, the following answer seems on the right lines: to inform us of our own mental state, and to enable us to predict the behavior of others. Fulfilling these two purposes requires little in the way of adequate representation of the full nature of mental states. What would knowing what it’s like to be a bat do for us biologically? It’s not as if we have to mate with them! Nature minimizes knowledge where there is no need of it.

Origins of the Free Will Problem

In its modern form the problem of free will is supposed to arise from the scientific discovery (or perhaps scientific presupposition) that determinism is true. It is a tenet of modern science (at least of the Newtonian kind) that every event in nature is the result of universal laws that allow of no exceptions, and this uniformity is supposed to rule out the existence of free acts. If so, the absence of free will is an empirical discovery, because it is an empirical discovery that determinism is true. If physics had turned out differently, we would not have had a reason to deny the existence of free will. In earlier times the threat to free will came from theology in the form of God’s omniscience: if God has complete foreknowledge, then he knows everything a human agent will do; but then there is no free will. Again, this reason to deny free will issues from considerations extrinsic to the concept of free will, in certain facts about God’s nature. If theology had been different, there would not have been a reason to deny free will. We can also imagine another sort of empirical argument for denying free will, viz. that the unconscious, as conceived by Freud, is seething with passions that compel us to act as we do, so that nothing we do is free from such internal coercion. According to this argument, we are forced to act as we do by an unconscious agency that intrudes on our conscious deliberations, robbing us of our freedom. We have the illusion that we are free, as we do under the scientific and theological arguments just mentioned, but in fact we are not—and this is a scientific discovery of psychoanalysis.

            Now I am not concerned here with whether these arguments are sound, or with whether their premises are true—universal determinism, divine foreknowledge, and the all-powerful unconscious. I mention them in order to distinguish them from another type of argument against the possibility of free will, namely that it is inherent in the concept of free will that we are not free. This is what might be called an intrinsic argument against free will, one that stems from the concept itself not from ancillary considerations of an empirical or factual nature. I think this is the more important argument philosophically, but again that is not my primary concern; I wish merely to distinguish the two sorts of argument, as well as to assess the cogency of the intrinsic argument. But first I want to articulate that argument so as to bring out its structure. I also want to note how extraordinary it would be if such an argument were successful.

            There are two components to the concept of free will as we have it, singly necessary and jointly sufficient. The first is that free actions are in some way responsive to desires and other psychological states: we act as we do because of our desires (etc.). There are many ways this responsiveness relation can be characterized but let me simply say that actions are determined by desires, where this does not imply the doctrine of determinism—given the desire (etc.) the action follows.  [1] This property is enshrined in the following proposition: if two individuals are exactly alike in their psychological states, they must act in the same way. An action is not free if it violates this principle, since then it would just be random in relation to desire—as when two psychologically identical individuals with a desire for ice cream act in the one case by buying an ice cream and in the other by tying their shoelaces (say as a result of a nervous spasm). This component of the concept captures the necessary condition that an act is free only if it is in accordance with the agent’s wishes. Thus we could say that the concept of freedom includes “desire-determination”. The second component is that a free action is one that has alternatives: the agent did a certain thing but he could have done otherwise. He had a genuine choice; his particular course of action was not forced on him. He did not act under duress, being given no alternative to what he did. I went to the movies but I could have stayed home and watched TV; no one made me go to the movies against my will. I wasn’t given an offer I couldn’t refuse, either by man or nature. This also is a necessary condition of freedom—that I had alternatives. Call this the “alternatives requirement”. Then we could plausibly claim that desire-determination and the alternatives requirement are individually necessary and jointly sufficient for free action.

            Now we know how the anti-freedom argument will go: these two components are said to be inconsistent with each other. The concept of freedom is contradictory because it combines an insistence on determination with an equally strong insistence that the agent could have acted otherwise. But if the act was determined (fixed by, caused by, controlled by) a desire, then it did not admit of alternatives–contrary to the second component. Let us pause to take in how remarkable that conclusion is: we are familiar with philosophical arguments that purport to show that certain concepts harbor hidden contradictions (truth, vague concepts, the concept of a set), but it is another thing to contend that a familiar concept involves a direct contradiction in its very definition, as plain as the nose on your face. Somehow human beings have fashioned a concept that is manifestly contradictory, and yet they have failed to notice this fact. Such stupidity! Such insanity! I mean, what the…  The argument is telling us that our very conceptual scheme, not something arising from an extrinsic fact of physics or theology or psychology, contains a blatant, glaring, and embarrassing logical screw-up. It’s as if we had and used the concept of a “smountain”, where something is a smountain if and only if it is both a very large heap of rocks and also a very small heap of rocks. According to the standard argument, free actions (so-called) involve both determination and the absence of determination: but nothing could ever satisfy those contradictory conditions. Therefore the will is not free and no one ever acts freely.
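
            The structure of the argument can be set out schematically (the notation is mine, offered only as a summary of the reasoning above, not as the author’s formulation):

\[
\begin{aligned}
&(1)\quad \mathrm{Free}(a) \leftrightarrow \mathrm{DesireDet}(a) \wedge \mathrm{CouldOtherwise}(a) && \text{(the two components)}\\
&(2)\quad \mathrm{DesireDet}(a) \rightarrow \neg\,\mathrm{CouldOtherwise}(a) && \text{(the alleged incompatibility)}\\
&(3)\quad \therefore\ \neg\,\mathrm{Free}(a) \text{ for every action } a && \text{(from (1) and (2))}
\end{aligned}
\]

On this rendering the compatibilist response sketched later amounts to denying premise (2): on the ordinary reading of “could have done otherwise”, desire-determination does not exclude alternatives.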

            As I said, this is an extraordinary result: common sense is grievously contradictory. And on a very important matter too, since moral responsibility hinges on the possibility of free will—you would have thought we would be more careful about making our concepts coherent! We condemn people to severe punishment relying on a concept that is transparently contradictory. This is appalling: but it is good that philosophy exists to expose the conceptual bankruptcy of the whole thing. And the trouble does not stem, forgivably, from an empirical discovery that casts freedom into doubt, or from arcane theology, but from the very concepts we employ every day. This must surely be cause for human shame, wringing of hands, reparations, etc. Thousands of years of obvious conceptual confusion!

            Or perhaps the argument is wrong. Some have contended that one or the other component of the concept is dispensable: desires don’t in any way determine actions, or they do but there is no need for the existence of alternatives. Others have contended that both conditions are necessary but are really compatible upon deeper analysis. I am with these contenders, these reconcilers, but I won’t enter into a detailed defense of that position here. What I will do is offer some sketchy remarks about the notion of being able to act otherwise that I hope will ease the pressure to suppose inconsistency in the concept.

            The first thing to understand is that “I could have done otherwise” does not mean that it is metaphysically possible for two agents to be exactly alike psychologically and yet act differently, still less that they could be exactly alike physically and yet act differently. I have no such thoughts when I reflect that I have alternatives. If I survey my options for what I will do this afternoon—go to the movies, stay home and watch TV, practice my golf swing—I am not contemplating all the ways I can go against my desires and other psychological states; I am enumerating my desires and wondering which one is the most important to me today. I am reviewing my possible choices. If someone forced me to select one of the options, then I would have no choice; but if no one does, then I do have a choice—I have alternatives. This is not a matter of some remarkable kind of metaphysical modality but merely an expression of the fact that it is my desires that count, not something alien and inimical to them. The paradigm freedom-destroying agency is someone forcing you to do what you have no desire to do. Freedom is acting on your desires, not on someone else’s desires under conditions of duress. But in addition there are many other kinds of freedom-destroyer: not just real threats but perceived threats, internal compulsions like phobias and obsessions, unconscious biases and motivations, motor dysfunctions, insanity, brainwashing, hypnosis, epilepsy, cowardice, etc. All these can interfere with the agent’s considered judgment about what it would be best to do.

The concept of being able to do otherwise is a portmanteau concept, collecting together disparate conditions and causal factors. It is imprecise and context-dependent. To say that someone acted freely is to rule out any of an open-ended list of possible disruptive factors; and there will be questions of degree here—how much the agent’s freedom was compromised. But there is no suggestion that universal determinism or divine foreknowledge excludes freedom: when I think of myself as free to act in a certain way I don’t think of myself as unpredictable by God or as hovering above the web of causal laws that govern the universe. I have much humbler matters on my mind. The mistake is to interpret a practical portmanteau concept as a unitary metaphysical concept. To be free is not to escape the causal net or God’s all-seeing eye but to act as you see fit—to do what you want when you want. Neither God nor Newton can take that away from you.

            That is a familiar compatibilist line and I don’t propose to elaborate and defend it further. My main point is that the concept of freedom is not inherently contradictory because it does not imply anything contrary to desire-determination. So our conceptual scheme as it relates to human behavior is not riddled with logical error. The two components of the concept of freedom are not in tension with each other but complement each other. We misunderstood the “grammar” of “I could have done otherwise”, as Wittgenstein would say; and indeed it does have the appearance of a modal claim analogous to “the particle could have gone in a different direction”. But it occurs in its own “language game” and carries no metaphysical punch: that is, its actual meaning is given by the range of things that rule out freedom as ordinarily conceived. But this is consistent with allowing that some extrinsic considerations could rule freedom out: we might not be as free as we fondly suppose. Determinism and divine foreknowledge don’t do this, as compatibilists have long urged, but we can imagine other types of threat, as with the Freudian argument mentioned earlier. Suppose it turned out that human action is largely or wholly governed by unconscious passions that we have no control over; thus I am subject to compulsions of which I have no conscious knowledge. Then it would be plausible to maintain that my actions are not free, or not as free as I thought: I thought sending a birthday card to my father was a free act of kindness, but actually I had a hidden motive that made me select a card that would hurt him deeply; or I play tennis with him not in order to enjoy a game together but to relish (unconsciously) the opportunity of beating him. This really would undermine my freedom, because it would sharply limit the desires I can act on: in the end I am always compelled to act on patricidal desires stemming from my unresolved Oedipus complex, not on other desires I might have. It is as if my unconscious operates as an external agency bending me to its will—I am a puppet not a puppeteer. That would count as a scientific discovery that undermines free will, and it does so within the terms stipulated by the concept. Here incompatibilism would be indisputable. So it is not that the concept of freedom is necessarily immune from skeptical attack—despite being internally perfectly coherent. But the attack has to be of the right form; in particular, it must cast doubt on the idea of genuine alternatives that an attribution of freedom requires. I can undermine a specific attribution of freedom by pointing to external interference or to inner compulsion; well, I could also do this more globally, even to the point of contesting any attribution of freedom. We are not guaranteed to be free. And this is a matter of common sense, not sophisticated science or theology or metaphysics. However, and fortunately, no such empirical threat actually exists, since the Freudian picture is not to be believed on empirical grounds—and even Freud didn’t contend that human acts are never free because of the malign effects of the unconscious. The same goes for the idea that we are all so massively brainwashed that none of our actions correspond to our own authentic desires: we really desire such and such, but because of propaganda we never do such and such, instead doing what we don’t really desire to do. 
In principle that could be so, in which case our freedom would be limited or non-existent, merely an illusion of freedom (but do we really never want to eat?). However, there is no good reason to suppose that this is the situation in which we find ourselves: we actually do have real desires that we act upon without impediment, external or internal. So we are free (maybe some people more than others).

            Potential threats to freedom can arise from various sources, some more persuasive than others. Philosophically the most interesting potential threat arises (supposedly) from the content of the concept itself, which does not depend on contingent facts of the universe. But (a) it is extremely unlikely that so deeply embedded and universal a concept could harbor the kind of contradiction some philosophers have suspected, and (b) it turns out that the alleged contradiction can be defused by paying careful attention to the actual way the concept of alternatives is employed. Still, it is somewhat surprising that the concept should be as problematic as it has proved to be—so prone to misunderstanding. It is not as if the anti-free will argument immediately strikes us as absurd; on the contrary, it is only too easy to be caught up in it and find it hard to escape its grip. The concept of free will is understandably misunderstood. The free will problem is a paradigm of philosophy because it can easily seem as if a part of common sense is riddled with confusion and error but on reflection is not—it is hard to see what lies before our eyes. Why should our concepts be so confusing, so liable to misunderstanding? What is wrong with us that we can’t gain a clear understanding of our own clear concepts?

 

  [1] This doesn’t imply the doctrine of determinism understood as the thesis that every event falls under strict laws.

Ontological Commitment

Can there be a criterion of ontological commitment? Can there be a formal test of what a person is ontologically committed to? What a person is committed to is a matter of what he believes or assumes or presupposes or is prepared to act on—a matter, in short, of his attitudes. So the question is whether there is a linguistic litmus test for an attitude of commitment. Can we read a person’s ontology off his verbal productions? Can I figure out my ontological commitments by inspecting my use of language?

            The first thing to observe is that the question is not restricted to matters of existence. As the term is commonly used, “ontological commitment” is taken to refer to what a person takes to exist, so that it is interchangeable with “existential commitment”. That is certainly one form of commitment—what a person believes to exist—but it is not the only form. Consider “chromatic commitment”: what colors you believe things have (whether they exist or not). You may believe that things are colored and you may believe specific color claims—these are your chromatic ontological commitments. Ontology concerns what is so, and color is a matter of what is so. Roses are red and violets are blue—and Santa Claus has a white beard and a red cloak (whether he exists or not). I might believe that colors are unreal and that nothing has them; in that case I am not ontologically committed with respect to color, though I might well believe in the existence of the things commonly said to be colored. Ontological commitment can concern any fact or putative fact: do you believe in that fact or not? Do you believe in moral facts, divine facts, facts about unobservable entities, psychological facts, and so on? Existence is just one kind of ontological commitment: we might say that it concerns one type of property, viz. the property of existence. Does anything have the property of existing? Which things do? Does anything have the property of being colored? Which things do? And so for any property you care to mention. A criterion for existential commitment might be a willingness to affirm “Such-and-such exists”, and a criterion for chromatic commitment might be a willingness to affirm “Such-and-such is red” (and similarly for other kinds of fact). It is artificial to single out existence from other sorts of ontological commitment: it is just one kind of factual commitment. The proper contrast here is with “epistemological commitment”: what we are committed to in the way of knowledge. What is it that we think we know? Do we think there is any knowledge, and if so what is known? We can be committed on questions of being (fact, reality) and we can be committed on questions of knowledge; what we are committed to existentially is just a special case of a more general question.

            The question of providing a criterion of ontological commitment is thus broader than that of providing a criterion of existential commitment. Quine announced, “To be is to be the value of a variable”; he has been paraphrased thus, “What you say there is, you say there is”. That is, you are committed to whatever your sentences mean: if you affirm a sentence that can be true only if certain things exist, then you are committed to the existence of those things. For example, you can’t say, “There are numbers” and then turn round and deny there are numbers: you must be taken at your word. But it is the same with all forms of ontological commitment: if you say, “Roses are red” you can’t turn round and deny that roses are red (same for “good”, “solid”, “conscious”, and so on). To be committed to red things is to describe things as red. You are committed to such facts as your sayings require for their truth. The criterion of commitment is saying. You can’t disavow what you affirm: you can’t say it and then try to take it back. You can’t say it in practice but then disavow it theoretically. You can’t have your ontological cake and eat it. You can’t weasel out of your statements.
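
            For the existential case the criterion is standardly illustrated by regimenting the affirmed sentence into quantificational notation (a textbook rendering, not a quotation from Quine):

\[ \text{“There are numbers”} \;\longrightarrow\; \exists x\, \mathrm{Number}(x), \]

so that anyone who sincerely affirms the sentence is committed to there being objects in the range of the bound variable, namely numbers. By parity, affirming “Roses are red” commits the speaker to roses’ being red, whether or not existence is what is at issue.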

That sounds all very reasonable (indeed trivial—what was the fuss all about?), but actually it runs into difficulties as a formal test of ontological commitment. The idea was to provide a public formal test of ontological commitment, eschewing the vagaries of what a person internally believes. We might think of it as a behavioral criterion for a mental phenomenon: what a person is committed to (believes to be so) is what he affirms in his public utterances. A person believes in unicorns if she affirms, “There are unicorns” or “Unicorns exist”. I determine what I believe in by examining what I say, and I might be surprised at what turns up (I may find that I accept, say, an ontology of events or possible worlds). Thus the criterion is formal and public: it invokes facts of language and it is interpersonally accessible. No need to delve into the inner recesses of a person’s mind.

            But the proposal is obviously problematic. It hardly provides a necessary condition, since you can keep silent about what you believe or may not have language at all; and it is not sufficient, since speech is not always sincere assertion. It is possible to say something and not believe what one says, as in play-acting or elocution practice. Even in assertion you may not be committed to what you assert in the sense that you believe what you say. A liar can’t use his assertions to figure out his ontological commitments. The assertion must be sincere, i.e. you must believe what you assert. But that is what we were seeking a criterion for–belief. Speech is never a sure guide to belief, so we can’t formulate a test of ontological commitment from facts about speech. My ontological commitments can be read off my sincere assertions—if I sincerely assert, “Snow is white”, then I am committed to snow being white—but the commitment comes from the belief not the assertion. No act of speech (or writing) can add up to belief, so there cannot be a formal linguistic criterion of ontological commitment. In order to find out what I am committed to you have to find out what I believe; what I say isn’t going to get you there. It may be true that what I say there is I say there is, but it doesn’t follow that that is what I believe there is. The most that can be claimed is that we have a criterion for the ontological commitments of what someone says—a speech act is “committed” to what is required for its truth—but this is a far cry from the ontological commitments of a person. What I believe is not the same thing as what I say, since I may not give voice to my beliefs and, if I do, I may not mean what I say. My ontological commitments are fixed by my beliefs—but that is a trivial tautology not an illuminating criterion.

            There is also the case of a speaker actively rejecting the ontology of his sincere assertions. Suppose Meinong is right about definite descriptions—they really do denote non-existent subsistent objects. Then whenever Russell or anyone else makes a statement involving a definite description his speech act is committed to such objects: he accepts the truth of the uttered sentence and its truth requires non-existent objects (in the empty case). But Russell himself vehemently rejects such Meinongian objects—he doesn’t believe in them no matter what his utterances may entail. In this case ontological beliefs cannot be read off sincere assertion plus correct semantic analysis. The same is true for any type of statement: the speaker may reject what his sentences semantically entail. He is not committed to what his sentences are committed to, i.e. what is required for their truth. He may regard those sentences as logically defective and explicitly reject their entailments; they can’t force beliefs upon him. There is always a logical gap between language and belief, so we cannot derive a criterion of ontological commitment from features of language. Perhaps non-linguistic action could supply such a criterion (think of animal ontological commitment), but what we say can never constitute what we believe. What I say there is may never be what I believe there is, and similarly for every other type of fact. Ontological commitment is a matter of private belief not public utterance.

 
