Perceptual Duality


The traditional distinction between primary and secondary qualities has clear implications for the nature of perception. Primary qualities are possessed by objects independently of perceivers and do not owe their existence to perceivers: they are objective. Secondary qualities are dependent on perceivers, being projected by the mind onto objects that otherwise would lack them: they are subjective. Thus percepts have a subjective-objective duality: they are part subjective and part objective. For example, we see colors and shapes together, the former being contributed by the mind, the latter by the world outside the mind. These qualities become objects of perception by two different mechanisms: color qualities are manufactured by the mind and an operation of projection “spreads” them onto the physical world, while shape qualities impinge on the mind from outside and become perceptual contents by an operation of “imprinting” or “abstraction.” Colors we impose on the world, while shapes impose themselves on us: from inside out and from outside in. Nevertheless the qualities interlock in a visual percept: we see color and shape simultaneously and side-by-side—we see colored shapes. Thus perceptual content is hybrid, double aspect, a mixture of opposites. We see both what comes from within and what comes from without. No doubt much of this is puzzling and problematic, even mysterious: how exactly does the projection operation work, and what is the process of abstraction? But it is commonly assumed that something like this dual picture must be true, given the prior distinction between primary and secondary qualities. Part of perception derives from the mind and part derives from the world, yet the two sources are miraculously combined into a unitary percept.

            Not everyone agrees with this bipartite picture. The unreconstructed naïve realist will insist that all the qualities manifest in perception reflect external conditions of objects—including colors, sounds, tastes, smells. Nothing comes from us; everything enters the mind from outside. This theorist will resist the idea that perception involves any sort of error, such as that objects seem to have colors they don’t really have. Perception is simply openness to a world that exists whether we exist or not. Color is as objective as shape. On the other hand, a Kantian will maintain that the content of perception is wholly subjective—it’s all like color considered as a secondary quality. Shape qualities derive as much from the mind as color qualities. Thus there is no perceptual duality: subjectivity applies across the board. Both positions cleave to homogeneity of perceptual content: all objective or all subjective. But the position sketched above rejects this homogeneity, holding that perception is essentially a joining of disparate qualities. Indeed, it is typically assumed that the mind cannot of its own devices produce shape content, and also that the world cannot produce color content. We have to wait for the world to create shape percepts in us, but we can (and must) proceed on our own when it comes to color, the world being impotent to produce color percepts.

            The dual aspect picture suggests that our perceptual faculties first present a kind of bloodless sketch to us, which we then fill in by supplying qualities not present in it—like coloring in an outline. The primary qualities impinge on the mind and offer themselves for representation, but we have to complete the picture with appropriate secondary qualities. Of course, this process doesn’t involve any temporal separation between the initial sketch and the final coloring in, as if we had to wait for the colors to arrive, for we can make no sense of the idea of seeing shapes without accompanying colors (nor vice versa). But, logically speaking, there are two separate mental operations, of introjection and projection, respectively (maybe God can see objects without colors and can apply colors to objects at will). This is doubtless all very curious, but seemingly unavoidable, once we accept the picture of perception suggested by the traditional distinction. There must be some way in which subjective and objective qualities magically come together–coalesce, fuse– because we see both colors and shapes together, despite their being ontologically quite different. The inner and outer must be made to interlock in such a way that the unity of the percept is preserved.

            It seems to me that the situation is sufficiently problematic that we should seek an alternative. I take it that the subjectivity of color is undeniable, so I reject the type of naïve realism mentioned above; but the Kantian line is worth pursuing. So let’s try out the following idea: the shape qualities we perceive are also products of the mind, derived from its own inner resources, just like color qualities. There are serious problems with the notion that we obtain ideas of shape by abstraction from experience; and there are reasons to suppose that the shapes we see are not the shapes exemplified in the objective physical world. I won’t say much about these points because they are well known; my question is whether adopting the Kantian line eases the problem of perceptual content—what we might call “the combination problem”. So, by way of reminder: it has been widely contended that the geometry of the physical world does not fit the perceptual geometry we naturally bring to it (non-Euclidian versus Euclidian, roughly), so the geometrical figures we perceive are not identical to those existing in objective reality. This is easy to accept for other species: we don’t suppose that the visual experience of a mouse or a snake correctly captures the true geometry of the material world. The mind (brain) invents its own geometry, geared to its biological requirements, and the corresponding qualities may be as subjectively constituted as color qualities. Secondly, some of the primary qualities we perceive simply don’t belong to the objective material world, so they must be contributed by the mind–for example, solidity and smoothness. Objects don’t have the kind of continuous structure they appear to, and they don’t have smooth edges. The world is more granular than it appears. 
We attribute these qualities to things in virtue of the perceptual apparatus we possess, so the mind (brain) must be generating the corresponding qualities: we see things as solid and smooth, and hence ascribe those qualities to things. We can’t extract these qualities from objects, since they don’t have them, so we must get them from somewhere else, namely from our own resources. There are also the well-known arguments from illusion: when you see a penny as elliptical the property you see is not present in the object, so it can’t derive from that object; it must come from within. The shape you see is not the shape the penny objectively has. In addition, perceptions of color must of necessity also be perceptions of shape, and colors are endogenously derived, so it makes sense that both should have the same source (what about hallucinations of red cubes before any cubes have been seen?). There seems every reason to believe that our perception of shape issues from an internal system that contains shape representations without reliance on an outside stimulus; and every reason to suppose that the qualities involved do not coincide with (are not identical with) the actual geometry of the physical world. The color system and the shape system are essentially connected, with both projected onto the world in a single package.

            There are two things I am not saying here. One thing I am not saying is that the shape qualities we perceive are secondary qualities in the manner of color qualities, i.e. dispositions to cause experiences in perceivers. That looks like a false analysis in view of the non-relativity of attributions of shape. The qualities may be projected but they are not dispositional in this way (though they may be dispositional in other ways). The question of analysis is not the same as the question of origin or coincidence with objective traits of reality. Second, I am not saying that there is no systematic correspondence between perceived shape and objective shape. No doubt there is a close correspondence between them, given that perceivers have to live in a world of shaped objects and can’t afford to be completely wrong about their shapes. It is not that Euclidian and non-Euclidian geometry have nothing in common; in fact, they converge in many ways. The qualities we perceive act as surrogates for the objective traits of things, though the two are not identical. Just as colors correspond to wavelengths, so shapes (as perceived) correspond to the actual geometrical properties of things (which may come down to potentials in a force field). Objects certainly have edges, but the physical reality of edges need not coincide with edges as they are perceived.

            Does this position deserve to be described as idealist? Yes and no. No, in that it doesn’t reduce reality to purely mental reality (there is a non-mental external world out there); and no, in that the shape qualities are not to be conceived as themselves mental (unlike experiences of them). But yes, in that the world we immediately perceive is a world of our own devising: its constituents (sensible qualities) derive from within the mind and are not found in objective reality. The world we immediately perceive is a projected world, much as Kant supposed. There are in fact two worlds: the phenomenal world of perception and the objective world that lies on the other side of perception. We can say that objects have the qualities we perceive them to have, but this is so only by dint of the (mysterious) mental act of projection—just as they have colors. It is just that objects don’t have these qualities independently of the minds that project them (unlike the qualities attributed by physics). We can even stick to a version of naïve realism, since we see non-mental qualities that objects have (albeit derivatively), not mental sense-data that intervene between the mind and non-mental reality. The mind has access to qualities (a type of universal) that it projects onto the world, but these qualities are not aspects of the mind—they are not sensations.

            We should accordingly reject the old dichotomy of primary and secondary qualities in favor of a threefold distinction. There are the intrinsic properties of objective reality, the kind of thing talked about in physics (hopefully)—mass, charge, fields, etc. Then there are the properties that consist in dispositions to appear in certain ways—secondary qualities in the traditional sense. But third, there are properties like perceived shape that belong neither to objective reality nor to the class of dispositions to appear; these properties fit neither of the traditional categories. This third class is more closely interwoven with objective reality than the usual secondary qualities, which track nothing very significant in the external world (hence the non-relativity I mentioned earlier). The essential point is that perceived shape does not coincide with objective shape (and similarly for other so-called primary qualities like solidity, length, volume, etc.). The shapes (etc.) that we see are not the shapes that populate the objective universe. The class of properties traditionally counted as primary is thus more heterogeneous than was thought in the seventeenth century, when mechanism was still dominant and modern physics hadn’t questioned so much of common sense. It tended then to mean “any property of objects that is not secondary”. The possibility that the ontology of physics might be far removed from common perception was not really contemplated, so the assumption was that the way we see objects would figure in the correct science of those objects. It was Kant who began the movement to separate the phenomenal world from the objective world described by physics. The unexpected upshot is that we are now free to reject the double aspect theory of perceptual content, by regarding all such content as the product of the mind. We needn’t entertain the problematic idea of the subjective-objective amalgam.

            Let me try to make the position vivid by imagining a toy world. In this world the geometry of reality is some radically non-Euclidian geometry that is not even capable of being perceptually represented by the creatures living there—perhaps it has 10 dimensions. The creatures nevertheless need a way to perceive their environment, so they invent (or their genes invent) a perceptual geometry that is psychologically manageable and succeeds in tracking external reality well enough to survive. The qualities denoted in this system are not found in objective reality, though they characterize how that reality is perceived. In this world (perceived) shape and color both originate in the mind, so there is no combining of the subjective and the objective, the endogenous and the exogenous. Then that is how it is in our world: we have constructed perceptual representations that serve our purposes but which don’t coincide with, or derive from, the objective features of things.     

 


Pain and Unintelligent Design

 

 

 


Pain is a very widespread biological adaptation. Pain receptors are everywhere in the animal world. Evidently pain serves the purposes of the genes—it enables survival. It is not just a by-product or holdover; it is specifically functional. To a first approximation we can say that pain serves the purpose of avoiding danger: it signals danger and it shapes behavior so as to avoid it. It hurts of course, and hurting is not good for the organism’s feeling of wellbeing: but that hurt is beneficial to the organism because it serves to keep it from injury and death. So the story goes: evolution equips us with the necessary evil of pain the better to enable our survival. We hurt in order to live.  If we didn’t hurt, we would die. People born without pain receptors are exceptionally prone to injury. So nature is not so cruel after all. Animals feel pain for their own good.

            But why is pain quite so bad? Why does it hurt so much? Is the degree of pain we observe really necessary for pain to perform its function? Suppose we encountered alien creatures much like ourselves except that their pain threshold is much lower and their degree of pain much higher. If they stub their toe even slightly the pain is excruciating (equivalent to us having our toe hit hard with a hammer); their headaches are epic bouts of suffering; a mere graze has them screaming in agony. True, all this pain encourages them to be especially careful not to be injured, and it certainly aids their survival, but it all seems a bit excessive. Wouldn’t a lesser amount of pain serve the purpose just as well? And note that their extremes of pain are quite debilitating: they can’t go about their daily business with so much pain all the time. If one of them stubs her toe she is off work for a week and confined to bed. Moreover, the pain tends to persist when the painful stimulus is removed: it hurts just as much after the graze has occurred. If these creatures were designed by some conscious being, we would say that the designer was an unintelligent designer. If the genes are the ones responsible, we would wonder what selective pressure could have allowed such extremes of pain. Their pain level is clearly surplus to requirements. But isn’t it much the same with us? I would be careful not to stub my toe even if I felt half the pain I feel now. The pain of a burn would make me avoid the flame even if it was much less fierce than it is now. And what precisely is the point of digestive pain or muscle pain? What do these things enable me to avoid? We get along quite well without pain receptors in the brain (or the hair, nails, and tooth enamel), so why not dispense with them for other organs too? Why does cancer cause so much pain? What good does that do? Why are we built to be susceptible to torture?
Torture makes us do things against our wishes—it can be used coercively—so why build us to be susceptible to it? A warrior who can’t be tortured is a better warrior, surely. Why allow chronic pain that serves no discernible biological function? A more rational pain perception system would limit pain to those occasions on which it can serve its purpose of informing and avoiding, without overdoing it in the way it seems to. In a perfect world there would be no pain at all, just a perceptual system that alerts us non-painfully to danger; but granted that pain is a more effective deterrent, why not limit it to the real necessities? The negative side effects of severe pain surely outweigh its benefits. It seems like a case of unintelligent design.

            Yet pain evidently has a long and distinguished evolutionary history. It has been tried and tested over countless generations in millions of species. There is every reason to believe that pain receptors are as precisely calibrated as visual receptors. Just as the eye independently evolved in several lineages, so we can suppose that pain did (“convergent evolution”). It isn’t that pain only recently evolved in a single species and hasn’t yet worked out the kinks in its design (cf. bipedalism); pain is as old as flesh and bone. Plants don’t feel pain, but almost everything else does, above a certain level of biological complexity. There are no pain-free mammals. Can it be that mammalian pain is a kind of colossal biological blunder entailing much more suffering than is necessary for it to perform its function? So we have a puzzle—the puzzle of pain. On the one hand, the general level of pain seems excessive, with non-functional side effects; on the other hand, it is hard to believe that evolution would tolerate something so pointless. After all, pain uses energy, and evolution is miserly about energy. We can suppose that some organisms experience less pain than others (humans seem especially prone to it)—invertebrates less than vertebrates, say—so why not make all organisms function with a lower propensity for pain? Obviously, organisms can survive quite well without being quite so exquisitely sensitive to pain, so why not raise the threshold and reduce the intensity?

            Compare pleasure. Pleasure, like pain, is motivational, prompting organisms to engage rather than avoid. Food and sex are the obvious examples (defecation too, according to Freud). But the extremes of pleasure are never so intense as the extremes of pain: pain is really motivational, while pleasure can be taken or left. No one would rather die than forfeit an orgasm, but pain can make you want to die. Why the asymmetry? Pleasure motivates effectively enough without going sky-high, while excruciating pain is always moments away. Why not regulate pain to match pleasure? There is no need to make eating berries sheer ecstasy in order to get animals to eat berries, so why make being burnt sheer agony in order to get animals to avoid being burnt? Our pleasure system seems designed sensibly, moderately, non-hyperbolically, while our pain system goes way over the top. And yet, if so, pain would be biologically anomalous, a kind of freak accident. It’s like having grotesquely enlarged eyes when smaller eyes will do. Pleasure is a good thing biologically, but there is no need to overdo it; pain is also a good thing biologically (not otherwise), but there is no need to overdo it.

            I think this is a genuine puzzle with no obvious solution. How do we reconcile the efficiency and parsimony of evolution with the apparent extravagance of pain, as it currently exists? However, I can think of a possible resolution of the puzzle, which finds in pain a unique biological function, or one that is uniquely imperative. By way of analogy consider the following imaginary scenario. The local children have a predilection for playing over by the railway tracks, which feature a live electrical line guaranteed to cause death in anyone who touches it. There have been a number of fatalities recently and the parents are up in arms. There seems no way to prevent the children from straying over there—being grounded or conventionally punished is not enough of a deterrent. The no-nonsense headmaster of the local school comes up with an extreme idea: any child caught in the vicinity of the railway tracks will be given twenty lashes! This is certainly cruel and unusual punishment, but the dangers it is meant to deter are so extreme that the community decides it is the only way to save the children’s lives. In fact, several children, perhaps skeptical of the headmaster’s threats, have already received this extreme punishment, and as a result they sure as hell aren’t going over to the railway tracks any time soon. An outsider unfamiliar with the situation might suspect a sadistic headmaster and hysterical parents, but in fact this is the only way to prevent fatalities, as experience has shown. Someone might object: “Surely twenty lashes is too much! What about reducing it to ten or even five?” The answer given is that this is just too risky, given the very real dangers faced by the children; in fact, twenty lashes is the minimum that will ensure the desired result (child psychologists have studied it, etc.). 
Here we might reasonably conclude that the apparently excessive punishment is justified given the facts of the case—death by electrocution versus twenty lashes. The attractions of the railway tracks are simply that strong! We might compare it to taking out an insurance policy: if the results of a catastrophic storm are severe enough we may be willing to part with a lot of money to purchase an insurance policy. It may seem irrational to purchase the policy given its steep price and the improbability of a severe storm, but actually it makes sense because of the seriousness of the storm if it happens. Now suppose that the consequences of injury for an organism are severe indeed—maiming followed by certain death. There are no doctors to patch you up, just brutal nature to bring you down. A broken forelimb can and will result in certain death. It is then imperative to avoid breaking that forelimb, so if you feel it under dangerous stress you had better relieve that stress immediately. Just in case the animal doesn’t get the message the genes have taken out an insurance policy: make the pain so severe that the animal will always avoid the threatening stimulus. Strictly speaking, the severe pain is unnecessary to ensure the desired outcome, but just in case the genes ramp it up to excruciating levels. This is like the homeowner who thinks he should buy the policy just in case there is a storm; otherwise he might be ruined. Similarly, the genes take no chances and deliver a jolt of pain guaranteed to get the animal’s attention. It isn’t like the case of pleasure because not getting some particular pleasure will not automatically result in death, but being wounded generally will. That is, if injury and death are tightly correlated it makes sense to install pain receptors that operate to the max. No lazily leaving your hand in the flame as you snooze and suffering only mild discomfort: rather, deliver a jolt of pain guaranteed to make you withdraw your hand ASAP.
Call this the insurance policy theory of pain: don’t take any chances where bodily injury is concerned–ensure you are covered in case of catastrophe.  [1] If it hurts like hell, so be it—better to groan than to die. So the underlying reason for the excessiveness of pain is that biological entities are very prone to death from injury, even slight injury. If you could die from a mere graze, your genes would see to it that a graze really stings, so that you avoid grazes at all costs. Death spells non-survival for the genes, so they had better do everything in their power to keep their host organism from dying on them. The result is organisms that feel pain easily and intensely. If it turned out that those alien organisms I mentioned that suffer extreme levels of pain were also very prone to death from minor injury, we would begin to understand why things hurt so bad for them. In our own case, according to the insurance policy theory, evolution has designed our pain perception system to carefully track our risks in a perilous world. It isn’t just poor design and mindless stupidity that have made us so susceptible to pain in extreme forms; this is just the optimum way to keep us alive as bearers of those precious genes (in their eyes anyway). We inherit our pain receptors from our ancestors, and they lived in a far more dangerous world, in which even minor injuries could have fatal consequences. Those catastrophic storms came more often then.

            This puts the extremes of romantic suffering in a new light. It is understandable from a biological point of view why romantic rejection would feel bad, but why so bad? Why, in some cases, does it lead to suicide? Why is romantic suffering so uniquely awful?  [2] After all, there are other people out there who could serve as the vehicle of your genes—too many fish in the sea, etc. The reason is that we must be hyper-motivated in the case of romantic love because that’s the only way the genes can perpetuate themselves. Sexual attraction must be extreme, and that means that the pain of sexual rejection must be extreme too. Persistence is of the essence. If people felt pretty indifferent about it, it wouldn’t get done; and where would the genes be then? They would be stuck in a body without any means of escape into future generations. Therefore they ensure that the penalty for sexual and romantic rejection is lots of emotional pain; that way people will try to avoid it.  It is the same with separation: the reason lovers find separation so painful is that the genes have built them to stay together during the time of maximum reproductive potential. It may seem excessive—it is excessive—but it works as an insurance policy against reproductive failure. People don’t need to suffer that much from romantic rejection and separation, but making them suffer as they do is insurance against the catastrophe of non-reproduction. It is crucial biologically for reproduction to occur, so the genes make sure that whatever interferes with that causes a lot of suffering. This is why there is a great deal of pleasure in love, but also a great deal of pain–more than seems strictly necessary to get the job done. The pain involved in the loss of children is similar: it acts as a deterrent to neglecting one’s children and thus terminating the genetic line. Emotional excess functions as an insurance policy against failure in a biologically crucial event.
Extreme pain is thus not so much maladaptive as hyper-adaptive: it works to ensure that appropriate steps are taken when the going gets tough, no matter how awful for the sufferer. It may be, then, that the amount of pain an animal suffers is precisely the right amount all things considered, even though it seems surplus to requirements (and nasty in itself). So at least the insurance policy theory maintains, and it must be admitted that accusing evolution of gratuitous pain production would be uncharitable to evolution.   

            To the sufferer pain seems excessive, a gratuitous infliction, far beyond what is necessary to promote survival; but from the point of view of the genes it is simply an effective way to optimize performance in the game of survival. It may hurt us a lot, but it does them a favor. It keeps us on our toes. Still, it is puzzling that it hurts quite as much as it does.  [3]

 

  [1] We can compare the insurance policy theory of excessive pain to the arms race theory of excessive biological weaponry: they may seem pointless and counterproductive but they result from the inner logic of evolution as a mindless process driven by gene wars. Biological exaggeration can occur when the genes are fighting for survival and are not too concerned about the welfare of their hosts.

  [2] Romeo and Juliet are the obvious example, but the case of Marianne Dashwood in Jane Austen’s Sense and Sensibility is a study in romantic suffering—so extreme, so pointless.

  [3] In this paper I simply assume the gene-centered view of evolution and biology, with ample use of associated metaphor. I intend no biological reductionism, just biological realism.


Our Concept of Mind

                                               

 

 


How good is our concept of mind—how extensive, how accurate, how penetrating? I shall suggest that it is not very good—limited, misleading, shallow. It is much less good than our concept of body. It covers mental reality only ineptly, incompetently. There are three areas to consider: alien minds, other minds, and our own mind. In each of these areas our concept of mind runs into trouble.

            Consider minds very different from our own: not just bats and dolphins that have different senses from our own but animals in general. I don’t know what it is like to be a bat but I am also pretty clueless about what it is like to be a cat or a dog or a mouse. It isn’t that they have phenomenological types of experience I don’t have; rather, the way the different elements of their mind come together baffles me—their desires, thoughts, and emotions (their “form of life”). To be sure, we have some understanding, but we find their inner life enigmatic, just not close enough to our own for full empathy. This is why we think it would be extremely interesting to become a cat for a while and see the world through cat’s eyes (and ears and nose). Similarly for bees and sharks, eagles and elephants. Our concept of mind fails to reveal the inner lives of other animals, even our closest relatives like apes (though we surely have more insight into their minds than we do into the minds of reptiles). And we don’t believe that further diligent inquiry will resolve the enigma. By contrast, there is no such limitation in our concept of body: the bodies of other animals are not enigmatic to us (I don’t mean their bodies as lived by the animal in question, which is an aspect of their mind). We have a perfectly clear grasp of the anatomy and physiology of the bat or cat or elephant—as clear as our grasp of the human body. Our concept of body extends smoothly to alien bodies, while our concept of mind falters when it comes to alien minds—we can’t get our minds around theirs. They present themselves to us as areas of ignorance, impenetrability. That is, our cognitive resources in conceiving of mental reality are inadequate to capture the (full) nature of alien minds—which is why we call them alien. It is not as if when confronted by other species we cheerfully assume that we know just what is going on internally, as someone with more capacious conceptual resources might.
We don’t look into the eyes of a cat and feel we know just what she is thinking and feeling—what her feline point of view is. Thus our concept of mind, unlike our concept of body, is anthropocentric, geared to the human, incapable of affording (full) access to the inner lives of other species. It remains partial and glancing, skewed to our specific mode of sensibility. It would not be surprising to discover that we really have no idea what is on the minds of other species of animal; we might be amazed by what we experience if we suddenly became a mouse for a day. With other humans we think we know where we are—where they are—and so we don’t wonder what it’s like to be another human. But as soon as a mind begins to be unlike our own mind we start to lose our grip on it, as we don’t for bodies unlike our own body. Our concept of mind is thus confined and parochial, failing to capture the full extent of mental reality. There is a lot it doesn’t encompass. To put it differently, our concept of mind exhibits cognitive bias, even to the point of cognitive closure in some instances.  

            But even in the case of our fellow humans our concept of mind betrays its fragility. For we have difficulty understanding what it is for other people to have a mind, even one just like ours. What is it that I think when I think that someone not myself is in pain? Wittgenstein has a famous passage about this: “If one has to imagine someone else’s pain on the model of one’s own, this is none too easy a thing to do: for I have to imagine pain which I do not feel on the model of the pain which I do feel. That is, what I have to do is not simply to make a transition in imagination from one place of pain to another. As, from pain in the hand to pain in the arm. For I am not to imagine that I feel pain in some region of his body. (Which would also be possible.)” (Philosophical Investigations, 302) But don’t we conceive of another’s pain precisely by reference to our own? He has what I have when I am in pain except that he isn’t me. Compare: for him to have a self is for him to have what I have when I have a self except that it isn’t my self. I don’t think this way about his body—I don’t think that he has what I have when I have a body except that it isn’t my body. Instead, we have a general notion of body that we apply both to ourselves and to others, without privileging our own body. In the case of mind, however, we start from ourselves and project outward: but, as Wittgenstein observes, there is a question about why this isn’t just conceiving my mind in his body, which is not at all the same thing as his having a mind of his own. Do I really grasp what it is for him to be in pain, as opposed to myself feeling pain in another body? Do I just rely on misplaced projection, failing to grasp the full reality of another mind? Isn’t our concept of other minds irredeemably egocentric (solipsistic)? Think about it: do you really understand what it is for another person to be in pain in just the way you are in pain (but without his being you)? 
Aren’t you always putting yourself in the other’s place? Again, it is not like the body: here we really do understand the idea of another body, not merely a projection of one’s own body elsewhere, as if the other’s body is somehow an extension of mine. Our concept of other minds seems hazy, confused, not fully up to the job assigned to it—representing the mental reality of others. Our own mind exercises too powerful a hold over our psychological thinking—as it does for alien minds. I can’t abstract my concept of mind away from my own species, and I can’t abstract it away from my own self either—my concept keeps pulling me back to my own mind, refusing to extend to minds beyond my own. To be sure, I have a rough and ready concept of other minds, as I do alien minds, but the concept is inept, incomplete, sketchy, jejune. One might even say that it is childish (and of course young children have a notoriously impoverished understanding of the minds of others). It seems not to have escaped its roots in our own self-representation. To put it simply, we don’t really understand what it is to be another person (self, consciousness). We operate with a patched-up cobbled-together concept based loosely on what we know of ourselves.

            But then isn’t our own self-concept entirely satisfactory? Don’t we at least understand ourselves, i.e. what it is for us to have a mental state? Surely I know what it is for me to be in pain! This is tricky territory, but let me offer the following remarks. First, there is the mind-body problem: do my concepts of my own mind enable me to grasp how that mind relates intelligibly to my body? Clearly not, so we have reason to suppose that our concept does not disclose the full reality of what it covers: there is much more to my mind than my concepts reveal (or can reveal). But second, and less familiar, we don’t really have a clear conception of how mental reality in our own case fits into the broader world around us. We don’t clearly see our place in the causal nexus. On the one hand, there are physical objects around us, including our own body, and there is an objective conception of these objects (encapsulated in physics); on the other, there are our inner subjective states that we conceive in a different way entirely. But we can’t integrate these two realms, these two viewpoints. Here is a simple way to put it: while I can observe causal relations between physical objects, I cannot observe causal relations between the mental and physical. Note the word “observe” here: true, I know of such causal relations, but I don’t observe them with my senses. I never see a wound causing me to feel pain, simply because I don’t see my pains. What I have is a kind of mongrel conception of the psychophysical nexus—a bringing together of the objective and the subjective. But the real world must be a unified world, i.e. one in which both mental and physical seamlessly coexist. This means that it must be possible in principle (if not for us in practice) to attain an objective conception of mental reality, so that our limited perspective on our own minds gives way to something more universal (an “absolute conception” in the well-worn phrase). 
Our present mental concepts fail to provide this detached point of view, because they depict our minds from the point of view of those minds—I describe my mental states as they strike me from the first-person point of view, not as they fit into the broader reality of which they are a part.  [1] Indeed, it is hard for me to think of them as part of reality at all (and easy for me to think of reality as part of my mind): I think of the world as my world, not of my mind as just an element in a far broader totality. It is a strain for me even to acknowledge that my mind is merely one ephemeral speck in a vastly more extensive reality.

            Again, I have no great trouble seeing my body in this way, distressingly so. I see its causal relations to other things and I conceive of it as part of a totality of other physical objects—just one object among others equally real. But I don’t think of my mind in this modest and self-effacing way—I don’t think of it as just another thing in a vast array of things. I think of myself as a throbbing center, not as a dot among other dots. My concept of my mind forces me to conceive of it in ways that fail to do justice to its limited and contingent place in the natural order, which is why I find it so difficult to think of myself in this way—and why whole systems of thought have arisen that deny the located and confined nature of mind. In a sense, our concept of mind exaggerates the importance of our own mind. The digestive system we can see for what it is biologically, but the mind resists this kind of demotion, this embedding in the natural biological order. We cannot view ourselves sub specie aeternitatis. And what point would there be in equipping us with such fancy conceptual equipment? Does natural selection care that we can’t represent ourselves from a God’s-eye perspective? Can other animals think of themselves thus objectively? Our concept of mind leaves us in no doubt that our minds are real, but it fails to inform us of how this reality fits into reality as a whole (materialism and idealism try to fill the gap). We are aware that we fit into the natural order, but we have no clear conception of how this fitting occurs, beyond some sketchy ideas of causal connection.

            Mental reality is distributed quite widely in the universe—in oneself, in other humans, and in other species—but our concept of mind fails to encompass this reality. Moreover, the concept fails to locate one’s own mental reality in a broader reality. By contrast, our concept of body doesn’t suffer from these limitations. Thus our concept of mind is limited and defective in important respects. It is not a very good concept. Maybe it could be improved, but as it stands it is rather crude.  [2]

 

  [1] Anyone familiar with the work of Thomas Nagel will recognize these kinds of considerations.

  [2] If we inquire what the biological purpose of the concept of mind is, the following answer seems on the right lines: to inform us of our own mental state, and to enable us to predict the behavior of others. Fulfilling these two purposes requires little in the way of adequate representation of the full nature of mental states. What would knowing what it’s like to be a bat do for us biologically? It’s not as if we have to mate with them! Nature minimizes knowledge where there is no need of it.


Origins of the Free Will Problem

In its modern form the problem of free will is supposed to arise from the scientific discovery (or perhaps scientific presupposition) that determinism is true. It is a tenet of modern science (at least of the Newtonian kind) that every event in nature is the result of universal laws that allow of no exceptions, and this uniformity is supposed to rule out the existence of free acts. If so, the absence of free will is an empirical discovery, because it is an empirical discovery that determinism is true. If physics had turned out differently, we would not have had a reason to deny the existence of free will. In earlier times the threat to free will came from theology in the form of God’s omniscience: if God has complete foreknowledge, then he knows everything a human agent will do before he does it, so the agent cannot do otherwise; but then there is no free will. Again, this reason to deny free will issues from considerations extrinsic to the concept of free will, namely certain facts about God’s nature. If theology had been different, there would not have been a reason to deny free will. We can also imagine another sort of empirical argument for denying free will, viz. that the unconscious, as conceived by Freud, is seething with passions that compel us to act as we do, so that nothing we do is free from such internal coercion. According to this argument, we are forced to act as we do by an unconscious agency that intrudes on our conscious deliberations, robbing us of our freedom. We have the illusion that we are free, as we do under the scientific and theological arguments just mentioned, but in fact we are not—and this is a scientific discovery of psychoanalysis.

            Now I am not concerned here with whether these arguments are sound, or with whether their premises are true—universal determinism, divine foreknowledge, and the all-powerful unconscious. I mention them in order to distinguish them from another type of argument against the possibility of free will, namely that it is inherent in the concept of free will that we are not free. This is what might be called an intrinsic argument against free will, one that stems from the concept itself not from ancillary considerations of an empirical or factual nature. I think this is the more important argument philosophically, but again that is not my primary concern; I wish merely to distinguish the two sorts of argument, as well as to assess the cogency of the intrinsic argument. But first I want to articulate that argument so as to bring out its structure. I also want to note how extraordinary it would be if such an argument were successful.

            There are two components to the concept of free will as we have it, singly necessary and jointly sufficient. The first is that free actions are in some way responsive to desires and other psychological states: we act as we do because of our desires (etc.). There are many ways this responsiveness relation can be characterized but let me simply say that actions are determined by desires, where this does not imply the doctrine of determinism—given the desire (etc.) the action follows.  [1] This property is enshrined in the following proposition: if two individuals are exactly alike in their psychological states, they must act in the same way. An action is not free if it violates this principle, since then it would just be random in relation to desire—as when two psychologically identical individuals with a desire for ice cream act in the one case by buying an ice cream and in the other by tying their shoelaces (say as a result of a nervous spasm). This component of the concept captures the necessary condition that an act is free only if it is in accordance with the agent’s wishes. Thus we could say that the concept of freedom includes “desire-determination”. The second component is that a free action is one that has alternatives: the agent did a certain thing but he could have done otherwise. He had a genuine choice; his particular course of action was not forced on him. He did not act under duress, being given no alternative to what he did. I went to the movies but I could have stayed home and watched TV; no one made me go to the movies against my will. I wasn’t given an offer I couldn’t refuse, either by man or nature. This also is a necessary condition of freedom—that I had alternatives. Call this the “alternatives requirement”. Then we could plausibly claim that desire-determination and the alternatives requirement are individually necessary and jointly sufficient for free action.

            Now we know how the anti-freedom argument will go: these two components are said to be inconsistent with each other. The concept of freedom is contradictory because it combines an insistence on determination with an equally strong insistence that the agent could have acted otherwise. But if the act was determined (fixed, caused by, controlled by) a desire, then it did not admit of alternatives–contrary to the second component. Let us pause to take in how remarkable that conclusion is: we are familiar with philosophical arguments that purport to show that certain concepts harbor hidden contradictions (truth, vague concepts, the concept of a set), but it is another thing to contend that a familiar concept involves a direct contradiction in its very definition, as plain as the nose on your face. Somehow human beings have fashioned a concept that is manifestly contradictory, and yet they have failed to notice this fact. Such stupidity! Such insanity! I mean, what the…  The argument is telling us that our very conceptual scheme, not something arising from an extrinsic fact of physics or theology or psychology, contains a blatant, glaring, and embarrassing logical screw-up. It’s as if we had and used the concept of a “smountain”, where something is a smountain if and only if it is both a very large heap of rocks and also a very small heap of rocks. According to the standard argument, free actions (so-called) involve both determination and the absence of determination: but nothing could ever satisfy those contradictory conditions. Therefore the will is not free and no one ever acts freely.

            As I said, this is an extraordinary result: common sense is grievously contradictory. And on a very important matter too, since moral responsibility hinges on the possibility of free will—you would have thought we would be more careful about making our concepts coherent! We condemn people to severe punishment relying on a concept that is transparently contradictory. This is appalling: but it is good that philosophy exists to expose the conceptual bankruptcy of the whole thing. And the trouble does not stem, forgivably, from an empirical discovery that casts freedom into doubt, or from arcane theology, but from the very concepts we employ every day. This must surely be cause for human shame, wringing of hands, reparations, etc. Thousands of years of obvious conceptual confusion!

            Or perhaps the argument is wrong. Some have contended that one or the other component of the concept is dispensable: desires don’t in any way determine actions, or they do but there is no need for the existence of alternatives. Others have contended that both conditions are necessary but are really compatible upon deeper analysis. I am with these contenders, these reconcilers, but I won’t enter into a detailed defense of that position here. What I will do is offer some sketchy remarks about the notion of being able to act otherwise that I hope will ease the pressure to suppose inconsistency in the concept.

            The first thing to understand is that “I could have done otherwise” does not mean that it is metaphysically possible for two agents to be exactly alike psychologically and yet act differently, still less that they could be exactly alike physically and yet act differently. I have no such thoughts when I reflect that I have alternatives. If I survey my options for what I will do this afternoon—go to the movies, stay home and watch TV, practice my golf swing—I am not contemplating all the ways I can go against my desires and other psychological states; I am enumerating my desires and wondering which one is the most important to me today. I am reviewing my possible choices. If someone forced me to select one of the options, then I would have no choice; but if no one does, then I do have a choice—I have alternatives. This is not a matter of some remarkable kind of metaphysical modality but merely an expression of the fact that it is my desires that count not something alien and inimical to them. The paradigm freedom-destroying agency is someone forcing you to do what you have no desire to do. Freedom is acting on your desires not on someone else’s desires under conditions of duress. But in addition there are many other kinds of freedom-destroyer: not just real threats but perceived threats, internal compulsions like phobias and obsessions, unconscious biases and motivations, motor dysfunctions, insanity, brain washing, hypnosis, epilepsy, cowardice, etc. All these can interfere with the agent’s considered judgment about what it would be best to do.

The concept of being able to do otherwise is a portmanteau concept, collecting together disparate conditions and causal factors. It is imprecise and context-dependent. To say that someone acted freely is to rule out any of an open-ended list of possible disruptive factors; and there will be questions of degree here—how much the agent’s freedom was compromised. But there is no suggestion that universal determinism or divine foreknowledge excludes freedom: when I think of myself as free to act in a certain way I don’t think of myself as unpredictable by God or as hovering above the web of causal laws that govern the universe. I have much humbler matters on my mind. The mistake is to interpret a practical portmanteau concept as a unitary metaphysical concept. To be free is not to escape the causal net or God’s all-seeing eye but to act as you see fit—to do what you want when you want. Neither God nor Newton can take that away from you.

            That is a familiar compatibilist line and I don’t propose to elaborate and defend it further. My main point is that the concept of freedom is not inherently contradictory because it does not imply anything contrary to desire-determination. So our conceptual scheme as it relates to human behavior is not riddled with logical error. The two components of the concept of freedom are not in tension with each other but complement each other. We misunderstood the “grammar” of “I could have done otherwise”, as Wittgenstein would say; and indeed it does have the appearance of a modal claim analogous to “the particle could have gone in a different direction”. But it occurs in its own “language game” and carries no metaphysical punch: that is, its actual meaning is given by the range of things that rule out freedom as ordinarily conceived. But this is consistent with allowing that some extrinsic considerations could rule freedom out: we might not be as free as we fondly suppose. Determinism and divine foreknowledge don’t do this, as compatibilists have long urged, but we can imagine other types of threat, as with the Freudian argument mentioned earlier. Suppose it turned out that human action is largely or wholly governed by unconscious passions that we have no control over; thus I am subject to compulsions of which I have no conscious knowledge. Then it would be plausible to maintain that my actions are not free, or not as free as I thought: I thought sending a birthday card to my father was a free act of kindness, but actually I had a hidden motive that made me select a card that would hurt him deeply; or I play tennis with him not in order to enjoy a game together but to relish (unconsciously) the opportunity of beating him. This really would undermine my freedom, because it would sharply limit the desires I can act on: in the end I am always compelled to act on patricidal desires stemming from my unresolved Oedipus complex, not from other desires I might have. 
It is as if my unconscious operates as an external agency bending me to its will—I am a puppet not a puppeteer. That would count as a scientific discovery that undermines free will, and it does so within the terms stipulated by the concept. Here incompatibilism would be indisputable. So it is not that the concept of freedom is necessarily immune from skeptical attack—despite being internally perfectly coherent. But the attack has to be of the right form; in particular, it must cast doubt on the idea of genuine alternatives that an attribution of freedom requires. I can undermine a specific attribution of freedom by pointing to external interference or to inner compulsion; well, I could also do this more globally, even to the point of contesting any attribution of freedom. We are not guaranteed to be free. And this is a matter of common sense not sophisticated science or theology or metaphysics. However, and fortunately, no such empirical threat actually exists, since the Freudian picture is not to be believed on empirical grounds—and even Freud didn’t contend that human acts are never free because of the malign effects of the unconscious. The same goes for the idea that we are all so massively brain-washed that none of our actions correspond to our own authentic desires: we really desire such and such, but because of propaganda we never do such and such, instead doing what we don’t really desire to do. In principle that could be so, in which case our freedom would be limited or even non-existent, merely an illusion of freedom (but do we really never want to eat?). However, there is no good reason to suppose that this is the situation in which we find ourselves: we actually do have real desires that we act upon without impediment, external or internal. So we are free (maybe some people more than others).

            Potential threats to freedom can arise from various sources, some more persuasive than others. Philosophically the most interesting potential threat arises (supposedly) from the content of the concept itself, which does not depend on contingent facts of the universe. But (a) it is extremely unlikely that so deeply embedded and universal a concept could harbor the kind of contradiction some philosophers have suspected, and (b) it turns out that the alleged contradiction can be defused by paying careful attention to the actual way the concept of alternatives is employed. Still, it is somewhat surprising that the concept should be as problematic as it has proved to be—so prone to misunderstanding. It is not as if the anti-free will argument immediately strikes us as absurd; on the contrary, it is only too easy to be caught up in it and find it hard to escape its grip. The concept of free will is understandably misunderstood. The free will problem is a paradigm of philosophy because it can easily seem as if a part of common sense is riddled with confusion and error but on reflection is not—it is hard to see what lies before our eyes. Why should our concepts be so confusing, so liable to misunderstanding? What is wrong with us that we can’t gain a clear understanding of our own clear concepts?

 

  [1] This doesn’t imply the doctrine of determinism understood as the thesis that every event falls under strict laws.


Ontological Commitment

Can there be a criterion of ontological commitment? Can there be a formal test of what a person is ontologically committed to? What a person is committed to is a matter of what he believes or assumes or presupposes or is prepared to act on—in short, of his attitudes. So the question is whether there is a linguistic litmus test for an attitude of commitment. Can we read a person’s ontology off his verbal productions? Can I figure out my ontological commitments by inspecting my use of language?

            The first thing to observe is that the question is not restricted to matters of existence. As the term is commonly used “ontological commitment” is taken to refer to what a person takes to exist, so that it is interchangeable with “existential commitment”. That is certainly one form of commitment—what a person believes to exist—but it is not the only form. Consider “chromatic commitment”: what colors you believe things have (whether they exist or not). You may believe that things are colored and you may believe specific color claims—these are your chromatic ontological commitments. Ontology concerns what is so, and color is a matter of what is so. Roses are red and violets are blue—and Santa Claus has a white beard and a red cloak (whether he exists or not). I might believe that colors are unreal and that nothing has them; in that case I am not ontologically committed with respect to color, though I might well believe in the existence of the things commonly said to be colored. Ontological commitment can concern any fact or putative fact: do you believe in that fact or not? Do you believe in moral facts, divine facts, facts about unobservable entities, psychological facts, and so on? Existence is just one kind of ontological commitment: we might say that it concerns one type of property, viz. the property of existence. Does anything have the property of existing? Which things do? Does anything have the property of being colored? Which things do? And so for any property you care to mention. A criterion for existential commitment might be a willingness to affirm “Such-and-such exists”, and a criterion for chromatic commitment might be a willingness to affirm “Such-and-such is red” (and similarly for other kinds of fact). It is artificial to single out existence from other sorts of ontological commitment: it is just one kind of factual commitment. 
The proper contrast here is with “epistemological commitment”: what we are committed to in the way of knowledge. What is it that we think we know? Do we think there is any knowledge, and if so what is known? We can be committed on questions of being (fact, reality) and we can be committed on questions of knowledge; what we are committed to existentially is just a special case of a more general question.

            The question of providing a criterion of ontological commitment is thus broader than that of providing a criterion of existential commitment. Quine announced, “To be is to be the value of a variable”; he has been paraphrased thus, “What you say there is, you say there is”. That is, you are committed to whatever your sentences mean: if you affirm a sentence that can be true only if certain things exist, then you are committed to the existence of those things. For example, you can’t say, “There are numbers” and then turn round and deny there are numbers: you must be taken at your word. But it is the same with all forms of ontological commitment: if you say, “Roses are red” you can’t turn round and deny that roses are red (same for “good”, “solid”, “conscious”, and so on). To be committed to red things is to describe things as red. You are committed to such facts as your sayings require for their truth. The criterion of commitment is saying. You can’t disavow what you affirm: you can’t say it and then try to take it back. You can’t say it in practice but then disavow it theoretically. You can’t have your ontological cake and eat it. You can’t weasel out of your statements.

That sounds all very reasonable (indeed trivial—what was the fuss all about?), but actually it runs into difficulties as a formal test of ontological commitment. The idea was to provide a public formal test of ontological commitment, eschewing the vagaries of what a person internally believes. We might think of it as a behavioral criterion for a mental phenomenon: what a person is committed to (believes to be so) is what he affirms in his public utterances. A person believes in unicorns if she affirms, “There are unicorns” or “Unicorns exist”. I determine what I believe in by examining what I say, and I might be surprised at what turns up (I may find that I accept, say, an ontology of events or possible worlds). Thus the criterion is formal and public: it invokes facts of language and it is interpersonally accessible. No need to delve into the inner recesses of a person’s mind.

            But the proposal is obviously problematic. It hardly provides a necessary condition, since you can keep silent about what you believe or may not have language at all; and it is not sufficient, since speech is not always sincere assertion. It is possible to say something and not believe what one says, as in play-acting or elocution practice. Even in assertion you may not be committed to what you assert in the sense that you believe what you say. A liar can’t use his assertions to figure out his ontological commitments. The assertion must be sincere, i.e. you must believe what you assert. But that is what we were seeking a criterion for–belief. Speech is never a sure guide to belief, so we can’t formulate a test of ontological commitment from facts about speech. My ontological commitments can be read off my sincere assertions—if I sincerely assert, “Snow is white”, then I am committed to snow being white—but the commitment comes from the belief not the assertion. No act of speech (or writing) can add up to belief, so there cannot be a formal linguistic criterion of ontological commitment. In order to find out what I am committed to you have to find out what I believe; what I say isn’t going to get you there. It may be true that what I say there is I say there is, but it doesn’t follow that that is what I believe there is. The most that can be claimed is that we have a criterion for the ontological commitments of what someone says—a speech act is “committed” to what is required for its truth—but this is a far cry from the ontological commitments of a person. What I believe is not the same thing as what I say, since I may not give voice to my beliefs and, if I do, I may not mean what I say. My ontological commitments are fixed by my beliefs—but that is a trivial tautology not an illuminating criterion.

            There is also the case of a speaker actively rejecting the ontology of his sincere assertions. Suppose Meinong is right about definite descriptions—they really do denote non-existent subsistent objects. Then whenever Russell or anyone else makes a statement involving a definite description his speech act is committed to such objects: he accepts the truth of the uttered sentence and its truth requires non-existent objects (in the empty case). But Russell himself vehemently rejects such Meinongian objects—he doesn’t believe in them no matter what his utterances may entail. In this case ontological beliefs cannot be read off sincere assertion plus correct semantic analysis. The same is true for any type of statement: the speaker may reject what his sentences semantically entail. He is not committed to what his sentences are committed to, i.e. what is required for their truth. He may regard those sentences as logically defective and explicitly reject their entailments; they can’t force beliefs upon him. There is always a logical gap between language and belief, so we cannot derive a criterion of ontological commitment from features of language. Perhaps non-linguistic action could supply such a criterion (think of animal ontological commitment), but what we say can never constitute what we believe. What I say there is may never be what I believe there is, and similarly for every other type of fact. Ontological commitment is a matter of private belief not public utterance.

 


Observation and Scientific Realism

                                   

 

 

Positivism, following empiricism, maintained that the real is coterminous with the observable. A scientific theory that posits unobservable entities cannot be taken at face value, but must be regarded as merely instrumentally useful or as plain false. The observable entities are real enough, but the unobservable ones are a species of fiction (no one has observed Sherlock Holmes). There is some irony in this position for a reason strangely neglected: observations are not observable. They ought therefore to be unreal according to the criterion of reality advocated by positivism. An observation is a perceptual occurrence—an impression, as Hume would say—but such occurrences are not themselves perceptible by the senses. No one can see what I experience when I make an observation. You might say that I at least can observe my observation, but that doesn’t make a dent in the underlying point: first, I don’t observe my observation—I merely know about it by introspection; second, such knowledge is private to me and not available to you—you don’t have this kind of knowledge of what I experience. Observation lacks inter-subjectivity in the sense of being a publicly observable occurrence: it is a private occurrence occurring in an individual mind. So the positivists are making something a test of reality that lacks the marks of the real by their own standards.

            Observation is a human achievement: what we can observe is constrained by the acuity of our senses, our position in space and time, and our powers of discernment. Is the sun observable? Most of the time it is not, since we can’t stare at it without damaging our eyes. Is it thereby unreal at those times? Does it become real when we don dark glasses? Does a train become unreal as we watch it vanishing into the distance? Of course not: these are just facts about the limits of the human senses. Why would the human senses, confined as they are, impose any limits on what is real? What can you observe with your ears? Sounds, certainly, but can you observe the objects that make these sounds? Not with your ears, but does that mean such objects are not real? Would anyone suppose that the nose is the arbiter of reality? Observation always seems to mean visual observation, but that too is hardly suited to qualify as the measure of reality. The fact is that human vision, like the other senses, is limited in acuity, stimulus-bound, prone to illusion, subjectively shaped, modular, and species-specific. Why should reality be circumscribed by what is so constituted? Vision may provide our best evidence for what is real, but it can’t be what determines something as real; it can’t be the definition of the real.

            And what exactly is observation anyway? The OED defines “observe” as “watch attentively”. Note the restriction to the visual sense (we can’t watch with our ears or nose), but the addition of “attentively” is also important. Observing is not simply seeing or even looking; it is doing so while attending to what one is seeing. If you are not attending, or have no power of attention, you cannot be observing. So the positivists must maintain that the test of reality is whether something can be attended to (by humans). But attention is limited, sporadic, and labile—unlike reality. Also attention is more top-down and cognitive than mere seeing: we attend to what we deem important, what we are interested in, what enthralls us. Desire drives attention not merely the perceptual stimulus. Is reality dependent on human desire? Does it care about what interests us? It is a psychological fact about an observer that he or she is watching something attentively; that has nothing to do with whether reality contains a certain type of entity. Is observation necessarily bound up with consciousness? The notion of unconscious observation does not seem oxymoronic, and psychologists have found evidence that it occurs (subliminal perception experiments). Couldn’t a person with blindsight make observations? Then the positivist would have to agree that things exist that cannot be consciously observed. How well does that sit with the empiricism animating their position? Doesn’t it show what a frail reed observation is as a foundation for existence? What if scientists all had blindsight and never consciously observed anything—would the positivist still say that reality is fixed by what they can unconsciously observe? Isn’t that just silly? Why should reality be beholden to the visual system of a certain species with limited powers of sight? Why confuse psychology with physics?

            This kind of scientific anti-realism looks completely hopeless. [1] The point I want to make is that other types of anti-realism are not open to the kinds of objection I have just raised (though they may be implausible on other grounds). For it is glaringly obvious that the existence of unobservable entities is a discovery of science. The microscope, the telescope, and the diffraction chamber greatly expanded our knowledge of the full inventory of the world: microorganisms, remote celestial objects, and invisible atoms (and their parts) became part of accepted reality. Science has discovered that there is more to the world than the unaided human senses can reveal. Moreover, the human senses are themselves objects in the world, with built-in limitations, biases, and breakdowns. The picture of the world as extending beyond the reach of the senses is an achievement of science itself. According to science, then, scientific realism is true—at least in the sense that reality is not coterminous (or coeval) with what is (directly) observable by means of the senses. We exist as limited creatures in a larger world not of our own making and containing many things not evident to our senses—things we can’t perceive simply by opening our eyes and looking. But none of this can be said about other areas in which realism and anti-realism have been debated. It is not a theme of morality that moral realism is true. It is not a theorem of mathematics that platonic realism is true. It is not a thesis of psychology that psychological realism is true. It is not a commitment of our ordinary conception of the physical world that idealism is false (Berkeley was right about this, Doctor Johnson was wrong). It isn’t that realism in these areas isn’t a fact about them; rather, realism isn’t a proposition asserted by these areas.
Morality may have discovered that slavery is wrong or that animals have rights, but it is not a discovery of morality that moral truths are objectively true independently of human desire or thought. That is a discovery (if it is one) of philosophy. Likewise, the common sense view of the so-called material world is compatible with various kinds of anti-realism about it, which is why we can’t refute idealism by pointing to what common sense has established (or science). Maybe realism is by far the best interpretation of these areas, but it isn’t that they themselves assert its truth. By contrast, we can say that science itself has established that (unaided) observation does not encompass all existing entities. In this sense, realism is internal to science—but external to the other areas mentioned.

            The interest of this point is less that science has established scientific realism (in the limited sense defined)—for that seems obvious enough—than that other kinds of realism cannot be demonstrated in the same way. It would be bizarre to suggest that morality itself has established that moral values are objective and hence “queer”—as if subjectivism could be ruled out as morally wrong, i.e. contrary to the first-order principles of morality. It is not as if the Ten Commandments contain an extra one stating, “The other commandments are all objectively true”. The moral anti-realist may be an error theorist, but he does not have to be, given that morality does not assert of itself that it is objectively true. Moral realism is a metaphysical position not a moral position. So there is no analogue of the role of observation in deciding the question: we haven’t discovered that there are unobservable moral entities in the course of our moral deliberations. There is no moral microscope that has revealed to us a world of values previously unsuspected. And similarly for other areas in which realism has been debated: no discovery within these areas will enable us to settle the question of realism versus anti-realism. This is why we speak of meta-ethics, and could equally speak of meta-psychology, meta-physics, and meta-mathematics.

            Here is another way to put the point: it might have turned out that there are no unobservable entities (it is an epistemic contingency that there are), but it couldn’t turn out that moral values are subjective (given that they are actually objective). It is an empirical fact that there are unobservable entities, but it is not an empirical fact that values are objective (given that they are). We didn’t discover empirically that moral realism is true—assuming we did discover that—but rather did so on philosophical grounds, i.e. a priori. By contrast, we did not know a priori that the world contains unobservable entities (microorganisms, atoms, distant galaxies); this we discovered by empirical means. We used science to establish that the world extends beyond the observable. But we didn’t use morality to establish that moral realism is true (for one thing, moral realism is not a moral duty); for that we resorted to philosophy. Thus, given that moral realism is true, it could not have turned out otherwise, whereas scientific realism may have turned out to be false (it is only a contingent fact that some things are not observable). There are worlds “qualitatively identical” to our world that contain no unobservable entities, but there are no worlds “qualitatively identical” to ours in which moral anti-realism is true (similarly for the other kinds of realism).  [2] This is because realism, if true, is true a priori in these areas. It is not an epistemic necessity that microorganisms exist but it is an epistemic necessity that values are objective (assuming they are). Thus scientific realism has a different epistemic status from that of other types of realism.

            There is an explanation from within science of the fact that unobservable entities exist, but there is no explanation from within morality of why values are objective (similarly for the external world, psychology, and mathematics). Some entities are simply too small to be seen given the limited acuity of the human eye, and some are too distant: these entities cannot interact with the eye psychophysically. Physics and perceptual psychology explain why we can’t observe certain things. But morality has no explanation for why moral values have objective existence—why they are not reducible to human attitudes. Nor can psychology, as an empirical science, explain why mental states are not reducible to dispositions to behavior (or some such). Nor can common sense concerning tables and chairs explain why material objects are not reducible to sense data. The question of scientific realism, understood as a dispute about the existence of unobservable entities, is not a properly philosophical question, since it can be settled by appeal to the discoveries of science. That is, we know scientifically that there are things in the world that can’t be perceived by the unaided human senses. Of course, there is plenty of room for philosophical debate about the nature of these entities (they might be ideas in the mind of God, say), but it is not a philosophical thesis that some things cannot be perceived. That is simply a scientific truth. But it is not a moral truth that moral values are objective, or a mathematical truth that abstract entities exist, or a truth of psychology that mental states are inner states irreducible to behavior, or a truth of common sense that tables and chairs are distinct from states of mind (“ideas”). Hence there is no analogue of the demonstrable inadequacy of observation to provide a test of reality that can defeat anti-realism in these other areas.

 

Colin McGinn

  [1] I am not saying that all types of scientific anti-realism are completely hopeless (though I do believe that), only that the kind that equates the real with the observable is. Clearly the positivists advocated this position in order to save the principle of verifiability, not because it looks intrinsically plausible. And there is no limit to human anthropocentrism.

  [2] I am here using the apparatus developed by Saul Kripke in Naming and Necessity.


Noumenal Powers

 

 


Suppose you hold that the world consists of powers all the way down: all properties consist of causal powers. [1] Now combine that with Hume’s position on our knowledge of powers: we have no impressions of powers, and hence no adequate conception of powers. Then you are committed to an extreme Kantian view of objective reality: the world is noumenal, because powers are. That is, we have no knowledge of reality as it exists outside the mind; at best we have a kind of structural knowledge of how things hang together, but no knowledge of the intrinsic nature of things. For that nature is essentially a matter of powers, and we cannot conceive of powers as they are in themselves; we are at most acquainted with mere signs of powers, ultimately what happens in our minds as they interact with reality. Thus the world of appearance is cut off from the world of reality: we seem to experience categorical observable properties of things, but objectively, reality consists of unperceivable powers. Hume himself didn’t identify all properties with powers, but he did hold (in effect) that causal connections are noumenal; extending his epistemology to a wider metaphysics of properties-as-powers yields extreme Kantianism about reality as a whole. If the world consists of powers, and we can’t form an adequate conception of powers, then we can’t form an adequate conception of the world. We can’t form an adequate conception of what the world fundamentally is.

            Two objections might be made to this sweeping metaphysics and accompanying epistemology. The first is that it is implausible to identify the world with a constellation of powers—there have to be grounding categorical properties somewhere in the picture, because powers cannot stand unsupported. Let us concede the point for the sake of argument: it doesn’t follow that we have adequate ideas of reality, since the grounding properties might be unknown to us. It suffices for the argument that all observable or known properties are powers: even if there are categorical properties at the bottom of reality (belonging, say, to some advanced physics), all the properties we know about consist in powers. So everything we know of the world turns out to be an elusive power—shape, size, color, etc. What we think of as ordinary properties are really powers whose inner nature we cannot discern.

            The second objection is that what happens in our mind cannot consist of powers, because we do know what happens in our mind—we know, say, what pain is, and that it is occurring now. But the powers view of properties can be extended to mental properties by distinguishing appearance from reality: we don’t have an adequate conception of what pain is, but we do know how it appears. Mental properties have causal powers as much as physical properties do—they are individuated by their causal powers—but these powers may occasion only signs of themselves in what passes before consciousness. Pain itself might be noumenal—as is the self on the Kantian conception. The mind might be a hidden reality remote from introspection, made up of powers of which we have no adequate conception (if we follow Hume on our knowledge of power).

            What is true is that appearances cannot be construed as presentations of properties if the (generalized) Hume-Kant view is right. For we do know the nature of appearances, so they cannot be constituted by powers that transcend our grasp; they cannot be collections of properties that present themselves to the mind. No property can be presented to the mind as it is, if all properties are really powers that are inherently inaccessible to the mind. Once we accept the elusiveness of powers, along with the doctrine that properties are powers, we are landed in the kind of epistemology adumbrated by Hume and Kant (Kant merely generalizing Hume). Thus the stakes are high; and idealism threatens. The ubiquity of powers leads to a Kantian view of our knowledge of reality once Hume’s critique is accepted. If powers are metaphysically basic, yet epistemologically elusive, we end up with an unknown reality. What is known is mere appearance from which the notion of property has been expunged (if it can be).

 

  [1] The work of Sydney Shoemaker on properties is an instructive reference, but the view has a wider currency. Phenomenalism in effect holds that all material object facts consist in powers to produce sense experience, and behaviorism holds that mental facts are identical to powers to produce behavior. Generally, it is the idea that reality consists of potentials—of what would happen if. Reality is best captured by conditionals of a certain sort. Everything is the power to do something.


Notes on the Concept of Law

                                   

 

 


  1. Consider the sentence forms “It is illegal to A” and “It is immoral to A” where A is a type of action (we could also consider “It is impolite to A” and “It is imprudent to A”). These are superficially similar, syntactically and semantically. Both have the logical form of universal quantification: “for any action x, if x is of the type A, then x is illegal/immoral”. Both contexts are intensional to some degree: certainly not truth-functional and arguably referentially opaque—it may be illegal/immoral to kill human beings, but is it illegal/immoral to kill the most dangerous species in the universe (assuming these to be co-extensive)? Both sentences have normative entailments (or corollaries): one ought not to do what is illegal or immoral. But there are also important differences. You can say “It is against the law to A” but it sounds funny to say “It is against morality to A”. You can necessitate a moral principle but not a legal one: “Necessarily stealing is immoral” is true but “Necessarily stealing is illegal” is not. We can paraphrase the legal sentence with “It has been declared illegal to A” but we can’t paraphrase the moral sentence with “It has been declared immoral to A”, since it might not have been so declared. You can pass a law but you can’t pass a moral principle. And clearly the two types of sentence are not synonymous, nor do they even express the same facts: “law” and “morality” do not co-denote. Despite these differences, however, the two domains are tightly connected, though the connection is controversial. Laws can be wicked and immoral but morality can’t be (as opposed to a person’s moral beliefs)—so laws can be criticized morally but morality (itself) can’t be. Nevertheless, laws at least purport to be moral and can be assessed morally—they are not beyond the reach of morality (as taste in food or clothes may be). So the distinction between them is not a complete severance.
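The quantificational form and the modal contrast in this note can be rendered in standard notation (the formalization is mine, not the author’s—a sketch only, with “Steals” and “Illegal” as schematic predicates):

```latex
% Shared logical form of "It is illegal/immoral to A":
\forall x\,\bigl(A(x) \rightarrow \mathrm{Illegal}(x)\bigr)
\qquad
\forall x\,\bigl(A(x) \rightarrow \mathrm{Immoral}(x)\bigr)

% The asymmetry under necessitation: prefixing the necessity operator
% preserves truth for the moral principle but not for the legal one,
% since laws are contingent human enactments.
\Box\,\forall x\,\bigl(\mathrm{Steals}(x) \rightarrow \mathrm{Immoral}(x)\bigr)
\quad\text{(true)}
\qquad
\Box\,\forall x\,\bigl(\mathrm{Steals}(x) \rightarrow \mathrm{Illegal}(x)\bigr)
\quad\text{(false)}
```

On this rendering the two sentence forms share a surface logic, and the difference emerges only when the box is prefixed—which is one way of putting the point that laws, unlike moral principles, hold contingently.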

 

  2. We can ask what kind of speech act is performed by uttering “It is illegal to A” as we can for “It is immoral to A”, and a familiar list presents itself. Is it a statement of fact (a “descriptive” statement), a command, an expression of emotion, a threat, a prohibition, a promise, an exhortation—or some (or all) of these? Moral utterances invite the same kind of list. In both cases these questions are separate from the question of semantics, specifically truth conditions, and are mainly beside the point (a given sentence with a fixed meaning can be used to perform an endless number of speech acts, semantics being separate from pragmatics). The question of truth conditions is the central question: are the two types of sentence true in virtue of the same kind of thing (fact, state of affairs)? And here there is a marked difference: laws hold in virtue of declarations of a certain sort, but morality does not depend on declarations. This is why a divine command theory of law is not a category mistake while it is for morality (Euthyphro could have been right about what makes something a law). According to “legal positivism” laws arise from human stipulations or decisions or agreements—legislative acts—and therefore they can come to exist at a certain time and go out of existence at a certain time (when they are repealed). But the immorality of stealing is not something linked to time and legislation in this way. We could put this point by saying that a legal system is a “social fact”—one created by a group of people who are responsible for its existence. But merely calling laws social facts doesn’t distinguish law from morality, since a moral system in a society is also a “social fact”: what distinguishes the two is that law has its origin in legislative declarations while morality does not.

 

  3. Some have supposed that “good” denotes a simple unanalyzable property, but no such view has been held for “law”. That is as it should be because it is not difficult to analyze the concept of law into several components (or no more difficult than other complex concepts such as knowledge or game). Thus we can venture the following definition: a law is a legislated norm backed by sanctions. That is certainly not true of a moral precept. We need to bring in sanctions because they are so characteristic of a legal system and because without them law has no bite—people won’t obey laws without sanctions. A possible world in which there is a system of law governing a society but there are no sanctions associated with it is not a real possible world. The sanctions provide prudential motives for action (morality provides its own motivation). A prime constraint on legislation is that contradictory laws shall not be passed, and there is a distinct possibility that this could happen if the legislators are not careful (contradictions are not always obvious). But there is no such danger from morality, which is internally free of contradiction, not being the result of human belief or declaration (like reality in general). A legal system is a kind of propositional artifact and it can have defects and gaps in it. Hence laws can be in principle inconsistent as well as immoral. This is why there is no plausible “legal realism” like moral realism—because law is not mind-independent: it is immanent in human practice.

 

  4. We should not exaggerate law’s independence from morality, distinct though these systems are. As noted, laws purport to respect moral principle and can be criticized for failing to do so. Also they arise from motives of a broadly moral nature: they are intended to serve the common good (or at any rate the good of certain preferred types of person). They are not stipulations made in a vacuum but designed to further moral aims. A dilemma has been supposed to arise here: either laws are inherently ethical or they are not. If they are, then law and morality are identical or overlapping domains; but if they are not, then law has no moral force and there is no such thing as legal obligation. It seems to me that there is a third way here: this is the idea that law constitutes a secondary morality existing alongside the primary morality. Law acts like morality without being morality, at least as morality is ordinarily conceived by philosophers. There is a rough analogy with primary and secondary qualities: the primary qualities characterize basic reality while the secondary qualities exist beside them in closer proximity to human sensibility. But both are qualities of objects; it isn’t that secondary qualities are not qualities at all. Similarly laws are moral edicts but they are not identical to more basic moral edicts. Thus we can readily convert a moral precept into a law, as when we declare stealing (or slavery) illegal. It doesn’t lose its moral standing by being so converted; indeed it inherits that standing. But it now belongs in a separate cognitive system subject to different constraints and standards. We can imagine beings whose whole moral outlook is constituted by laws (indeed some humans are like this) and it would be wrong to declare them morally void. Children occupy this cognitive territory when their notion of morality is fixed entirely by the commands of parents.
We really have two systems of morality in our heads, between which it is easy to get confused; it is not that law removes itself from the realm of morality and becomes completely value-free—as if it were nothing but so much social engineering. This explains why people are often so torn when they perceive certain laws to be fundamentally immoral: this is a conflict within their moral faculties not just a conflict between morality and the extra-moral. The analogy with etiquette may be helpful: are the rules of etiquette simply detached from moral rules, since they are certainly not identical to moral rules? No, because good manners are regarded as a secondary form of morality—parasitic perhaps but not devoid of moral clout. One really ought to have good manners (as socially determined) out of consideration for the feelings of others. We shouldn’t be “etiquette positivists” holding that good manners have nothing moral inherent in them, yet we shouldn’t simply identify etiquette with morality. We have a kind of secondary morality here, not an abrupt switch from the moral to the non-moral. We should picture our moral faculties as consisting of a central core of basic moral principles surrounded by a penumbra of outlying moral systems (habits, proclivities). Law, like etiquette, is an application of morality suited to certain ends, suitably supplemented and adapted. We need to be expansive and pluralist about the nature of moral obligation.

 

  5. Are laws rules? This is not a helpful way to think. They are clearly not like the rules of a game precisely because the practice of law is not a game. The rules of games prescribe (and proscribe) actions that aim to achieve ends by indirect and inefficient means (see Bernard Suits), but the “rules” of law don’t tell us how to play a game using such means—we must obey the law by the most efficient means possible. Nor is it clear what the purpose of legal rules might be—ditto for so-called moral rules. We can talk this way if we like but the theoretical or conceptual payoff is minimal at best, and is likely to promote forced analogies and misleading conceptions. Breaking the law is not like breaking the rules of chess: if you commit a murder it would be strange to be told that you are going to jail for life because you broke the rule against murder; rather, you are going because you murdered someone. The law is no more rule-like than morality.

 

  6. One’s reason for obeying the law can only be prudential (avoiding sanctions) or moral (the law codifies the good); there is no such thing as a specific form of legal obligation or reason for action. In the case of a law perceived to be wicked the only reason to obey it is prudential. However, since the law is a secondary form of morality it does allow for an extra layer of reasons governing our actions: for we now have two sorts of moral reason for acting. One hopes these harmonize (similarly for etiquette) but they might not and then one has a conflict within one’s overall morality. A part of you may judge that a particular law is too strict and inflexible in certain circumstances, going by your core morality, but you obey it anyway because you think that it is basically a good law not wide of the moral mark. The situation is not so different from what we find within core morality itself, because here too we have different systems that don’t always harmonize—as with deontological precepts and consequentialist principles. Arguably, we have two coexisting moralities within us, which don’t always see eye to eye; well, our attitudes to the law are similar in that our thoughts about the law are themselves morally suffused. There are many moral “oughts” not just one, and each occupies a place in our total moral outlook. So the dilemma “moral versus non-moral” as applied to the law is too simple. Legal moralism is thus to be preferred to legal positivism (construed as denying that laws carry any moral weight in themselves), though it is a mistake to try to reduce legal obligation to moral obligation (again, compare etiquette). In any case, there is no category of reasons for obeying the law beyond the moral and prudential, so nothing sui generis about legal obligation.

 

  7. There can be wicked moral beliefs and practices as there can be wicked laws. In the former case wickedness is relative to correct belief: moral reality can correct erroneous moral belief. But in the latter case we can’t say this, not with any plausibility anyway: if a law is wicked we can’t say that it is wicked relative to the correct law, as if this existed independently of human legal systems. There are no ideal laws that we are trying to capture and possibly failing to capture—a legal reality outside of legal belief and stipulation. True, one legal system can be superior to another, but there are no objective laws that set the standard—laws outside of human practice. Morality is the proper source of criticism of laws not supposed ideal Platonic laws. It is the same with etiquette—it is subject to moral criticism but not criticism from some supposed ideal set of rules of etiquette (as it might be, the etiquette of the gods).

 

  8. It may be useful to distinguish between laws as they exist objectively in various social institutions and laws as they are understood by people subject to them. After all, laws only get purchase on people’s conduct by way of their mental representation of them. And the law might have different functional properties in its two manifestations. People often have an imperfect understanding of the law as an objective institution while carrying around with them their subjective idea of what the law requires of them. A philosophy of law should address both topics. In particular, the authority of law really depends on how people understand it, i.e. their disposition to accede to its demands results from their own subjective representation of it. Maybe the external phenomenon doesn’t function as a moral system for people, but their internal representation may: this is where the secondary morality exists and operates. Internalized law functions as an ancillary moral system capable of providing moral reasons even if external law does not. How people think of the law does very often correspond to what they think the law ought to be, and that is a moral “ought”. It is the same with rules of etiquette: what the prevailing norms objectively are in a social group is not the same as how individual people conceive of these norms (this can give rise to much social comedy). It is the latter that functions as a secondary moral system.

 

  9. I started by comparing “It is illegal to A” with “It is immoral to A”, noting the linguistic similarities. But there is a semantic difference that makes all the difference: the former statement makes implicit reference to certain kinds of symbolic acts while the latter does not. An action is illegal only because it has been declared to be so by some authoritative body: being illegal requires being said to be illegal. So the original statement is tacitly metalinguistic, being equivalent to “Acts of type A have been declared illegal”. But the corresponding moral statement is not metalinguistic, because morality does not depend on human (or divine) stipulation or decision. But this important difference does not preclude morality from entering into the law, as if it made the laws merely a set of facts about what people have declared. Laws must proceed from moral motives (possibly misguided) and can be criticized in the light of moral considerations; they are not value-free social facts. Law is better described as a secondary moral system linked to the primary one, though not reducible to it. Imagine a society that instituted a set of laws governing prudential behavior: you must eat this and not eat that, no going out in the cold without warm clothing, no watching too much TV, etc. The aim of these laws is purely to improve the individual’s wellbeing, not to govern interpersonal relations. It would be strange to say that this system of laws has nothing intrinsically to do with prudence just because some of the laws might be misguided, and strange too to maintain that the laws are the same as the precepts of ordinary prudence. The laws are something additional to, but reflective of, the underlying precepts of prudence. In time they might become a secondary system of prudence, especially given that sanctions are applied for non-compliance. [1]

 

Colin McGinn          

 

 

    [1] I wrote up these notes after reading Nicola Lacey’s biography A Life of H.L.A. Hart (2004). I have read nothing in the literature of the philosophy of law but found myself thinking about the questions raised by my reading of this book. These notes merely record my thoughts and reactions and make no claims beyond that. They are not intended for publication.
