Skepticism and Time

We can’t be certain the ordinary world of material objects exists: we might be brains in a vat or perpetually dreaming. We can’t be certain that space exists, at least in the form we think of it, for the same reasons. But what about time? We are familiar with skepticism with respect to the contents of time: the world might have been created ten minutes ago and have a very different history from the one we suppose. But might the past not exist at all? Might the future be a figment? What about the present—does it too fall to the skeptic? In the case of the present things look more hopeful: we can know for certain that it exists. How? By a version of the Cogito: if I am thinking now, then there must be a now for me to think in. Every occurrence needs a time in which to exist. Nothing happens without a time in which it happens. Even if I am only dreaming, my dream needs a time to occur in. Thus, we can confidently assert: “I think, therefore the present time exists”. There is no epistemically possible world in which I am thinking but there is no time. What the nature of this time is we may not know, but we know at least that the present moment is real: an existent thing falls under the concept the present, viz. a certain temporal moment. But the Cogito by itself doesn’t take us any further into time: maybe the present time exists but not past or future times. These are distinct existences, after all.

However, on reflection we can deduce the past and the future from the present: for what is present will soon be past and was once future. There can be no present without a past and future. It is a necessary a priori truth that a present moment exists within a series of moments some of which are in the past and some in the future. This follows from the fact that time flows, i.e., moments pass from future to present to past. No moment can be stuck in the present, incapable of becoming past and never having existed in the future. Every moment of your life was once a future moment and will become a past moment—as sure as eggs are eggs (a plain tautology). So, the present contains the seeds of the past and the future: at the moment you think “I think” that moment quickly moves into the past and was once a future moment. Thus, if we combine the temporal Cogito with this logico-metaphysical fact about time, we can deduce the result that past times and future times indubitably exist. What happens in or at these times is not certain, but that they exist (those times) is certain. Time might itself be relative or absolute, a matter of mechanical clocks or of God’s eternal mind, discrete or continuous, infinite or finite—but at least we know that it exists. Time logically follows from consciousness: the one kind of existence leads inexorably to the other. This is something. Matter doesn’t follow, space doesn’t follow, numbers don’t follow: but time does.[1] Time is one of life’s great certainties. Odd that Descartes didn’t make more use of this truth. We don’t need God’s assistance to know that time exists; it is built into the very nature of experience. I can never say, “For all I know, time doesn’t exist”.

[1] We might be able to get from consciousness to matter, space, and even numbers by constructing clever philosophical arguments, but in the case of time we don’t need such ingenuity—time is written right into consciousness as a surface feature. Not the impression of time, mark, but time itself.

Anthropocentric Physical Empiricism

Empiricism is the doctrine that all knowledge derives from something called “experience”. Alternatively, all (non-trivial) knowledge comes from the senses. Knowledge is ultimately reducible to “impressions” or “sense data” originating in the human sense organs. In some forms it takes on a metaphysical cast: all of reality derives from experience and is reducible to it. Thus, phenomenalism is empiricism with respect to the external world. Both knowledge and reality come down to sense data—epistemological and metaphysical empiricism. But it is seldom observed that this doctrine can be pushed further: all knowledge (and possibly all reality) derives from, or reduces to, the activities of the sense organs. Impressions cause ideas, sense data cause beliefs, but activity in the eyes and ears (etc.) causes impressions and sense data—the latter are “copies” of the former. The physical sense organs are the conduits through which information flows. Let’s be more specific: it is retinal stimulation and eardrum vibration that cause experiences—the activation of the rods and cones in the eye, the quivering of the tympanic membrane in the ear. These connect with deeper structures, such as the optic nerve and the cochlea, but they are where the body and the world make initial contact. So, really, the empiricist should be saying that all knowledge traces back to the retina and the eardrum (also the skin, nose, and mouth, but these are less important to knowledge than the other two senses, especially vision). Clearly, the retina and the eardrum are physical (bodily) and anthropocentric (species-specific): so, knowledge is held to reduce to bits of human anatomy and physiology, these being the causal origins of experience (whatever quite that is). There is no human knowledge in which these bodily structures are not involved—none that does not go through them. Likewise for metaphysical empiricism.

Already we are wondering if that can be right, given that knowledge is mental and the retina and eardrum are physical (biological). But putting that aside, there is this worry: how can such small and localized entities constitute all of human knowledge (and possibly all the world)? Does our knowledge of reality really reduce to irritations in the small patch of tissue known as the retina? You might reply that visual experience is a good deal more than the retina: it has representational content, is conscious, and figures in reasoning. That is what true empiricism takes to be the foundation and form of all knowledge—full-blooded human experience. Then does experience add something to the retinal stimulations? Of course, you retort—we can’t reduce experience to physical processes in the retina! You are quite right, but notice that you have helped yourself to the inner resources of the mind by insisting on the transcendence of experiences over retinas. And that isn’t true empiricism, since, on this way of looking at things, the mind is no longer a blank tablet: it contributes to, enriches, the retinal input—not everything derives from the senses alone. Thus, we have diluted empiricism with rationalism or nativism by crediting the mind with properties not derivable from bare sensory input. The mind is not remotely a blank tablet under the current dispensation; it is a seething, amply furnished hothouse. So, the determined empiricist might dig in his heels at this point and assert that it is retinal input that is the source of all knowledge (Quine held such a view). The claim might seem preposterous given our customary conception of human knowledge, but the physical empiricist might junk this mentalistic conception and replace it with some scaled-back notion of cortical configurations. Would that be the end of his troubles?

No, because of the implied anthropocentrism. There won’t be anything universal about human knowledge as so viewed. The human eye and ear are quite species-specific (or genus-specific), far more so than our system of knowledge purports to be—we take that knowledge to be more universal, objective, absolute. We don’t think our scientific knowledge, say, is relative to us, like our anatomy in general; that would undermine its claim to be knowledge. Rods and cones are hardly epistemological universals, woven into everything we know. Nor is our knowledge of reality confined to objects of a size that can stimulate the retina differentially, thus giving rise to perceptions of particular types of objects (medium-sized dry goods); for we know about other things too (e.g., atoms). Consider the following (admittedly extreme) thought experiment: there are microscopic men that are no bigger than atoms, and they have a thirst for knowledge. But their eyes respond only to objects at their scale, seeing only atoms and their parts (electrons, protons). They have no perception of macroscopic objects (as we understand that term), yet they wonder whether the particles they see might compose such large objects. The empiricist philosophers among them insist that all their knowledge is derived from, and reduces to, the evidence of their senses, anything else being problematic at best. It would clearly be wrong for them to claim that reality reduces to what they can see with their eyes, just as it is wrong for us to claim that reality reduces to what we can see with ours—the microscopic and the macroscopic, respectively. Their visual set-up is biased in its picture of the world, just as ours is (very low resolution). Similarly, a giant intelligence that sees only whole galaxies, never their constituent stars and planets, has a biased view of the universe—is, in fact, blind to huge swathes of it. The senses are highly selective and species-relative, providing biased pictures of reality. If knowledge seeks to correct this bias, as it clearly does, sense-based empiricism must be false: our knowledge attains a level of universality, and hence objectivity and absoluteness, that cannot be accommodated by what might be called “retinal empiricism”. We can only satisfy the demands of knowledge by moving away from the senses considered as items of human anatomy. Just so, reality itself possesses a degree of universality that is inconsistent with retinal empiricism. The human senses simply don’t have the scope and generality required to constitute human knowledge or reality as a whole. They are too circumscribed, species-specific, idiosyncratic, and variable to fix any knowledge worthy of the name, still less any reality worthy of the name. True, we can learn things by deploying our eyes, but that is a far cry from constituting the entire nature of human knowledge. We certainly cannot hope to define anything by means of language referring to the excitations of the retina, or the vibrations of the tympanic membrane. Such sensory activities cannot create human knowledge by themselves, nor can they suffice to construct an external world. When empiricism is pushed to the limit its limitations become apparent. The sense organs are not the sole organs of knowledge.[1]

[1] The original empiricists knew little about the workings of the sense organs and tended to adopt a first-person perspective on the nature of perception. Once we learn more and take an objective perspective on the senses, their inadequacy as a source of knowledge becomes apparent. They are just physical transducers of impinging energy; they are not mirrors held up to reality. All the talk of “impressions” and “sense data” is scientifically naive; a complex multi-stage process leads up to their formation. At what point does the empiricist want to fix his epistemic origins? Aren’t there many things that could qualify as the foundations of the whole enterprise? The lesson is that human knowledge is an active corrective to the senses, not a passive reflection (mirroring) of them.

Knife Throwing

I have been working on my knife throwing recently. It’s not a mainstream sport perhaps, but it has its own charm. I heard someone the other day describe it as “like darts but more macho”; indeed, but it is more than that. It is technically more difficult to stick the knife in the target than it is to stick the dart in the board, because the knife rotates; so, the skill element is more demanding. Plus, it is more dangerous, potentially lethal. To me it has three characteristics that appeal: aesthetic, athletic, and scientific. The knife flies through the air, spinning beautifully, then it pierces the target with a satisfying thud, as if by magic. Knives are quite beautiful in themselves but their ability to stick in a target when thrown from a distance is a sight to see. The action of throwing a knife so as to achieve this end is athletically demanding and takes a good deal of practice (plus innate talent). There are a lot of clanging misses, rebounding blades, frustrating failures, but when you have the skill down it is like a well-executed tennis stroke. Scientifically, the trajectory of the knife follows strict laws that have to be respected, especially when gauging distance from the target: it rotates at its own pace. The sport is mathematically precise. It isn’t just macho but also artistic, skilled, and scientific. I recommend it.
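
The point about gauging distance can be made arithmetical. What follows is a minimal sketch on purely illustrative assumptions (constant speed, constant spin rate, no air resistance or drop): the knife arrives point-first only at distances where it has completed roughly a whole number of rotations, which is why the thrower's position has to match the blade's own pace of rotation.

```python
# Back-of-the-envelope model of gauging distance in knife throwing.
# Assumptions (purely illustrative): constant horizontal speed, constant spin
# rate, and sticking point-first only after (about) a whole number of
# rotations; air resistance and the drop under gravity are ignored.

def sticking_distances(speed_m_per_s, spin_rev_per_s, max_rotations=4):
    """Distances (metres) at which the blade arrives point-first,
    i.e. after 1, 2, ... whole rotations."""
    return [speed_m_per_s * n / spin_rev_per_s for n in range(1, max_rotations + 1)]

if __name__ == "__main__":
    # Hypothetical figures: ~6 m/s release speed, ~1.5 revolutions per second.
    for n, d in enumerate(sticking_distances(6.0, 1.5), start=1):
        print(f"{n} rotation(s): stand roughly {d:.1f} m from the target")
```

On these toy numbers the sweet spots fall every four metres or so; the real lesson is only that half a step forward or back changes the angle at which the blade meets the target.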

2024 Resolutions

I don’t have any, except one I can’t implement. I would like to ban all teaching of my work in American philosophy departments. Why should I let the products of my labor be used for free by people who refuse to employ me? Shouldn’t there be a law against this? Shouldn’t I have the right to prohibit this kind of exploitation? But it isn’t so: people can use my work in their teaching completely against my wishes, and be paid to do so! Why should people who have been cancelled have to accept that other people can use their work for profit? Suppose someone uses my writing on the mind-body problem in a couple of classes but would refuse to invite me to give a talk, or even have me on campus: is that morally acceptable? It is not acceptable to me—I don’t want someone like that teaching my work. So, please desist. Moreover, I don’t want philosophers in America to use my work for research purposes—discussing me in print, citing me, or otherwise benefitting from my labors. I wish they would stop, because I have no desire to be part of the conversation in this country. Read me if you must, but don’t take my name in vain. There may be exceptions to this rule, where I would be willing to relax my ban—I may grant special permission to teach and cite me—but in general I forbid people to make use of my work, all fifty years of it. I must be the first academic in history who doesn’t want his work discussed by a large section of the academic community (sic). It’s a pity I can’t bring the weight of the law upon violators.[1]

[1] I have no objection to the rest of the world making use of my work. I particularly resent being taught at the University of Miami, where I am banned from the campus, though there is probably no danger of that.

Does the Mind Age?

The mind has an age, but does it age? The body also has an age, and it does age. The person has an age, and that thing too ages. How old are these things? The body is probably the oldest, because its existence pre-dates the existence of both mind and person (these two are connected): the fetal body in the womb exists before the mind or person does. Clearly, all three are older than they are customarily taken to be, since someone’s age is conventionally reckoned from the date of birth—and our body, mind, and selfhood pre-date that (to the best of our knowledge). We really have three ages, none of them coinciding with our conventionally defined age. But that is not my question: my question is whether the mind can be said to age (verb), as the body and person can be said to age. The OED gives the following definition for “age” (verb): “Grow old, mature, show the effects of the passage of time; begin to appear older”. What are the usual effects of the passage of time on the human being (also animals generally)? They are very familiar: sagging and wrinkled skin, grey hair, muscular atrophy, stooped posture, slowness of movement, joint stiffness, bone fragility, proneness to fatigue, and the like. These are generally expressed in appearances—the person looks old—and they are the defining symptoms of age. Ageing is the process of coming to appear old as thus understood. The person is declared old in virtue of these bodily changes; ageing is the accumulation of such changes over time.

But notice that they are all bodily: nothing about the mind is mentioned. And with good reason—because none of these apply to the mind (you don’t have a wrinkled mental skin, because the mind doesn’t have a skin). It would be a kind of category mistake to attribute such changes to the mind. Based on these criteria, then, the mind doesn’t age. But aren’t they the only criteria of ageing that we have? If so, the mind doesn’t age. The mind changes with time, it grows and matures, it may even decline: but it doesn’t age. Clothes and shoes age, as do houses and cars, as do animal bodies, but minds don’t undergo standard ageing processes: they don’t bear the tell-tale marks of the passage of time (“wear and tear”). Many things have an age but don’t age: regions of space, atoms, oceans, planets, doctrines; and some things neither have an age nor age: numbers, universals (according to Plato), time itself, modus ponens, to name a few. So, the mind might belong to one of these groups: minds don’t undergo the ageing process, though they evidently have an age. They change with time, but they don’t grow old, or appear to grow old.

You might reply that minds have their own type of ageing process, their own way of growing old, admittedly not the same as the body’s ageing process: forgetfulness, mental slowness, concentration problems, confusion. But these are not the effects of time (the rub of the world), and they are not confined to the old (people whose bodies have been around for a comparatively lengthy period). Some young people are forgetful, mentally slow, can’t concentrate, and get confused—that doesn’t mean their minds have prematurely aged, or that they are mentally old. What is true is that brains age, like the body as a whole, and this can affect mental functioning; but it doesn’t follow that the mind ages. The mind doesn’t appear old—whatever that might be in the case of the mind. Alcohol and disease can cause these kinds of psychological conditions, but they have nothing intrinsically to do with age. They may be correlated with age, but they aren’t examples of ageing—any more than being unemployed is a type of ageing, or having right-wing opinions.

And isn’t it simply a fact that one’s consciousness does not change its nature as we (our bodies) grow old? It feels the same as it used to when we were young: my visual experience, say, has undergone no degradation due to ageing—it isn’t fainter or slower or more wrinkled. It is the same as it always was. This is why people say, as they grow older, that they don’t internally grow older, as the body indubitably does. Really, the mind is not the kind of thing that ages; to suppose otherwise is a category mistake, based on viewing the mind through the lens of the body. The brain may shrink with age, so that it is appropriate to speak of an ageing brain, but the mind doesn’t shrink—that is just a category mistake. The mind changes during the period known as old age, as it changes during the period known as adolescence, but in neither case is it appropriate to speak of mental ageing—all we have is age-related change.[1] Hence the feeling that I have not aged (my mind, my soul, my consciousness, my self)—though my body palpably has. Ageing is what you can see in the mirror, but you can’t see your mind in a mirror. Nor can you introspect and notice that your mind appears a lot older than it was a few years ago—though you might notice that you forget more than you used to (as you tend to think about different things now). Change with age is not ipso facto ageing. If someone becomes forgetful at the age of twenty, that doesn’t imply that he or she has aged—they might just have suffered an accident to the head. The concept of ageing is really defined by the various symptoms of age that I listed earlier and has no life outside of these symptoms, but the mind doesn’t exemplify such symptoms—therefore, it doesn’t age. Not that anyone seriously denies this; we simply don’t talk that way in the normal course of things. We don’t suppose that the mind literally ages in the same sense in which the body ages (and hence the person). But the metaphor might prove irresistible, given our tendency to model the mind on the body; it is therefore salutary to inoculate ourselves against such a tendency—not least because of the dangers of ageism as a prejudice. The whole idea of eternal life in disembodied form is premised on the agelessness of the soul, and to that extent is not conceptually incoherent, as the same idea about the body arguably is (how could a material animal body not age?). The concept of mind is the concept of an un-ageing thing; the self of the Cogito knows no ageing process. Thus, the negative connotations of ageing don’t apply to the mind (perhaps they shouldn’t apply to the body); the mind remains spanking new, never scuffed and worn, flabby and bent, wrinkled and discolored. Some regrettable mental changes might be caused by the ageing of the body, but they are not thereby instances of ageing. The mind remains forever young.[2]

[1] Would anyone say that a marked increase of intelligence that reliably occurs in one’s seventies is an example of ageing? I think not.

[2] This sharp contrast between the mind and the body–one ageless, the other inevitably afflicted with ageing—is surely part of the human condition, as the existentialists conceived it. We are conscious of ourselves not just as destined to die but also as ageing steadily in that direction, while consciousness itself is free of such degradation. Thus, we are mixed beings, confusingly so. We embody the “contradiction” of both ageing and not ageing, with each vying for supremacy in our self-conception. Am I old or am I young? I am both. Or neither.

Experimental Atomic Psychology

Is there any evidence for the atomic hypothesis in psychology, however slender? It certainly doesn’t seem to us that our consciousness is composed of little psychic particles separated in space—the analogue of physical particles. But there is one area in which the hypothesis enjoys some phenomenological support—I mean, the experience we have when we close our eyes in the dark. Our visual field seems populated with tiny dots, light against a dark background, and these dots are visited by edges and blobs that move slowly around. The dots are shaped into primitive forms that seem to seek greater solidity and sharpness, like ghosts of the real world. In this experience, admittedly unusual, our visual field appears granular, corpuscular, pixelated, like a pointillist painting—an assembly of point-like particles. Might this be evidence of an underlying particulate structure operating at the level of the brain, one neuron per point perhaps? I want you to do an experiment: next time you are in bed at night close your eyes and focus on the tiny dots in your visual field, trying to get a sense of their sharpness and clarity. Now open your eyes and gaze into the gloom: vague forms will appear in the darkness, say the shape of an overhead fan. Can you still see the dots? If you are anything like me, you will still see them, but they are slightly less well-defined. The perceived shapes have somehow reduced the dots’ phenomenological salience without eliminating them altogether. If you close one eye, you find that the dots gain slightly in salience as the external shape is less clearly perceived. And if you close both eyes again, they come back in all their understated glory. The external form has co-opted what was only vestigially and virtually present in those edges and blobs. It would be possible to replicate this experiment more systematically: assemble a group of subjects and gather reports under varying conditions of illumination, beginning with pitch dark. At what point do the visual pixels disappear from consciousness completely? Is there much intersubjective variation? Can input from another sense interfere with the disappearance? I will venture a hypothesis: pixelation is inversely proportional to form—that is, the greater the perceived form the less the apparent pixelation, and vice versa. At the point of ordinary daytime illumination, pixelation is zero, except perhaps in abnormal conditions. When there is no form to see, as in the closed eyelid condition, the pixelation is at its height; but as forms enter the visual field, even in low illumination, the pixels disappear from view. This is not to say they no longer exist, just that we have no awareness of them. Perhaps blind people have vivid pixelation and their pixels never disappear from view; perhaps hallucinogenic drugs can heighten their presence; perhaps certain diseases cause them to occur in normal vision—these are all empirical questions. We do know there is such a thing as visual snow syndrome, which sounds a lot like pathological pixelation. The idea, then, is that the brain employs two mechanisms in the production of visual percepts: one mechanism generates mental atoms or points; the other mechanism organizes these into visual forms, generally controlled by an external stimulus. The mental atoms are organized into wholes that represent shapes and other qualities. If psychological atomism is true, the same should hold of the other senses, though the atoms may be less accessible to introspection.
Taste, for example, operates by way of innumerable receptors that work to create (in conjunction with the brain) gustatory points, the totality of which constitute (say) the taste of pineapple. The taste is not an ontological simple having neither parts nor structure; instead, it is a complex sensation made up of many elements (if you have ever partially lost your sense of taste, you will know what I mean). Then the general hypothesis is that all mental phenomena obey the same basic principles—atoms combining to produce complex mental states. Is there anything analogous to closed-eye vision for thought, i.e., unorganized dots of thought awaiting assembly into a coherent whole? Not that I know of, but it would be worth investigating whether certain kinds of degenerative brain disease produce such effects, e.g., Alzheimer’s disease. What about sleep and dreams in normal humans? Rational thought seems to fall to pieces there. Could drugs have this kind of effect? Surely it would be possible for the brain to cease to be able to put concepts together to form coherent thoughts. Couldn’t concepts themselves break down into parts that refuse to join with other concepts (neurons can clearly lose the ability to connect with other neurons)? Empirical, indeed experimental, work could be done to determine the answer to such questions. It need not be left up to philosophy. So, the atomic hypothesis could be subjected to empirical test, beginning with the bedroom experiment I suggested above. Returning to vision, we can confidently report that the retina and the brain have a pixelated structure—rods and cones in the retina, neurons in the brain—so it is on the cards that the mind itself duplicates this structure. We may not be conscious of it (what biological point would there be in that?) but it may yet be present in consciousness, hovering just below the surface. Eyelid vision hints at these subterranean depths, and it may be that they exist elsewhere too. It is true that we can’t detect the mental particles by the use of particle accelerators that bombard the mind with supercharged particles and reveal the hidden gems, but we have other ways of determining the fine structure of mental phenomena (such as introspecting our closed-eye visual field). The brain is thus (we conjecture) a synthesizer of basic mental atoms that together form mental life as we experience it. First it manufactures the elementary particles, then it assembles them into mental complexes. What the most elementary particles are is, as they say, a matter for further research. We already know the mind is combinatorial at more coarse-grained levels; atomic psychology simply extends this basic idea down to smaller scales. The search for the elusive mental quark is now on.[1]
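
Since the hypothesis is quantitative (pixelation inversely proportional to perceived form, falling to zero in daylight), it can at least be written down precisely. Below is a minimal sketch of the conjecture and of how the proposed bedroom experiment might be scored; the linear relation, the rating scale, and the sample ratings are assumptions made purely for illustration, not results.

```python
# A toy statement of the conjecture that apparent pixelation varies inversely
# with perceived form, plus a sketch of how the proposed bedroom experiment
# might be scored. None of this is data: the linear form of the relation, the
# rating scale, and the example ratings are assumptions made for illustration.

def predicted_pixelation(form_strength):
    """Predicted salience of the closed-eye 'dots', given perceived form on a
    scale from 0 (eyes shut in the dark) to 1 (ordinary daytime vision)."""
    form_strength = min(max(form_strength, 0.0), 1.0)
    return 1.0 - form_strength  # simplest inverse relation: more form, less pixelation

ILLUMINATION_LEVELS = ["pitch dark", "faint moonlight", "dim lamp", "daylight"]

def consistent_with_hypothesis(ratings):
    """True if a subject's dot-salience ratings never rise as illumination rises."""
    return all(earlier >= later for earlier, later in zip(ratings, ratings[1:]))

# One hypothetical subject's report (salience 0-10 at each illumination level).
example_ratings = [9, 6, 3, 0]
print(consistent_with_hypothesis(example_ratings))  # the hypothesis predicts True
```

On this way of setting things up, intersubjective variation would show itself in how steeply individual curves fall, and visual snow syndrome would be a curve that never reaches zero.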

[1] I first discussed mental atomism in “Consciousness, Atomism, and the Ancient Greeks” in Consciousness and its Objects (2004). Research is proceeding slowly.

Atomic Psychology

Atomic physics has achieved the status of common sense. It is hard now to understand why it took so long to arrive at it. Despite the efforts of a couple of pre-Socratics, it was not until the nineteenth and twentieth centuries that atomic physics came into its own, driven by technology. People just didn’t have the idea of the minute constituents of matter, largely uniform, and constituting the whole of the physical universe. They didn’t envisage a hidden particulate (“corpuscular”) level of physical reality. Same with biology: biologists didn’t get the idea of the cell till fairly late in the game, let alone the molecular structure of the gene. Now biology is an atomic biology: bodies made of organs, organs made of sub-organs, sub-organs made of cells, cells made of nuclei, mitochondria, and other tiny structures, and so on down to biochemical molecules. Atomic physics and atomic biology are just part of the modern intellectual landscape, despite being invisible for centuries. But there is no such thing as atomic psychology. You would think that atomic psychology should be well developed by now, given our close proximity to the mind; but in fact, it doesn’t exist even as a twinkle in the eye of the aspiring mind scientist (I use that phrase because the term “psychologist” conjures up a rather limited picture of what a student of the mind might hope to produce). Why aren’t the atoms of mind staring us in the face, if there are such? Is it because the atomic conception simply doesn’t fit the mind? However, there are reasons to believe that some sort of atomic psychology must be true, even if it is not evident to us introspectively. First, it is hard to believe that mental states, as they are phenomenologically presented and commonsensically conceived, are ontologically primitive; it is hard to believe they have no further analysis—decomposition, part-whole hierarchy. What, do they just spring into being, just as they are, as indivisible wholes? Is there no micro to their macro? Clearly, some kind of breakdown does exist, because there are complex mental states that are composed of simpler mental states (e.g., regret is composed of belief and desire). Also, propositional attitudes have complex conceptual content—propositions are decomposable entities. So, why shouldn’t the breakdown go deeper? Second, there appear to be commonalities between mental states that suggest recurring constituents: for example, pains, though very various, all partake of a single phenomenological quality—painfulness. Indeed, aversive mental states generally share a phenomenological feature: fear, hunger, and sexual frustration all display a quality of disagreeableness that marks them as belonging together. So, is there a psychological atom corresponding to this trait—the “nasty-tron”? It is negatively charged, like the electron, and unlike the “fun-tron” that corresponds to pleasant feelings, which is positively charged (I speak metaphorically). Why not suppose that there is a deeper level of psychological atoms underlying anything we can detect introspectively? Why not go the whole hog and see where this idea takes us—to a panoply of finitely many psychic particles that exist in the mind-brain and combine together to yield what we know as the mind? These particles could constitute a kind of periodic table of psychic elements—the basic constituents of the psychological universe.
The situation is analogous to what obtains in linguistics (which is close to psychology): from sentences to phrases to words to morphemes to underlying constituents of morphemes.[1] In other words, the brain is a place where the atoms of mind and language live, hitherto evading inspection. The brain is made of biological cells (neurons—note the suffix), which are made of molecules and atoms, and it is also made of psychical cells that break down further into more elementary components. Then, we achieve unification with physics and biology: psychology emerges as also atomic in structure. There is macro psychology and micro psychology, big mind and little mind. As a bonus, we might find that micro psychology brings us closer to understanding the mind-brain link, because the psychic particles are more intimately tied to the physics of the brain (they might not look much like the macro mental states they constitute). Possibly these mental particles are to be found outside the brain too, so that we end up embracing a sort of panpsychism (God help us), but they might also be peculiar to the brain for some reason. In either case, the mind comes to have an atomic architecture: the gross resulting from the minuscule, the observable composed of the unobservable. It has its lines and points, its planes and solids. The mind scientist will want to trace these compositional relations, discover their laws, and formulate theories that impose order on multiplicity. When asked what his academic specialty is he will say, “Atomic psychology”. Others in his department might reply, “Macro psychology” or “Cosmo-psychology” (aka “social psychology”). Maybe there will be a small sub-department devoted to interdisciplinary work between physicists and atomic psychologists called “Department of Micro Cognitive and Physical Science” (mainly consisting of string theorists and people called “bling theorists”—the ultimate particles of the mind being deemed “incandescent” in some way). The excitement will be palpable, yet dignified—after all, this is Deep Stuff. Seriously, though, we should not dismiss the idea of atomic psychology; and isn’t this what many psychologists have hankered after these many years—simple unanalyzable sensations, elementary conditioned reflexes, “bits” of information, units of psychic force, just noticeable differences, unconscious primitive drives, discrete bumps on the skull, IQ points, little homunculi in the head? Maybe one day atomic psychology will reach maturity just like atomic physics and atomic biology.[2]
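
To make the analogy with linguistics concrete, here is a toy sketch of the part-whole bookkeeping an atomic psychology would need: a recursive decomposition that bottoms out in atoms, on the model of sentences decomposing into phrases, words, and morphemes. The particular decomposition shown (regret via belief and desire, ending in the hypothetical "nasty-tron" mentioned above) is illustrative only, not a proposed analysis.

```python
# A toy rendering of the compositional picture: complexes built from
# constituents, bottoming out in unanalysable atoms. The decomposition shown
# (regret into belief and desire, a hypothetical "nasty-tron" at the bottom)
# is illustrative only, not a serious psychological analysis.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Element:
    name: str
    parts: List["Element"] = field(default_factory=list)

    def atoms(self) -> List[str]:
        """Names of the bottom-level constituents of this element."""
        if not self.parts:
            return [self.name]
        return [atom for part in self.parts for atom in part.atoms()]

# Hypothetical decomposition, for illustration only.
regret = Element("regret", [
    Element("belief that the loss occurred", [Element("concept: loss")]),
    Element("desire that it had not", [Element("nasty-tron")]),
])

print(regret.atoms())  # ['concept: loss', 'nasty-tron']
```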

[1] Much the same is true of what might be called “atomic logic”—analyzing propositions and their logical relations in terms of logical atoms and their molecular compounds; indeed, just this terminology already exists.

[2] We could also say that mental states must already be partly composed of physical atoms, since their causal powers rely on the actions of physical atoms ultimately. If causal role is intrinsic to mental states, and causal role requires physical implementation, then mental states must harbor a physical atomic structure somewhere in their total constitution. This would mean that pains, for example, have both a physical and a mental atomic nature—both sorts of atoms exist within them. They are not as simple as they seem. The brain is a kind of atomic hothouse, contrary to initial appearances. It is not an undifferentiated grey lump or a continuous flowing river.

The Cruel Gene

I can forgive the genes their selfishness; it is their cruelty I can’t forgive.[1] I understand their need to build survival machines to preserve themselves until they can replicate: they need the secure fortress of an animal body. But why did they have to build suffering survival machines? Hunger, thirst, pain, and fear—why did they have to make animal bodies feel these things? Granted the survival machines benefit from having a mind, but it was cruel of the genes to produce so much suffering in those minds. Couldn’t they have found another way? Are they sadists?

            The answer is that suffering is an excellent adaptation. Genes build animals that suffer because suffering keeps the animal on its toes. If the body is the genes’ bodyguard, it pays to make the bodyguard exceptionally careful. Since pain signals danger, and hunger and thirst signal deprivation, and fear motivates, the genes will build bodyguards that are rich in these traits. To build a bodyguard that suffered less would be to risk losing out to genes that build one that suffered more. This is why we find suffering so widely in the animal kingdom—because it is so useful from the genes’ point of view. It probably evolved separately many times, like the eye or the tail. Pain also comes in many varieties, again like the eye and the tail. There doesn’t seem to be any complex animal that lives without suffering, so the trait is clearly not dispensable. Surviving and suffering therefore go hand in hand.

            Most adaptations have a downside: a thick warm coat is a heavy coat, brains use up a lot of energy, and fur must be groomed. In fact, all adaptations have some downside, because all need maintenance, which calls upon resources. But pain and suffering have very little downside from the point of view of the genes. They don’t slow the animal down or make it lethargic or confused; on the contrary, they keep it alert and primed. The avoidance of pain is a powerful stimulus; hunger is a terrible state to be in. Animal behavior is organized around these aversive psychological states—and the genes know it. They are cruel to be kind—to themselves: suffering helps protect the survival machine from injury and death, so the animal lives longer with it than without it, with its cargo of genes. The reason the genes favor suffering is not from altruistic concern for the life of the animal, but merely because a longer life helps them replicate. The genes aim to reproduce themselves, and this requires a fortress that can withstand adversity; suffering is a means they have devised for keeping their fortress alive and functioning until reproduction can occur. Since there is so little downside to pain, from their perspective, they can afford to be lavish in its production. Thus the animal suffers acutely so that they may survive. They know nothing of pain themselves (or anything else), but natural selection has seen to it that pain is part of animal life. Nature has selected animals according to the adaptive power of their suffering. Genes for suffering therefore do well in the gene pool.
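
The logic of losing out to genes that build a bodyguard that suffers more is ordinary selection arithmetic, and a toy calculation makes it vivid. The sketch below rests on invented numbers and a deliberately simple haploid set-up; it shows only how a small survival advantage compounds across generations.

```python
# A minimal selection sketch of the claim that genes for suffering do well in
# the gene pool. Two variants build bodyguards differing only in how much they
# suffer; the more-suffering bodyguard is assumed to survive to reproduce
# slightly more often. All numbers are invented; this is standard haploid
# selection arithmetic, not a model of any real population.

def suffering_allele_frequency(generations, p0=0.01, w_suffering=1.05, w_stoic=1.00):
    """Frequency of the 'suffering' variant after the given number of
    generations, updating p' = p*w_s / (p*w_s + (1-p)*w_o) each generation."""
    p = p0
    for _ in range(generations):
        p = p * w_suffering / (p * w_suffering + (1 - p) * w_stoic)
    return p

print(f"start: 0.010, after 300 generations: {suffering_allele_frequency(300):.3f}")
# A 5% survival edge takes the variant from 1% to near fixation.
```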

            Suffering has no meaning beyond this ruthless gene cruelty. It exists only because natural selection hit upon it as an adaptive trait. A mutation that produced a talent for pain, probably slight pain initially, turned out to have selective advantage, and then the adaptation developed over the generations, until spectacular amounts of pain became quite routine. As giraffes evolved long necks, and cheetahs evolved fast legs, so animals evolved high-intensity pain. As an adaptation, pain is very impressive, a clever and efficient way for genes to keep themselves in the gene pool; it is just that pain is very bad for the animal. Pain is an intrinsically bad thing for the sufferer—but it is very beneficial to the genes. But they don’t care how bad it is for the sufferer—they don’t give it a second thought. Pain is just one adaptation among many, so far as they are concerned. Maybe if there was another way to obtain the beneficial effects of suffering—another way to keep the survival machines on their toes—the genes would have favored that: but as things are suffering is the optimal solution to a survival problem. The genes are unlikely to spare the animals that contain them by devising another method more compassionate but less efficient. Suffering just works too well, biologically. It wasn’t used for the first couple of billion years of life on earth, when only bacteria populated the planet; but once complex organisms evolved pain soon followed. It probably came about as a result of an arms race, as one animal competed with another. Today plants survive and reproduce without suffering: it is not an element in their suite of adaptations. They are the lucky ones, the ones spared by the ruthlessly selfish genes. Mammals probably suffer the most, and maybe humans most of all, at least potentially. We suffer acutely because the genes decided they needed an especially finely tuned and sensitive survival machine to get themselves into future generations. The possibility of excruciating torture was the price they left us to pay. They don’t suffer as their human vehicle endures agonies; yet the reason the agonies exist is to benefit the genes. The genes are the architects of a system of suffering from which they are exempt.

            Animals are probably tuned better for suffering than for pleasure and happiness. It is true that the contented sensation of a full belly is a good motivator for an animal to eat, but then the animal has already eaten. Far more exigent is the demand that an empty belly prick the animal into action. The pleasure of grooming might motivate animals to groom, thus avoiding parasites and the like. Far more exigent is the need to avoid injuries from bites and battering. The system must be geared to avoidance, more so than to approach. Thus animals are better at suffering than at enjoyment—their suffering is sharper and more pointed. Some animals may be capable of suffering but not enjoyment, because their pattern of life makes that combination optimal. But no animal feels enjoyment in the absence of a capacity to suffer, not here on earth. Suffering is essential to life at a complex level, but enjoyment is optional.

            This is why I can’t forgive the genes: with callous indifference they have exploited the ability of animals to suffer, just so that they can march mindlessly on. They have no purpose, no feelings, just a brute power to replicate their molecular kind; and they do so by constructing bodies that are exquisite instruments of pain and suffering. If they were gods, they would be moral monsters. As it is, their cruelty is completely mindless: they have created a world that is terrible to behold, yet they know nothing of it. It just so happens that animal suffering follows from their prime directive—to reproduce themselves. Animal suffering is how the genes lever themselves into the future. It is one tactic, among others, for successful replication. Its moral status is of no concern to them. The genes are supremely cruel, but quite unknowingly so—like blind little devils.

Colin McGinn

[1] I indulge in rampant personification in this paper, knowing that some may bristle. I assure readers that it is possible to eliminate such talk without change of truth-value. Actually it is a helpfully vivid way to convey the sober truth.
