Parasites and Disgust

What is the evolutionary origin of the emotion of disgust, along with its behavioral expression? Why was it selected? I will suggest that parasites played a vital role.[1] A desideratum of any theory is to distinguish disgust from two other emotions easily confused with it: fear and aesthetic revulsion. Fear is much broader than disgust and not all disgust is accompanied by fear. Nor is disgust the same as revulsion at the ugly or deformed or merely kitsch; we don’t feel nausea when confronted by a burn victim, say. Disgust is something quite specific; it has a specific type of stimulus. So, what in particular in the environment of our ancestors called for the selection of this emotion? What perceptible thing made it a useful emotion to have? I hypothesize two principal types of triggering stimulus: intestinal worms and corpse-feeding maggots. There was a time when these were common sights—the time before toilets and graveyards. Feces strewn about; bodies left to rot. It would be easy to become infected in such an environment, especially by the eggs of these revolting creatures. One might tread on a turd or consume rotting flesh (food being short). Imagine a time when our hominid ancestors felt no disgust at these things, scarcely even understanding their nature: feces and corpses were not avoided or felt to be revolting. There would be rampant parasitism by the organisms in question—as by lice, bed bugs, ticks, and fleas. Those infected would suffer the consequences—malnutrition, diarrhea, bowel pain, etc. This would not be good for survival and reproduction. Such individuals would be selected against. There would be no medical treatment and no knowledge of causation. The worms would have it their way. There would be a desperate need for an anti-parasite adaptation. Thus, disgust arose—the feeling and the behavior. Worms in stool would henceforth become the object of intense revulsion and avoidance. Rotting corpses riddled with maggots would be run a mile from. Both would come to have an appalling smell. It isn’t that these things scare you like saber-toothed tigers or perilous precipices, or that they evoke strong aesthetic distaste; rather, they encourage sedulous avoidance and a powerful disinclination to eat in their presence. In the human arms race against intestinal worms, disgust became a powerful motive to take appropriate evasive action. Call this the parasite theory of (the origin of) disgust: the emotion of disgust is parasite-specific, parasite-directed; it isn’t some general danger-avoidance emotional reaction. If there were no parasites, there would be no disgust—though plenty of fear and ugliness aversion.

Unlike the pathogen theory, the parasite theory locates disgust in perceptible facts—not in invisible germs (bacteria and viruses). People had no knowledge of such things back when disgust evolved, and certainly no idea of their role in causing disease. But worms and maggots are all too perceptible, especially if they appear in one’s own stool (sorry, sensitive readers). Notice that it is not morphology alone that triggers the disgust reaction—spaghetti doesn’t elicit disgust. It is morphology in the presence of feces and corpses. The parasite would be just as abhorrent if it were shaped like a leaf or even looked like a flower. It’s the context that matters—worms in feces. This configuration would likely generalize or radiate outwards—anything to do with feces or corpses would be tainted by the disgusting. Raw meat, urine, blood, decay, the anus, bodily fluids, internal organs, insects, snakes, rotten food, certain animals, the slimy, the squirming, the dirty—all these would come to be found disgusting to different degrees. But none so much as the primal objects of disgust—intestinal worms and flesh-eating maggots. The very word “parasite” would come to elicit feelings of revulsion (it just means “one who eats at another’s table” etymologically). It is easy to attach disgust to quite innocent things by likening them to the primal objects of disgust—calling someone a “worm” or a “piece of shit” or simply a “parasite”. It is not so much that shit itself is disgusting, or even dead bodies; it is their tendency to provide a home for parasites. We don’t find caterpillars disgusting despite their similarity to worms, simply because they are never found in shit or dining on corpses. Parasitism is the culprit, not its physical components. That is what revolts us, nauseates us, makes us scurry away. What are we least disgusted by? Rocks, clear water, mathematical objects—things that know nothing of parasites. We love diamonds and gold—neither parasites nor parasitized. But what if diamonds became parasitic (or always had been)—finding their way into our bowels, exiting in our stool, causing illness and discomfort? Would we be quite so happy to display them about our person? It’s not the physics; it’s the parasitic nexus. Anything behaving like an intestinal worm is going to disgust us, because it was the original reason for developing the disgust reaction; and these creatures cause a good deal of suffering and death. It is perhaps imaginable that in some possible world these self-same creatures, with the same intestinal life-style, might not be objects of disgust, simply because they are good for the host, maintaining a healthy gut, fighting off infection, preventing colon cancer, and so on. Natural selection dislikes only what hinders survival and reproduction, and in our history intestinal parasites certainly did that.[2]

How does the parasite theory bear on the meaning of disgust—the thoughts it occasions, its mode of presentation to the sufferer? In particular, how does it bear on the idea that intimations of life and death are integral to feelings of disgust? It bears on it quite naturally: intestinal worms and corpse-eating maggots are living things ensconced in dead organic matter. Is there anything more disgusting than worms writhing in feces (yet they are just living their natural evolved life)? The living intertwined with the dead, contrasting with it. Or maggots feasting on dead flesh, deriving life from death. There is nothing inherently disgusting in life consuming death—that’s what eating is—but the parasite evokes a strong feeling of revulsion, because we are all too vulnerable to parasites ourselves. We think of ourselves as parasitized, weakened, on the brink of death—that is what it means to us, what it symbolizes. Parasites could be as fatal to our ancestors as predators, and the genes know this; they must adapt or die, so they engineer a counterattack, viz. disgust. Disgust is thus hedged about with looming death, squirming life, the fight to survive—all the apparatus of mortal existence. The gods are never disgusted because they cannot be parasitized; nor do they have to worry about death. For us, though, disgust and death are never far apart—death at the hands of those nasty little parasites. There is an urgency about it, because parasites are urgent business, not to be trifled with. We are largely free of parasites these days (though not all of us), but once they were the bane of our existence—we needed defenses against them. Thus, we acquired a gene for disgust. It’s dirty work, but somebody has to do it.[3]

Like many of our emotions, disgust can seem extreme, hyperbolic, overly dramatic. Do we really need to have such a strong reaction to rodents and earthworms? That seems true of us in our present environment, but we have to remember the environment in which our emotions evolved, many hundreds of thousands of years ago. Back then, life was exposed, dirty, uncivilized, and full of unpleasantness; it was ridiculously short, and people were often malnourished. Disease was rampant and often fatal. In these circumstances extreme emotions were the order of the day, precisely calibrated to deal with the harsh realities of daily life. We needed to feel well and truly disgusted, or else. God knows what kind of intestinal ill-health these poor people had to put up with! Quite possibly, they all suffered from intestinal parasites from an early age—contagion would have been as easy as sneezing. Just consider a typical family’s lavatory arrangements! Then the parasite theory comes to seem eminently plausible, because the problem was so widespread and terrible (is that why Neanderthals died out?). Intestinal worms would be on everyone’s mind, because in everyone’s body, though I doubt they had much idea about what they really were (they had only de re beliefs with respect to them). The human animal needed a specific weapon to fight against them, and disgust is what natural selection came up with (killing the worms by hand would hardly be a solution). We carry the same emotion in our brains today, extreme though it may be. It is easily evoked, protean, and powerful; it can be exploited unscrupulously. If you liken immigration to parasitic invasion, you get a visceral result (literally). We need to tame the emotion while recognizing its primal evolutionary origins. We should not assume it is always rational in the world in which we now live.[4]

[1] I had not thought of this theory when I wrote The Meaning of Disgust (2011) and had not encountered it in the literature I consulted. I don’t know if anyone else has thought of it in the interim. I now wonder why I missed it.

[2] Maggots (fly larvae) can infest the skin and be hard to remove (myiasis); they too are creatures that need to be avoided and are easily contracted. Our ancestors would be as much victims of these as of intestinal worms, though with fewer debilitating symptoms. And there are other types of worms that can get inside the human body and cause problems, more or less severe. Parasites are a fact of life and need their own method of resistance, beginning with the emotional.

[3] Disgust is akin to pain: an unpleasant feeling installed to prevent damage to the organism. We feel pain in response to damaging physical stimuli and we feel disgust in response to damaging parasitic organisms. And isn’t there something phenomenologically similar about the two, as if disgust were a subspecies of pain (it is certainly highly disagreeable)? Punishment can take the form of inflicting pain or inflicting disgust.

[4] The problem of disgust presents itself as a riddle (hence philosophically interesting): what biological function does it serve, why is it so extreme, why are its objects so various and seemingly disjointed? What is it really about (we know what fear is about)? I think the theory presented here does much to solve the riddle, to make sense of a puzzling psychological phenomenon. At bottom (so to speak) it is about worms in the gut, though it ramifies alarmingly. It has more specificity than we might have supposed.

Life: A Synthesis

I am going to attempt something both ambitious and modest: synthesize the various elements of the Dawkinsian view of life as we know it. We are familiar (I hope) with the pillars of the Dawkinsian world-view (zoological philosophy): the selfish gene, the extended phenotype, the genetic book of the dead (the textual body). Genes as immortal self-replicators, the organism as gene vehicle, the phenotype extending beyond the body, the informational content of the genes and the body in relation to past ancestral worlds—all of that. I will say nothing of this by way of defense or explication; my aim is purely to synthesize. How do the pieces fit together? The first thing I want to notice is that the addition of the textual body (and mind) supplements the picture of the selfish gene and the extended phenotype: for we now have the selfish textual gene and the extended textual phenotype. We already knew that the genes are symbolic (this is a commonplace in genetics) because they contain plans for the construction of bodies during embryogenesis—they symbolize bodies—but we now know that they also symbolize past worlds (sometimes lost worlds). They look backwards in time to ancestors as well as forwards in time to progeny. I would even say that they know the things they symbolize—they “cognize” them. They are thus selfish, symbolic, and cognitive (“the epistemic gene”). Genes (DNA) are both ruthless self-replicators (“selfish”) and avid story-tellers (“books”). They narrate and regenerate, represent and survive. The more they survive the more often they get to tell their stories. If they were people, they would write books and help no one but themselves—bookish egomaniacs. Literary self-advancers. In making copies of themselves they re-publish their own literary works (the information about past and future they carry). And they have sold a million copies, to understate their market success. Not very nice maybe, if they were people, but undeniably prolific and powerful, unswervingly self-promoting. As to the phenotype (that was the genotype), its extension now includes its textual component: not just internal organs and skin but also a library of books about things. Since aboutness is a type of reference, we can say that the extended phenotype includes the reference of the symbols in these books—the things in the past that the symbols stand for. The reference relation is not “in the body”; it holds between internal symbols and remote-in-time real-world entities (e.g., deserts of the past). It’s not just beaver dams and anthills but also objects referred to—the extended phenotype stretches back in time (it includes that past desert emblazoned on the back of the horned Mojave lizard). We get the extended semantic phenotype, not merely the extended physical phenotype. The phenotype includes the external environment and reference to it. This is not the old model of a brute physical object, a biological atom; a life-form has words written into it, and their reference reaches back millions of years. We have the literary body as well as the literary gene. If the body were a person, it would be devoted to ancient history. Indeed, the outer products of an animal’s labor (nests, dams, bowers) themselves bespeak ancient worlds, containing records of ancestral life; we can read the past off them. The book of the dead exists outside the animal’s body as well as inside it—the textual extended phenotype. It’s like an actual library located some distance away.[1] That is what is getting selected by natural selection.

I would like to draw a diagram of life as conceived by these concepts, but I can’t (not here anyway). What I can do is describe a picture of life as so conceived—the picture suggested by the Dawkins biological philosophy. You are welcome to draw the picture yourself. First, draw a circle that will depict a nucleus (like an atom or cell): this will be filled with DNA molecules—genes, replicators, selfish little buggers. I see this as colored red. Write inside this circle “texts” and “me-me-me”, so that you notate the nature of the enclosed entities. Around this circle draw a larger circle named “organism” (I see it as beige); inside this circle write “vehicle”, “text”, and “mind and body”. This will be the whole organism as customarily conceived. The DNA sits inside it, protected by it, carried about by it. On the right, draw a broken line with an arrow pointing to the future: this depicts all the copies (replicas) produced by the genes sitting inside the organism. This is gene survival, the point of the entire contraption—it’s a machine for propelling genes into the future. Pointing left, we have a solid arrow harking back to the past; write next to this “ancestral worlds”. Both arrows together depict the story of the life-form in question—where it came from and where it is going. Finally, draw some arrows (two is enough) depicting the extended phenotype of the organism, perhaps with a fuzzy picture of what this might consist of (a nest, an anthill). You might notate these arrows with the words “external text” just to be pedantically correct. So, that’s it, drawing complete. It depicts a nucleus of self-replicators driving the machine forward, calling the shots, surrounded by an obedient casing of flesh and text, suspended between past and future. It is a dynamic not a static system. It is what evolution has manufactured—a device for preserving little chunks of chemical substance, ultimately. Not that the life-form is reducible to chemical substance, but the properties of the preserving body are geared to performing this task. Organisms are the result of chemical propagation, shaped by natural selection. That, essentially, is what the Dawkinsian philosophy tells us when fully synthesized.[2]
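
For readers who would rather see the picture than construct it mentally, here is a minimal sketch of one way to render the described diagram, using Python and matplotlib. The layout, colors, and label placement are my own guesses at what the description calls for, not the author's actual figure.

```python
# A rough rendering (my own layout choices) of the diagram described above:
# a red nucleus of selfish replicators inside a beige organism, a dashed arrow
# out to future gene copies, a solid arrow back to ancestral worlds, and a
# couple of "external text" arrows for the extended phenotype.
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches

fig, ax = plt.subplots(figsize=(8, 5))
ax.set_xlim(-6, 6)
ax.set_ylim(-4, 4)
ax.set_aspect("equal")
ax.axis("off")

# Outer circle: the organism (beige), labeled as vehicle, text, mind and body.
ax.add_patch(mpatches.Circle((0, 0), 2.5, facecolor="beige", edgecolor="black"))
ax.text(0, 1.7, '"vehicle", "text",\n"mind and body"', ha="center", fontsize=8)

# Inner circle: the nucleus of replicators (red), labeled "texts" and "me-me-me".
ax.add_patch(mpatches.Circle((0, -0.6), 1.0, facecolor="red", alpha=0.6, edgecolor="black"))
ax.text(0, -0.6, '"texts"\n"me-me-me"', ha="center", va="center", fontsize=8)

# Broken (dashed) arrow to the right: gene copies propelled into the future.
ax.annotate("", xy=(5.5, 0), xytext=(2.6, 0),
            arrowprops=dict(arrowstyle="->", linestyle="dashed"))
ax.text(4.0, 0.3, "future copies", fontsize=8, ha="center")

# Solid arrow to the left: the record of ancestral worlds.
ax.annotate("", xy=(-5.5, 0), xytext=(-2.6, 0), arrowprops=dict(arrowstyle="->"))
ax.text(-4.0, 0.3, "ancestral worlds", fontsize=8, ha="center")

# Two arrows for the extended phenotype (a nest, a dam), annotated "external text".
for target_y, label in [(2.8, "external text (nest)"), (-2.8, "external text (dam)")]:
    ax.annotate("", xy=(4.0, target_y), xytext=(1.6, target_y * 0.6),
                arrowprops=dict(arrowstyle="->"))
    ax.text(4.1, target_y, label, fontsize=8, va="center")

plt.show()
```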

[1] The human extended phenotype actually includes our written products as well as our technology—libraries as well as locomotives.

[2] I have been reading Richard Dawkins from 1976 to now, from The Selfish Gene to The Genetic Book of the Dead and everything in between. I fancy I know his stuff pretty well. This is my brief attempt to bring it all together, neatly and comprehensively. I only wish I had more of an opportunity to discuss it with him.

Economics and Ethics

Economics is the domain of the selfish. Ethics is the domain of the selfless. So we have been schooled to think. In economic activity (exchange, purchase) we acquire goods: we benefit from what we receive; we get what we want. The act is essentially selfish—self-interested, even greedy. Ethics doesn’t come into it, except negatively. We may be berated for our lack of concern with less fortunate others. Charity belongs to ethics: to give selflessly, expecting no return. We benefit others not ourselves. The charitable person is a good person; the regular consumer is something less than that—morally neutral or morally deficient. It is better to give than to receive, we are told. The person who gives, gives, and gives is better than the person who buys, buys, and buys. The buyer is self-indulgent; the giver is self-sacrificing. The buyer cares only for himself; the giver cares for others. Egoism versus altruism, me versus you. This fits with our puritanical streak: we shouldn’t indulge ourselves; we should help others. It’s not Christian to buy, but it is Christlike to give—to sacrifice for others. What you give away you cannot spend on yourself, so charity is an act of asceticism. And the more you give the better you are. There are some who maintain that we should all give away as much of our income as will lead to material equality and stop our rabid consumerism. The charitable act is the right act; the purchasing act is the wrong act (given the state of the world).

Is all this true? There are two sides to the question: is buying morally questionable, and is giving morally unquestionable? The first question has obvious and well-known replies: in buying we give as well as receive; we stimulate the economy leading to greater prosperity; we treat others as equals not victims or incompetents. Our motives may be selfish, but selfishness can lead to good consequences; and anyway, we may take pleasure in benefitting the seller (what will happen to her if we refuse to buy from her?). A thriving active economy benefits all. In buying we give the other employment, self-respect, and a source of wealth. That is why spending is morally better than saving: saving benefits no one; spending encourages economic progress. A society of rich misers is a stagnant society. Thus, the invisible hand, accidental altruism, selfish selflessness. Egoism entails altruism. Buying is not ipso facto an immoral act; in fact, it is quite admirable in its way. But is it as admirable as pure giving (buying is impure giving)? On reflection, charity looks uncomfortably close to theft. Someone else takes from you and gives you nothing in return—they benefit from your labor. They steal time from you, in effect. That may not be their motive but they end up in the position of the thief—better off at your expense. They do no work, yet they benefit from your work. You might be a coalminer giving to a charity for unemployed miners—you work at the coal face while they lounge at home, funded by you. You may also be pressured to give in this way by social sanctions, even ostracism. You may resent it. It doesn’t feel right to you. But it’s better to give than to receive etc. What if charity were mandatory by law? Would that be a good law? Robin Hood used to rob from the rich and give to the poor—yes, but he did rob the rich. What about robbing the moderately well off to give to the slightly less well off? It’s still robbery, and isn’t robbery wrong? Isn’t giving to charity really like working for no pay? And isn’t the recipient of charity in the enviable position of being paid to do nothing?

There are other questionable aspects of charity. When does it become folly not generosity? Suppose someone of moderate means decides to give away most of his money to his well-off friends: he thus suffers a significant reduction in his quality of life while benefitting those who need no more income (they buy fancy hi-fi equipment with his donations to them). Is that charity? If it is, it is pretty stupid charity, certainly not morally required. But when does giving slide into the silly category? There seems to be no principled answer except when the recipients are in dire straits (how dire?). Second, there is the question of respect: it is natural to feel embarrassment or shame when receiving charity (some reject it on principle); and the giver can easily slip into a sense of superiority. That doesn’t happen in economic exchange. Each party both gives and receives, but in charitable giving there is an asymmetry: the giver has the better hand, is morally and financially superior, can feel a glow of self-satisfaction. No one really wants to be the object of other people’s charity. It would be better if no charity were needed. Third, charity can be morally wrong if it discourages self-sufficiency and creates dependence. It can stifle economic development. It treats people like children. It undermines self-esteem and energetic activity. It is best seen as a stop-gap measure not an ongoing policy. Fourth, it creates obligation: the recipient is obliged to be grateful and to show gratitude. Not so in the case of buying and selling—no such obligation is conferred. No one is doing anyone a favor. But doing someone a favor is not doing them a favor, because favors need to be repaid. Beware of favors—they will need to be returned, sometimes with interest. Letters of thanks were the traditional way of responding to an act of giving: these took time and energy to write, and they had to be good. You don’t have to do that if you buy something from someone. The recipient becomes indebted, and nobody enjoys that. Don’t you feel a touch queasy when receiving presents that you don’t reciprocate (birthdays as opposed to Christmas)? You become the object of someone else’s generosity; you feel a powerful obligation to express gratitude in fulsome terms, not always sincerely. You are put in an awkward situation. Doesn’t a part of you not want to receive presents—to become the recipient of someone else’s charity? Charity is thus hedged about with moral perils, not always clearly avoided. It would be a better world if it were not needed or practiced.

The general point is that buying gets a bad name while charity is overly prized. I’m not saying charity is never morally indicated or required, only that it is not the unalloyed virtue it is commonly supposed to be. It is in the nature of a regrettable necessity as the world stands not an indispensable component of the ideal society (in that society charity does not exist). Nor am I saying that all buying is good; it depends on the consequences (also the motives). But it is not bad in virtue of being self-interested—that is a misplaced indictment of the economic act. The economic act is a bit like the sexual act—a two-way street in which both participants benefit (ideally). It is reciprocal, symmetric, mutually beneficial, not a sacrifice by one to bring aid to another. Economics is ethical in this sense—generally a good thing. It is not a domain in which the ethical is irrelevant (or “unscientific”). The conventional division between the amoral economic and the moral charitable is too simple. John Maynard Keynes viewed economics as a “moral science”, and he was right to do so. So was Adam Smith right to emphasize the power for good inherent in economic activity, as opposed to pure altruism. The idea of the virtue of self-sacrifice plays too large a role in the old (Christian) way of thinking. Selfishness can be good, and unselfishness less than good.[1]

[1] Smith and Keynes were both interested in economics as a means for improving the human condition, and therefore morally motivated. Economic action is a type of moral action, to be assessed as such, positively or negatively. Sound economics is sound ethics. We don’t hear much about this in the Judeo-Christian moral tradition—entrepreneurial virtue, business ethics. It’s mainly prohibition and self-denial, not enterprise and self-assertion. One does not hear any commandments to build and develop, invest and labor, keep interest rates down, avoid inflation and deflation. Wherein is it stated that it is God’s will to divide thy labor? On the contrary, the money changers are regarded as the epitome of depravity; but aren’t they just currency traders with a place in a thriving economic system? Money is not the opposite of morality but one of its tools. Charity is what we resort to when economics fails. Economics should recognize its ethical dimension and ethics should welcome economics to the fold. Productivity is good.

Archival Minds

Richard Dawkins’ The Genetic Book of the Dead (2024) advances the thesis that an organism’s body is like a book describing ancestral environments. The genes encode facts about how the world was when the organism containing them evolved. We can thus infer the past state of things from the current state of an organism. The first chapter “Reading the Animal” gives the example of the horned lizard of the Mojave: its skin can be read as “a vivid description of the sand and stones of the desert environment in which its ancestors lived” (4). Then Dawkins states his “central message”: “the whole body through and through, its very warp and woof, every organ, every cell and biological process, every smidgen of any animal, including its genome, can be read as describing ancestral worlds” (4). We can call this conception “the textual body” (syntactically like “the selfish gene” and “the extended phenotype”), though Dawkins himself does not employ that phrase. Natural selection sees to this, because an animal must be adapted to the environment in which it evolves—not to any environment. In particular, we cannot deduce the organism’s present environment from its present body, since things may have changed (nor its future environment). The textual body is an essentially historical text—ancient history, in fact. We can expect a chapter or two on life in the sea, even in organisms long living on land, because their ancestors lived in the sea. The human body will contain a description of life in the sea, overlaid by more recent chronicles. I will extend this idea to the mind of an organism, if it has one: the mind too is saturated with talk (text) of how things used to be in our ancestors’ worlds.[1] The mind is a book of ancient lore, of distant pre-history, of bygone formations. Sherlock Holmes could deduce from it all manner of facts about how things used to be. It might even tell us things about the past that we wouldn’t otherwise know. The mental text might be esoteric.

What kinds of facts might it disclose? Facts about the geological environment, the predatory environment, the social environment, the climate environment, the cosmological environment. And what aspects of the mind might do this? All (inherited) aspects, if Dawkins is right (and I don’t doubt he is): perceptual, sensational, emotional, cognitive, linguistic, structural, qualitative, etc. The mind could be as historically informative as the body, except now we are inferring the physical from the mental not the physical from the physical (skin to desert). But we are going to need to be bold, because this stuff is shrouded in mystery, in the mists of time (“mistery”). Even if our best guesses are wide of the mark, they can provide a taste of what this kind of hermeneutics will involve—giant leaps of imagination (it isn’t all as easy as the lizard in the desert). We are trying to read the distant past of the physical world from its traces in the contemporary mind: from inner to outer across enormous stretches of time. So, hold onto your hats, cut loose, never fear! Let me begin with pain: what can present pain tell us about past environments? I think it is clear that the present sensitivity to pain in mammals is surplus to requirements: we just don’t need to be as stuffed with pain receptors as we are, and with things as painful as they tend to be. We are over-pained. Why? First, observe that fish do not appear as pain-rich as we are (we mammals); sure, they feel pain, but it is not at the mammalian level. We might well suppose that this is because their environment is not as full of pain-inducing stimuli as ours: they float comfortably in water, not in contact with rocks and pointy objects. They don’t fall down on hard rock or get hit by flying objects or regularly cut themselves, as we do. Now consider the transition from sea to land—fish-like creatures stumbling across rocky terrain, falling, getting cut. They need to develop better pain receptors, pronto, or else death will pay them a visit. So, they become exquisitely sensitive to injury of all kinds; their soft bodies become equipped with pain generators (not hard insensate shells). Once installed these adaptive mechanisms remain, even when the environment becomes kinder to their bodies. Thus, we can infer from mammalian pain surplus that life came from the sea. If the seas had vanished from the earth in the interim, we could deduce that seas once existed, in which life flourished: for that is the best explanation of our current talent for feeling too much pain. Our painful minds entail past seas in which our ancestors (fish) lived. That was once our environment, so that environment had to exist back in the day. A watery past follows from a surfeit of pain. The existence of H2O can be inferred from the existence of pain; not present H2O, mark you, but past H2O. There is (excessive) pain on earth now, therefore there was water in the shape of seas then. Of course, we already knew that, but what is interesting is that it can be deduced from facts about the mind, if we allow ourselves some imaginative leaps. As the existence of the self follows from the fact of thinking, so the existence of past watery expanses follows from the fact of hyper-painfulness (there should be a Latin term for this type of inference). Suppose sea-dwelling creatures felt no pain at all, while landlubbers did, and that the latter evolved from the former. 
That would tell us there had to be an adaptation to pain during the transition to land, so we can infer an oceanic lifestyle from our present over-sensitivity to pain (imagine if mammals were more armor-plated now yet still pointlessly felt intense pain).

That was a primer in textual-mind reasoning, intended to dip you into the deep end (pun intended). We won’t need to get quite so speculative as we go along. Consider, then, visual sensations: they require the existence of light—they are as of things bathed in light (the world doesn’t look dark all the time). These sensations evolved many millions of years ago and were adapted to the then-environment. The world might have gone dark between then and now, yet the sensations would still be as of light. We might be living in total darkness but our visual sensations would still be suffused with appearances of light; the Sun might have gone out of existence centuries ago. We can’t infer the real existence of light now from our sensations of light now, but we can infer the past existence of light from our present sensations of it. That is, we can infer the existence of the Sun at the time that sensations of light evolved—say, 400 million years ago. We know that the Sun existed back then because visual sensations were adapted to light and that is where light on earth comes from. We can deduce astronomy from biology! Visual psychology implies a star radiating light energy to earth: we can read this in our psychological book of the dead. There had to be a sun 400 million years ago whose light reached earth or else light-filled visual sensations would never have evolved. Visual phenomenology implies stellar astrophysics. If we encounter aliens with a similar visual phenomenology, we can infer that they evolved within striking distance of a sun. If the universe contains sensations of light, then it must contain suns, given reasonable assumptions. This is fundamentally because of the way natural selection works to produce animals. The same argument can be given with respect to sensations of space and time: sensations of these dimensions can only exist because of the real existence of space and time, given the Darwinian theory of evolution. The sensations had to have originated in space and time in order to be of space and time, because they are adaptations to space and time. Space and time could go out of existence and sensations of them remain, but they had to exist in order for the sensations to arise naturally. Of course, we already know that space and time existed during the evolution of minds, but the textual mind theory allows us to infer this from the way minds now are. The evolved mind is a repository of historical (cosmological) information.

Emotion works the same way. Present emotions betoken past realities. The point is familiar enough: fear of heights implies a past life in the trees; fear of snakes implies an abundance of dangerous snakes way back when; disgust at insects indicates a plethora of annoying insects in olden times; fear of wild animals (especially big cats) suggests a history of predation by same in the unprotected past. Then too, we have the things we like: floating in water, climbing trees, relaxing in the sun, a taste for certain kinds of food. These conjure up an aquatic past, an arboreal homestead, outdoor living, available past nourishment. The book of the mind describes the bad, the good, and the ugly—what life on earth was like long ago. It describes the way the world was when our ancestors first set foot (or fin) in it. We know from our present mental attributes that the world contained depths of water, habitable trees, warm sun, and strawberries (or other fruit). The world had to be a certain way then for animals to have the sorts of mind they have now. And it is possible for the world to change while the mind plods on in its old ways; environments on earth do not always remain constant. For example, the human environment has become far less dangerous than it used to be with respect to wild predatory animals; we don’t die from cat attacks that often these days. Yet we seem remarkably fearful of not very much—hence phobias, groundless anxiety, exaggerated fears. Is it that we are suffering from a holdover from the bad old days? If so, we can reasonably infer that the past contained more frightening things than the present does—that it was objectively more dangerous then. Our emotional minds haven’t caught up with the new realities. We can deduce from our excessive fears that in the past things were a lot nastier than now—that we humans had it worse then. Emotions are actually a rich source of historical information, because they speak of what most concerns us—our wellbeing, life and death. Our present overdone emotions tell us that people died earlier in the old days, and probably in nastier ways.

Less familiar is the question of more general and abstract features of the mind in relation to our evolutionary past. This question is of special interest to philosophers. First, what can our reasoning abilities tell us about the past, in particular our inductive reasoning? They tell us that in the past the world was regular and predictable—that nature was uniform. For, if it were not, we would never have evolved the habit of predicting the future from the past; it would have done us no good. If the world were chaotic, induction would be useless; but it is not useless, so the world is not chaotic. Or rather, in the past nature was uniform because we evolved the habit of assuming as much; it wasn’t non-uniform then. It might become non-uniform, but we know it was once uniform—or else induction would never have got a foothold in the animal mind. Again, we have good reason to believe this on other grounds, but the textual mind delivers the same result from a fresh angle. The archives of the mind keep a record of how things were, and things used to be reassuringly uniform (and presumably always were, given that nature would not suddenly turn uniform when intelligent inductive animals began to evolve). Inductive reason is a snapshot of nature’s inherent uniformity. What about intentionality? Can we deduce anything about past reality from intentionality? Intentionality is what permits the mind to take distinct and distinguishable objects as objects: I am thinking of this thing and not some other thing. This capacity must have evolved at some point, probably at the very birth of mind; and it must reflect some general truth about the world in which it evolved. What is that general truth? Simple: that the world consists of discrete objects distinguishable from one another—pebbles, people, points of light. If that were not so—if the world were a formless mass—then intentionality would never have evolved (not the kind that animals on earth actually have). Intentionality implies objective discreteness. This is an interesting result, because it shows that intentionality has objective preconditions; it isn’t just a brute inexplicable fact about the mind. It evolved in order to exploit the granular structure of the world. The two things go together, the one implying the other. This quality of mind speaks of the pluralism of reality, of its separation into parts. Finally, consciousness: what does its existence tell us about reality in the past? It is hard to say, but here is a speculation: it tells us that the past was complex. Consciousness as it now exists seems tied to complex information processing, requiring a complex brain. Let’s suppose this is so; then we can say that it evolved under environmental conditions of complexity (say, in the context of predation). If the ancestral world was never complex, never a problem, consciousness would never have evolved—reflexive behavior would suffice. So, consciousness tells us that the world was challenging and complex at the time it evolved. We can read this off consciousness, projecting backwards in time. The unconscious, by contrast, could be less complex, more reflexive. At least this gives us something to say about the significance of consciousness in the book of the dead. Every trait must have some message about the early environment and this gives us an idea to work with. It’s like memory: memory tells us that the passage of time existed in our evolutionary past and that the lifespan of our ancestors made it a useful trait to possess.
There were facts to be stored and that’s why memory evolved; we can infer the former from the latter. There were complex problems to be solved, so consciousness came to exist; there were facts to be stored, so memory came to exist. The world presents certain kinds of phenomena and natural selection acts accordingly; we can infer these phenomena from the present contents of the mind. The mental archives are full of interesting bits of information, relics of a bygone age, or universal facts of terrestrial existence. The body and mind together yield a rich history; they mirror the past of planet earth. The lizard’s skin reflects the desert it lived in; the mind’s attributes likewise reflect the wider world in which they came to exist. There is a kind of holism at work here linking the organism to its (original) environment. The life of the world is recapitulated in the life of the organism. Phenotype implies geophysics; the earth is written into the organism, its body and mind. The self is a semantic self—it represents what lies (or lay) outside of it. It makes a statement (in the past tense).

Actually, the organism is doubly or triply semantic. First, and basically, we have the book of the dead (in Dawkins’ phrase): what the organism’s make-up tells us about ancestral worlds. Second, we have the genetic code with its information inherited from the past: textual DNA, the genetic archives. Third, and more limited, we have actual human language: its grammar and meaning. That too is a mental trait that should yield information about the past, specifically the time in which it originally evolved. One thing we can immediately infer is that human groups existed at this time (I am speaking of overt speech not a language of thought). Human speech evolved to aid communication, so there were people to communicate with. Speech implies groups. The presence of nouns and verbs surely implies that there were things to talk about and actions these things performed—scarcely a surprising result. Various ontological assumptions are built into language, so we can assume that the world satisfied these assumptions at the time language evolved (why would language make these assumptions unless they were true, or approximately true?). I need not spell these out. The human animal is multiply semantic (in a suitably broad sense) at different biological levels: body, mind, genes, spoken language. There is symbolism everywhere—worlds within worlds, text upon text.[2] We can use our language to talk about these other “languages”, which pre-date spoken language. Symbolism is nothing new, “books” are rampant. We “spoke” before we ever spoke. Animals are full of information, if we only know where and how to look. The selfish gene is a symbolic gene; the surviving (and reproducing) body is a symbolic body; the communicating human is a high-level symbolic operative. We (and other animals) are veritable libraries, stack upon stack of esoteric volumes, or commonplace announcements. We accordingly need to be interpreted, deciphered, translated. We are not semantically transparent. Our mind can be as obscure as our body (or our genes). Still, it is possible to read its hidden messages.[3]

[1] Interestingly enough, Dawkins doesn’t apply his theory to the mind, except for some stray remarks about fear of heights. I don’t know why; maybe it has to do with the “incorporeality” of the mind.

[2] These are not the only symbolic structures crowded into living organisms: there are also mental images, perceptual primitives, contents of desires and emotions, unconscious computations, mental models, signaling systems, and (according to some) immune systems. Each of these differs from the others. Really, living organisms are hives of symbolic activity, none more so than the human; the book of the dead is just one more to add to the list. The model of a physical machine does not do justice to this symbolic plethora.

[3] Some may say it is mere metaphor to call the body, mind, and genes repositories of language. We need not dissent from that, but it is purely verbal: true, other symbolic systems are not symbolic in the way human spoken language is, but then neither is human language like them. There are “languages of art” and “whale language” and “computer language” in that these comprise symbolic systems. There is no good reason to suppose that human language is somehow the measure of all types of symbolism. And what parts of human language—nouns, verbs, intonation, stress, pitch, pauses? There is a family resemblance between all these coexisting symbolic systems.

On Cancelling

Here is a thought experiment for you. Suppose your top ten philosophers had all been cancelled: removed from pedagogical employment, prevented from publishing, and generally shunned. This possibility could cover the philosophers of the last hundred years or of all time. Suppose that their thoughts had therefore never seen the light of day. Suppose too that no one else had ever had them. No Plato, no Socrates (who was rather drastically cancelled), no Aristotle, Descartes, Locke, Leibniz, Berkeley, Hume, and Kant (you can make your own list). Philosophy would have had a very different history. These luminaries might or might not have been guilty of this or that (e.g., blasphemous speech); the thought experiment is more piquant if we suppose they were not but that the spirit of the times required it. Do you think that would be a bad thing? Do you think an effort should have been made to mitigate the effects of their cancellation? Suppose they had all gone on to become rich successful men living happy lives, but unable to contribute to philosophy (they clearly had brains). Meanwhile other second-rate individuals formed the philosophical tradition, perhaps those most responsible for doing the cancelling. Nobody worries much about this, however, since the cancelled philosophers never had the chance to do their work and hence never became known as the great thinkers they could have been. Do you think this would be a tragedy for philosophy or just a negligible historical hiccup?

I have existed in a state of professional cancellation for over ten years now. Before that I had normal access to teaching positions, publishers, conferences, professional contacts, and so on. Not anymore. I contributed to philosophy continuously for nearly forty years. If I had been cancelled earlier, that would not have been possible—my teaching, publications, and professional activities would have been cut off even earlier than they actually were. Let’s suppose it happened before 1989 when I published “Can We Solve the Mind-Body Problem?”—that article would never have existed. My other books, articles, reviews, and lectures would never have existed. Do you think that would have been a bad thing or not? Do you think that if it had not happened, I would have continued to produce what I used to produce? Of course I would have. So, many things I would have done I have not been able to do—books written, talks given, students taught. All that has been expunged from intellectual history. That is the cost of cancellation (and it’s not just me). Do you think that is all fine and dandy? Have you been complicit in it? (I hope not.)

As it happens, I have not been completely silenced by cancellation, though certainly much muffled. Because I have this blog. If I did not, none of the ideas contained herein would have reached the minds of interested parties. It is sheer luck that I have this place to make my results public. There has been a concerted campaign to keep me from exercising my normal rights to communicate my ideas to others in the usual ways. I could have gone completely silent. I could have decided to admit defeat and simply given up thinking about philosophy and writing it. Then nothing of my thoughts over the last ten years would ever have entered the historical record. Those thoughts would never have existed in all probability. And don’t think I have never considered it—what is in it for me in the time and effort it takes to write these pieces? So, why do I write them? I could be having fun, playing tennis, kite surfing, throwing knives, making music, travelling, reading novels, living the life of Riley. I write them for you—for other people, for posterity, for the good of the human race. That’s why I do it. Fortunately, I am strong-willed and resilient enough to face down the cancellation, to disdain it, to rise above it. This is my gift to the world, my moral duty as a thinker. I believe these papers have intellectual value. I believe people benefit intellectually from reading them. I believe it would be a tragedy if I never bothered to produce them. That may sound immodest to some (“narcissistic”), but it seems to me the simple truth. Before cancellation I was a very successful and esteemed philosopher; that has not changed. If anything, I am a better philosopher than I used to be. I have not allowed my present cancelled status to deflect me from my intellectual calling. True, I write with bitterness in my heart, with anger and disgust, but I still do it. I couldn’t live with myself if I didn’t. I have never felt so altruistic in my life, so philanthropic, so generous (it’s not a particularly good feeling). Not a penny do I make from these writings, these uncounted hours of labor; not a promotion, not a pay increase, not even professional recognition. I do it because it is the best part of me, my God-given talent. I should be thanked for it, but of course stony silence is all I get from the American philosophy profession. Friends appreciate my efforts and so do many readers from across the globe who are not complicit in the cancellation (joyfully reveling in it, in fact). I have not allowed the cancellers and their enablers to rob the world of the products of my labor, as I could so easily have done. I have continued to contribute to philosophy, despite the attempts to prevent it, successful as they have undoubtedly been. This is no mere thought experiment.[1]

[1] I could name many names, recount many incidents, indict many conspirators; but I will refrain from doing so. I think the bare facts speak for themselves. I do wonder if the responsible individuals even think about what they have done and are still doing; or is that too difficult? Notice the silence.

On Writing

When I was a professor of philosophy working in a philosophy department, I used to jot down notes on ideas I had that seemed promising. I was too busy to write up the ideas properly, so I made the notes as reminders for later work. The exigencies of teaching had priority (a great benefit of AI will be mechanizing essay grading). Sometimes months or even years went by before I could get back to these nascent ideas. Upon retirement (if we can call it that) I had time to revisit some of these old notes and convert them into papers. It wasn’t always easy, memory being what it is. But when I had new ideas, which I often did, I had the unbelievable luxury of being able to write them up straight away. At first, I simply let the resulting essays accumulate on my hard drive, expecting to publish them in some form. But they quickly began to proliferate wildly and publication became more of an issue (other factors were also involved), so I decided to put them on my blog. That way they could get out there instead of just lying around in my house. Since I was writing at the rate of about two papers a week, the passage of time led to the production of a great deal of material. I haven’t counted what I have written in the last ten years but I think it is on the order of a thousand papers. That adds up to about ten substantial volumes—far too many for a university press. I now just think of my blog as where I publish. A bonus is that I get to write the way I want to write not the way editors want me to write (tediously at best). I’m also glad that my work goes out to the whole world not just to a limited number of English-speaking countries. According to my website (which I don’t run), the countries that visit my blog include the Philippines, Mexico, India, Pakistan, Norway, Sweden, Italy, and Germany—as well as America, Britain, Canada, and Australia (it varies from week to week). It’s like the Top Ten. I never intended to publish my work this way, but that is how it has turned out. What would I have done without the internet? I don’t even think of the journals anymore, or the university presses. It is a welcome freedom. I certainly like what I write more than I used to, because I am freed from academic conventions that impede good writing.

Correlational Semantics

I will describe some possible uses of correlational semantics. I don’t say I subscribe to these uses; I offer them as a gift to those with anti-realist or fictionalist yearnings in certain areas. It may help ease some discomfort caused by such yearnings. Let’s begin with a relatively simple case: feature-placing sentences like “It’s raining”. The utterance of such a sentence can be true yet contain no reference to the place at which it is raining—for example, London. The sentence is not semantically comparable to “London is rainy”. The word “it” is not a referential singular term denoting London, even though the truth of the utterance depends on the fact that rain is falling in London. There is a correlation between uttering “It’s raining” while in London and the statement “Rain is falling in London”, but nothing in the former denotes London. The former is true in virtue of the latter but it isn’t about what the latter is about in the sense that it contains a term that refers to what “London” refers to: the two sentences don’t mean the same thing even at the level of reference. The word “it” here is a kind of dummy subject expression, not a genuine referential term. Correlation is not the same as denotation, even though truth may depend on correlation. We may say that our sample sentence alludes to its correlate (it presupposes a particular place, typically known to the speaker), but it contains no term that denotes that place. We know that it rains at places (where else might it rain?) and we assume that that is what is going on in the present instance, but epistemics is not semantics. We might even say that our sentence connotes a place where it is raining while not denoting any such place.[1] Places belong in its semantic periphery, so to speak. Now consider fictional names like “Hamlet” as it may occur in a sentence like “Hamlet is a prince”. That sentence seems true (Hamlet isn’t a pauper or a porcupine): we can insert it into the formula “s is true if and only if p”. Yet Hamlet does not exist, so (we might say) the sentence ought not to be true. What is it that makes it true? The obvious answer is Shakespeare’s intentions: he decided to create a character named “Hamlet” who was a prince. The name “Hamlet” doesn’t denote Shakespeare or his intentions, but anything true of Hamlet is due to Shakespeare’s intentions. There is semantic correlation but not semantic denotation. And anyone familiar with fictional names understands this kind of dependence-without-denotation; it is part of our linguistic competence. Thus, statements containing fictional names are true in virtue of the author’s intentions, which are correlated with the name; but they don’t refer to such intentions—quite possibly they refer to nothing. The correlation explains the truth of the statement, but there is no denotation relation connecting the two. Truth and denotation come apart; the former doesn’t require the latter. It is therefore possible to maintain both that the statement is true and that it has no reference, because reference comes from elsewhere, in the shape of a correlated statement that does refer. The connoted (not denoted) statement does the truth-conferring work, leaving the original statement to luxuriate in its non-referential indolence. We can combine fiction with truth by availing ourselves of correlational semantics. Otherwise, we are saddled with truth without reference, or no truth at all.
Hamlet doesn’t exist, but statements about him can still be true, because “Hamlet” is correlated with Shakespeare’s intentions, which do exist. Thus, anti-realism does not imply lack of truth (or weird kinds of truth). The anti-realist does not have to deny the obvious.
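
For readers who like the pattern displayed, the bare structure of the proposal can be put schematically as follows. The notation is mine, offered only as a sketch of the idea just described, not as a formalism the essay itself commits to: Corr(S, S*) says that S* is the referring statement correlated with (an utterance of) S, and T is the truth predicate.

```latex
% Schematic only: truth via correlation rather than denotation (my notation).
\[
  T(S) \;\longleftrightarrow\; \exists S^{*}\,\bigl(\mathrm{Corr}(S, S^{*}) \wedge T(S^{*})\bigr)
\]
% Instance: S = ``It's raining'' (uttered in London), S* = ``Rain is falling in London''.
% Nothing in S denotes London, yet T(S) holds because T(S*) does; likewise
% S = ``Hamlet is a prince'', with S* a statement about Shakespeare's intentions.
```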

Now that the form and point of correlational semantics are made clear, we can extend it to other areas. Some philosophers (the “positivists”) have maintained that theoretical entities are mere fictions: but then how can statements about them be true? Correlational semantics supplies an answer: these statements are correlated with other statements about existent non-theoretical entities such as experimental results, sense-data, retinal stimulations, or what have you. These entities confer truth even when the statement correlated with them refers to nothing real. Correlation steps in where denotation fails. Statements about electrons, say, can be true even if there are no electrons, because they are made true by non-electrons—non-existence is no bar to truth. We simply detach truth from reference and existence; and, indeed, there is nothing in the concept of truth itself to compel a rigid connection, because truth simply requires correspondence (or the possibility of disquotation) not reference and existence. No sentence is true but reality makes it so, but the sentence need not denote this reality. We can have a correspondence or redundancy theory of truth without building reference into the sentence declared true; for we can appeal to correlated sentences or states of affairs. The concept of truth itself doesn’t even require a referential structure (grammar). Truth is more general than reference, more capacious. Similarly, it can be maintained that folk psychology refers to nothing real and yet its propositions are true, since they are made true by propositions about the brain.[2] It is true that I believe that snow is white even though there are no such things as beliefs, because that statement is made true by the condition of my brain as it controls my behavior. Thus, one can consistently be a mental eliminativist and also ascribe truth to mental propositions—rather like the anti-realist about fictional characters. For there is something other than mental reality that can confer truth on such propositions; and surely it is true that I believe that snow is white (even though that phrase refers to nothing, according to the eliminativist). Correlational semantics allows you to have it both ways (if both ways appeal to you). Correlation not reference; connotation not denotation.

I suspect the philosophers best disposed towards correlational semantics will be ethical expressivists. Their official view is that ethical sentences do not report facts, are not true, and cannot be said to be known; there are no values “in the world”. Or better put: since there are no values in the world, ethical utterances cannot be true, even though they appear to be true. Wouldn’t it be better to accept that they are true but that they don’t denote values-in-the-world? That is what correlational semantics allows: expressivism combined with truth. Suppose an experience of pleasure occurs and someone remarks “That’s good”: the expressivist says that no evaluative property is thereby ascribed to the pleasure, since there are no such properties; instead, the utterance is like an outburst of emotion or an order to act in a certain way. But there is an alternative story: the utterance is true in virtue of the existence of the pleasure but no property is ascribed by the word “good”. The word isn’t even a predicate (in some versions of the doctrine). The sentence is not true in virtue of a denoted evaluative property but in virtue of the correlated pleasure property—correlation not denotation. The world does contain pleasure and it makes such propositions true, but evaluative discourse does not refer to this property. The evaluative force comes from the attitude expressed, not from the state of affairs that makes the sentence true. Thus, truth is compatible with expressivism concerning value; we are not forced to reject ethical truth just because ethical propositions don’t refer to ethical properties. Put differently, the truth-makers belong to the supervenience base, not to the supervening values (which are add-ons derived from human psychology). Again, I am not saying I accept this position, only that it exists in logical space and has attractions for a convinced expressivist, particularly one reluctant to withhold truth-values from ethical statements. Something objective is involved in making ethical statements true, but it isn’t the denotation of ethical words; rather, it is what those words connote in the way of correlation. We know these correlations, so we accept the truth of what is said; but this doesn’t entail that the correlated facts are denoted by moral terms—they are not. Accordingly, we get to be moral anti-realists and accept objective ethical truth-makers. Everyone is happy (well, not everyone). Some words don’t denote anything real, but that doesn’t stop them appearing in true statements, thanks to suitable correlated realities. Only referentially empty sentences with no correlates, such as ungrammatical or meaningless sentences, will turn out not to be true, or to be incapable of truth. The sentence jumble “It thing number gone” is not true and has no true correlate; nor does “All mimsy were the borogoves”; nor “Colorless green ideas sleep furiously”. But it is possible for a sentence to contain empty terms (no denotation) and yet be perfectly true; it has, we might say, parasitic truth. Correlational semantics is designed to accommodate these cases. We must give up the dogma of denotational truth.[3]

[1] See my “On Denoting and Connoting”.

[2] See my “Semantical Considerations on Mental Language” and “Ontology of Mind”.

[3] There are other reasons we might want to give up the dogma of denotational truth: a devotion to hard redundancy theories, adherence to coherence or pragmatic theories, skepticism about the notion of reference (itself having several sources). The point I would emphasize is that nothing in the concept of truth analytically implies that true sentences or propositions must have a referential semantics: truth does not require referential relations between sentence parts and whatever in the world makes the sentence true. In principle, a sentence could have no such structure and still be true. Some linguists and others have toyed with the idea of a pre-referential level of meaning in the child’s understanding of language (“RED!”, “COW!”, “MAMA!”); such a level would not preclude application of the concept of truth.


Are We Animals?

I am interested in the concept animal, its analysis and role in our thinking and acting. I am also interested in the use of the word “animal”, its denotation, connotation, conversational implicatures, psychology, and sociology. These interests have a bearing on the ethical treatment of animals and on the nature of human intelligence. For most of human history it would have been denied that we are animals (“beasts”), mainly for religious reasons, but Darwin initiated a movement that denies this denial. For all intents and purposes, we are rightly classified as animals, though we do not always talk that way (ordinary language has not kept up with biology). The reasons for this classification are threefold: we are one species among others; we evolved from animals; we are similar to animals physiologically. How could these things be true and we not be animals, though no doubt exceptional animals? We are not plants and we are not gods; we are animals like other animals. That is our similarity-class. That is our taxonomic category. I take it this would now be generally accepted, if not warmly welcomed. But it would be fair to say that it has not completely sunk in; the culture has not fully absorbed it. Consider the following sentences: “I am an animal”, “You are an animal”, “She is an animal”, “Queen Elizabeth II was an animal”, “Jesus Christ was an animal”. These may all be literally true, but their connotations and implicatures prohibit their utterance in normal circumstances—they may be regarded as insulting, impolite, blasphemous. We don’t like the sound of them. They suggest dirty habits, aggressiveness, hairiness, lack of intelligence. They sound degrading. Why? I think it is because we have three main characteristics that set us apart from other animals: we live in houses, we wear clothes, and we speak. We do not go unclothed, live in the wild, or lack speech. To this the obvious reply is that we are animals distinguished by the possession of these traits—we are exceptional animals, but still animals. Whales are also exceptional animals, but still animals. If they think of themselves in relation to other species (and I don’t doubt that they do), they probably regard themselves as a cut above, not as just another species, not just animals. They don’t like to be classified as belonging to the same group as those creatures (“beasts”). They don’t care for the association—just as we don’t. We don’t like the label. But both species have to admit that their kinship with other creatures justifies using the same term to cover them. We are animals reluctant to be called “animals”.

The OED provides an instructive, if not entirely satisfactory, definition of “animal” that is unusually long: “a living organism which is typically distinguished from a plant by feeding on organic matter, having specialized sense organs and nervous system, and being able to move about and to respond rapidly to stimuli”, adding the codicil “a mammal, as opposed to a bird, reptile, fish, or insect”. The word “typically” is inserted to prevent counterexamples involving slow animals and fast plants, or the possibility of plants with eyes or ears, or insect-eating plants. Also, in their zeal to distinguish animals from plants, the authors fail to provide a sufficient condition that distinguishes animals from gods or other supernatural beings (surely gods can eat, have sense organs, and can respond to stimuli). The codicil is interesting but puzzling: are the authors supposing that only mammals are animals? That is not zoologically orthodox, and I would say plainly false, but it is not merely bizarre, because we don’t tend to use the word “animal” in application to these zoological groups. Why is this? I think there are two sorts of reasons: reptiles, fish, and insects are cold-blooded; and birds resemble us in important respects, at least so far as folk zoology is concerned. Being cold-blooded sets some animals apart from the warm-blooded ones, so that we need a subdivision in the total class of animals; we thus avoid calling the cold-blooded animals by that name, while not denying that they are animals. In the case of birds, we recognize three features that bring them close to us: they build nests and live in them; they sing; and they have attractive plumage, rather like clothes. It is therefore felt to be demeaning to call them animals, as it is similarly felt to be demeaning in our own case. We are both animals with a difference, superior to other animals, supposedly (we are very fond of birds).

It is the human body that invites the appellation “animal”—its similarity to animal bodies generally. Our anatomy and physiology resemble those of animals already so called. Clearly, our bodies derive from earlier animal bodies; we might well have been prepared to accept, even before Darwin came along, that we have an “animal body” (but a non-animal soul). Hence the body is deemed a source of degradation, shame, mortality. It is not the human mind that encourages calling us animals; it isn’t flamboyantly animal in nature. If we didn’t have animal bodies, but supernatural or robot bodies, we would not describe ourselves as animals like other animals. Is it correct to say that we have animal minds? That is not such an easy question: for our minds are not indelibly imprinted with animal characteristics. On the one hand, we have minds far superior to any animal mind in certain respects: art, science, technology, music, literature, courtly love. On the other hand, our minds are in part clearly shaped by our animal body: hunger, thirst, fear, pain, lust. What an animal wants and feels is a function of its body type. Perhaps our psychological kinship with other animals might tip us off to an affinity with them and suggest continuity, but it is not as salient as the bodily kinship. It would not fly in the face of the facts to say that we don’t have an animal mind, though we do have an animal body; at most our mind is partly an animal mind (while our body is completely an animal body). This would justify the protest that I am not (completely) an animal, because my mind transcends anything in the animal world (my body, however, is stuck in that world). The correct form of statement would then be “The Queen’s body is wholly animal but her mind is only partly animal”. Does that sound a bit less discourteous?

How does all this bear on the two issues I mentioned at the beginning? First, it is difficult to defend speciesism once we humans are declared animals too; there is then no sharp moral line between us and the animals we mistreat. It doesn’t sound terribly convincing to say that we have a right to abuse other animals but not the animals we are. Why should we be treated with kid gloves if we can abuse and exploit our animal kin? “All animals are created equal” should be the maxim of the day. Mere species difference shouldn’t trump animal continuity. Second, full recognition that we are animals too, derived from other animals, should undermine claims of unlimited intellectual capacity: for animals are not generally omniscient. True, our minds are superior to other animal minds (in certain respects), but they are still the minds of animals. We should therefore expect cognitive limits. The general lesson I would urge is that the word “animal” must cease to have negative connotations, so that no unease is produced by calling queens (and kings) animals. We should be able to say “Your royal animal highness” and not be accused of offenses against the monarchy.[1]

[1] A little anecdote may shed light on the origins of this paper. The other day I was feeding my pet tortoise and I noticed its tongue as it ate. It was small and pink, remarkably like a human tongue. I reflected: I am an animal too, just like you. I don’t think this is an easy thought to have, given the chasm we tend to set up. I wonder if any animal really thinks of itself as an animal. It doesn’t seem like something to be proud of (unlike species identity: does any animal feel itself to belong to an inferior species?). I myself am quite happy to call my tortoise my biological brother.
