Digestion, Learning, and Dreams

The word “digestion” (or “digest”) is apt to suggest to modern ears the process by which food is converted to bodily tissue. But it also has another meaning: the process by which information received by the senses is converted into knowledge—as in “That’s a lot (of information) to digest”. The dictionary duly records both meanings: “break down (food) in the stomach and intestines into substances that can be used by the body”; and “understand or assimilate (information) by reflection” (Concise OED). For the latter we also have: “Settle or arrange methodically in the mind; consider, ponder”, and “comprehend and assimilate mentally; obtain mental nourishment from” (Shorter OED). These linguistic facts prompt a conjecture, namely that the two processes are significantly similar. Information digestion is like food digestion (and vice versa). The stomach digests food and the brain digests information: the brain is the “stomach of the mind” (nice slogan). Lest we suppose that the physical use of the word is primary, we should note that the word “digest” comes from the Latin word digesta meaning “matters methodically arranged”. Thus, the mental use is etymologically primary and very probably precedes the physical use: people knew the mind digests information before they discovered that the body digests food (it was a significant discovery that what goes in at one end comes out at the other). We all know that perceptions become memories, but it isn’t obvious a priori that food produces feces. How should we characterize what is in common between the two processes? Reduction, conversion, and assimilation: the original input is reduced (condensed, concentrated), transformed into something different (a type of metamorphosis), and integrated into the substance of the organism (body or mind). There is a sequence of stages whereby the material taken in is incorporated into the life of the organism, physical or mental. Tissues or memories. 
It isn’t just a transfer of stuff from one location to another; there is an active process (a procedure) of breaking down, selection, and eventual incorporation. Some of what comes in is discarded as unhelpful, unnecessary, dispensable. Some is nutritious but some isn’t. The input is operated on with a goal “in mind”: to benefit the organism as it makes its way in the world. Junk is eliminated; only the good stuff remains. The organism digests the world in two ways: physically or mentally. The stomach is the brain of the digestive body, while the brain is the stomach of the digestive mind. These are the main organs whose job it is to render usable the various inputs to the organism, whether in the form of food or information. We may thus speak of “somatic digestion” and “cognitive digestion”. Both deserve the label and both fall under the same general conception. Animals are digestive creatures by nature, and have been naturally selected accordingly.[1] Digestion itself is a selective process, sorting the wheat from the chaff—what will help the genes or hinder them (to adopt the genes-eye view of evolution). The typical animal needs both in order to survive and reproduce—physical nutrition and mental nutrition. Human life, in particular, is digestive life—whether eating or learning, dining or being educated. This principle unifies the functions of the animal in question. To give a salient example: the child needs to digest food in order to grow in size, but she also needs to digest auditory information in order to grow linguistically. The adult body is a result of food digestion; the adult mind is a result of information digestion. Call this biological naturalism if you like; I would not spurn the label.

We should not draw too sharp a line between the two types of digestion (both properly so called). The digestion of information proceeds via physical inputs to the senses and results in tissue rearrangements in the brain (it may even promote brain growth). Energy is taken in and transformed. The digestion of food is a “smart” operation—we may speak of the intelligent stomach. The stomach (and the rest of the gut) has to make “decisions”—what to pass on, what to reject, what kind of secretions to authorize. Not for nothing do gastroenterologists speak of the “second brain” located in the gut.[2] The system is informational not merely “mechanical”. The early stages of food digestion explicitly involve conscious decisions about what to eat (ditto for the output end). The mind is “physical” as the gut is “mental”. Let’s not erect a dualism of digestive apparatus (machines versus soul, etc.). Sense organs and mouths are not that different, abstractly considered. Food may be “raw” as sense data may also be (the “given”). We ingest food and we ingest information—we “internalize” them both. We can find food indigestible, but we can also find that too much information (or the wrong kind) is similarly indigestible (“It’s going to take a while to digest all this”). Reading is a type of grazing (or surfing the Web); eating can be educational. There is junk food but also junk information (sic). There are things we would rather not eat and things we would rather not know. Food may be found disgusting, but so may reported facts. Proust’s madeleine was both gustatory and psychological—food eaten and experience remembered. You are what you eat and also what you learn. Your mind is the result of experiences digested (along with innate mechanisms) and your body is the result of foodstuffs digested (along with innate mechanisms). Memories are the tissues of the self, as tissues are the memories of what has been consumed. 
We consume the exogenous in order to build the endogenous, thus constructing the whole animal. Ultimately, it’s all energy transfer in the service of life promotion. We have greedy eyes as well as greedy mouths. Absorption is the name of the game. Plants digest light via photosynthesis: this is the prototype of all life, from gut to brain. Food indeed comes from light (via the intermediary of plants), but so does visual information. Animals consume light in two ways. We eat the Sun—with our mouths and eyes. We can imagine creatures that build tissue by absorbing the energy contained in visible light, and also creatures that learn about the world by eating it. There could be light (=photon) eaters and food learners (they derive their science from information conveyed by food). In fact, terrestrial life is already a bit like that. We may be hungry for knowledge, starved of an education, or informationally overstuffed. Then there is the old saw “food for thought”—information received that prompts reflection. If you become a chef, most of the information you digest is about food—that you also digest. Your brain is then full of digested knowledge about what your body has digested. School lessons are school lunches, so they had better be tasty and nutritious. The two concepts run naturally together.

Are you getting the hang of the digestive theory of learning? Have you digested its motivations and point? I hope no dyspepsia has resulted (as opposed to euphoric eupepsia). Because we are now going to go rogue—we are going to the outer limits of advanced digestive science (“digestology”). First, we will consider cooking and teaching; then we will turn to the doozy of dreams. Cooking is a preliminary to mastication—what you do to food before you get your teeth into it. It is really extended digestion: doing with the hands and fire what your mouth does with teeth and saliva and your stomach with bile. Food preparation is extra-bodily digestion. You might as well be spitting into the food before you put it in your mouth (that’s what gravy is all about). Now the point I want to make is that teaching is much the same: it processes information into a form that is easier for the student to digest. Preparing a lecture is like preparing a meal—acting on the material so as to make it more palatable. Lecturing is putting the stuff into the student’s mental mouth and hoping he or she will swallow it, even enjoy it. You cut the material into bite-sized pieces, trying not to give the students more than they can chew on; perhaps you garnish it with humor; you don’t make them consume junk. An assertion is a morsel of food, lovingly prepared: the speech act is a culinary act. You can be a good intellectual cook or a lousy one. A good lesson is like a well-balanced meal (I myself like to cook and teach). Both are facilitations of consumption. I like to eat while watching Jeopardy, as I fortify body and mind (the latter is light eating). Teaching consists of assertions prepared for mental consumption; it takes skill and judgment. You have to know your audience—their culinary and pedagogical preferences. Don’t overegg the pudding; don’t undercook the chicken. Leave them satisfied but wanting more. Be a good intellectual chef. Feed their brains nutritious cognitive food. 
Introduce them to new tastes. There is no need to let them see your kitchen (that can be a mess), but remember that the lecture hall is a restaurant—of the mind. The teacher is part of the student’s extended digestive process of learning about the world.

I promised you dreams. Then dreams you shall have. It has often been wondered what the function of dreams is, biologically speaking. They seem pointless, unhelpful, yet people suffer when they are disrupted (animals too). Can the digestive theory of learning shed any light on the matter? Dreams often reflect past experience, sometimes with a delay built in; they contain remnants of waking life. They have also been supposed to play some role in information processing—perhaps leading to problem-solving and creativity. Yet they seem to be junk for the most part, just random noise, and not even very pleasant. Are you thinking what I’m thinking? That’s right: dreams are the feces of the mind! They are by-products of cognitive (and affective) digestion—what is left over when the digestive mental process has done its work. That process is selective, retaining and discarding; it takes in more than is required, because useful knowledge doesn’t come pre-packaged. It is a complex process, carried out unconsciously, and shrouded in mystery. Surely it will leave some stuff behind—similar to what came in initially and similar to the final end-state (knowledge, memory, emotional equilibrium), but also significantly different. Feces are like that—recognizable yet transformed. They bear the marks of their history (books, in effect). Likewise, cognitive digestion will have its waste products. Well, dreams are like that: they are natural by-products of the selective and reducing process that converts raw information into settled useful knowledge. People sometimes speak of “brain farts”, meaning random products of rational brain activity; dreams are like that—mental flatulence produced by the arduous process of information digestion. You might reply that feces and dreams differ in an important respect, namely that feces are just (as it were) crap while dreams at least have some point or meaning—they aren’t just pure shit. 
But we must not disrespect the excremental: it too isn’t pure crap. Feces serve to remove not just food waste (like industrial waste); they also convey dead cells and dead bacteria. They aid the health of the body, especially the intestines. They do some good; they aren’t just useless gunge. Shit isn’t just shit, oh no. It may not be manna from heaven, but it isn’t utterly devoid of meaning and purpose. So, it is closer to dreams than a more disrespectful attitude would suggest. Neither dreams nor feces are the animal at its most sublime and practical, but they are not as worthless as might be supposed. How and why dreams get to be as they are is hard to say, but it doesn’t sound wide of the mark to think of them as inevitable by-products of the cognitive digestive process (possibly not only that). The brain dreams of what it discards as it goes about its work of knowledge absorption. Feces consist of pulverized food, infused with this and that; dreams consist of pulverized experience, infused with this and that (sexual desire, according to Freud, primordial myths according to Jung). Dreams are pretty mysterious content-wise, but functionally they seem like bits of mental debris left behind by a more vital function. Given the digestive theory of learning, it is not implausible to see them as at least analogous to feces and farts. They are generally disagreeable like them, though they have their pleasurable aspects (as Freud insisted, exaggerating tremendously).

I cannot resist mentioning another provocative (repellent) application of the digestive theory of the learning mind. It seems that consciousness plays a role in the learning process, obscure though that role may be: some types of learning require it, or at least proceed better with it (e.g., book learning). Is there any analogue in the bodily digestive realm? Consciousness is supplied by the organism not by the external world—it comes from internal physiology not from the environment. What does that remind you of? Saliva, of course. Saliva mixes with food and makes it easier to swallow and digest; it’s hard to eat without it. Consciousness mixes with sensory data (nerve stimulations and whatnot) and makes cognitive digestion easier; it’s hard to learn without it (though not impossible). So, consciousness plays a role like that of saliva. Again, let us not disrespect the salivatory—it plays a vital role in eating, which we normally take for granted (if it dries up you are in big trouble). Nor is saliva simple—its evolution would make a great story. I do not wish to downgrade consciousness by comparing it to saliva; on the contrary, the comparison makes us acknowledge the necessity of consciousness even more. There is nothing epiphenomenal about saliva! Once consciousness has done its early work, the ingested information makes its way unconsciously through the digestive system, eventually becoming an item of settled knowledge. Saliva presents food to the gustatory digestive system, if I may put it so, and consciousness presents sensory stimuli to the cognitive digestive system. These are precursors to the further work of rendering the data suitable for cognitive storage or food suitable for tissue augmentation. Consciousness is the saliva of the mind, so to speak. It makes cognitive digestion possible (or greatly facilitates it). 
If we think of the mind as (partly) a device for internalizing the external world[3], with consciousness as a component of this device, then it is natural to view it as essentially digestive, in the same sense (but not the same way) in which the stomach and bowels form components of a digestive system. Not a telephone exchange or a computer but a belly (with suitable appendages): that is the image to take away from this leaping discussion.[4]

[1] It would be wonderful if we could argue that cognitive digestion descended from somatic digestion, as a deployment of the basic digestive plan. A mutation of a preexisting gene complex for somatic digestion led to genes for cognitive digestion. But it is hard to see how such an evolutionary story would go. To be sure, the digestive system is an adaptation that has been around for a very long time, so there has been time enough to give rise to a cognitive counterpart: but what kind of modification could lead from the former to the latter? Perhaps the abstract structure of somatic digestion could be repurposed for cognitive digestion by some amazing genetic fluke. Anyway, the idea is worth exploring.

[2] See Michael Gershon, The Second Brain (1998).

[3] See my “What the Mind Does: Internalization and Externalization”.

[4] I am tempted to comment on the content and tone of this discussion—I am well aware of its peculiarity and perlocutionary effect. But on second thoughts I don’t think I will. I leave that to the intelligence of my reader.

Defining Time

Can time be defined? Einstein and Bergson had an argument about this: Einstein claimed to define time by clocks (“time pieces”), i.e., by physical objects of a certain type; Bergson preferred to define time by means of consciousness of time (“subjective duration”). Time exists by virtue of clocks, natural and artificial, for Einstein; or it exists by virtue of human experience, for Bergson. Who is right? Neither, in my view. Both are wrong, and for the same basic reason: they both try to define time in terms of human knowledge. Einstein uses our means of measuring time—periodic processes like the earth’s rotation or oscillations in a quartz crystal[1]; Bergson uses our “lived experience” of the passage of time. One method is physical, the other phenomenological. Both are epistemological, either external or internal. There is an obvious problem with this method: clocks and consciousness are both in time. They are subject to time, existing in time, temporal phenomena. How then can they define time? Any such definition is circular. It’s like trying to define the time of Australia by the time of Denmark: both are instances of time. You might as well define the time of clocks or experiences in terms of the time of what they measure or are experiences of. How long does it take for the clock hand to go all the way round once? How long does it take to see the sun set? These are events in time, like the sun traversing the sky or an eclipse. And how do we define the time of these events—clock movements and experience durations? Not by means of further clocks or conscious experiences—that way infinite regress lies. Moreover, it is clearly wrong to suppose that time depends on clocks or experiences: time would exist even if no clocks or experiences did. In fact, clocks and experiences presuppose time—they could not exist without it. The dependence goes the other way. There had to be time before clocks and experiences or else these things could not exist. 
Human knowledge of time presupposes that time already exists, because knowledge is a temporal matter—observing clocks, listening to a symphony. It is thus impossible to reduce time to knowledge of time, since knowledge is a temporal affair. It’s like trying to reduce space to measuring rods or awareness of space: measuring rods exist in space, as do awareness-producing brains. This is tantamount to trying to reduce space to one instance of it, as if the space of a football field can be defined by the space of a foot ruler. Circular! Also, very implausible, since football fields can exist in space in the absence of foot rulers (or other measuring devices) and perceptual experiences. This is just crass verificationism, leading inevitably to idealism. That’s why I say Einstein and Bergson are both wrong.

You reply: but how else are we to define time? How indeed. The fact is no other method suggests itself. Our conception of time is ineluctably anthropocentric. But that doesn’t show that time itself is anthropocentric, intrinsically, constitutively. Clocks and consciousness reveal time only partially, if at all; and its appearance to us may not tell us much about what it is in itself. We cannot picture it, imagine it, compare it to anything else. Efforts to reduce it to space are obviously futile. Empiricism is defeated by time. Rationalism can only take us so far—the mathematics of time not its concrete (sic) being. Let’s face it: we are pretty effing blank about the nature of time. We have some idea of its structure but not of its substance (and the notion of substance seems singularly inappropriate). Time flies too low for us to see it; or too high—way out of sight. It is everywhere but nowhere, a condition of perception but imperceptible. By all means employ a notion of time appropriate to the purpose at hand (physics, phenomenology), but don’t think you have got hold of the thing in itself. Time isn’t even a mystery in the usual sense, because we have so little handle on it to begin with—unlike consciousness or matter. Precisely what is mysterious? Time, you say—but what is that? We can’t even properly specify the thing that is so inscrutable—we can only gesture, waffle, then fall silent. Time reduces us to inarticulacy. It isn’t something that we know very well but can’t explain; it is hardly known at all, save abstractly. We are not acquainted with it—like redness or pain or shape. Nor is our ignorance of it like our ignorance of anything else; even the word “ignorance” fails to catch its degree of elusiveness (we know much more about God or black holes or dark matter or the universe before the big bang). 
True, I know what time it is and how long it takes to boil an egg, but I haven’t the foggiest idea what these words really mean—what I am referring to with them. Do I ever really refer to time as I refer to people and places—or is this an overgeneralization of the concept of reference? Perhaps I just obliquely allude to time, or attempt to intimate it, or vaguely hint at it. The philosophy of time is a philosophy of something I can’t even specify. Hence all the truly pathetic attempts to pin it down: rivers, arrows, fathers, sands, valleys, waterfalls, storms (okay, I made up the last three). We don’t even have any good metaphors! We just have irksome bafflement, visceral vacuity, intellectual scotoma. We can only bear to think about it every now and then; or else madness threatens. The topic is infuriating, excruciating, and ultimately empty. It’s a problem about nothing, or nothing we can put our finger on. That is why it invites the kind of treatment proposed by Einstein and Bergson—at least we know what clocks and consciousness are. We are not stranded in an intellectual desert, featureless, arid. We can lean into the mirages that time produces in us. Time isn’t just indefinable; it is incognizable.[2]

[1] Do we ever really measure time? It is tempting to see our clocks as analogous to measuring rods, but there is an important difference: we lay measuring rods next to the things we measure with them, but we don’t lay our clocks next to time. We suppose there is a correlation between clock movements and temporal intervals, but we never observe this correlation; it is more a matter of faith. This is why we can entertain such possibilities as that time may speed up while clocks fail to register this fact—the sequence of ticks doesn’t track the actual passage of time. Our clocks are really substitutes for direct observation of time’s passage. The concept of measurement seems overoptimistic.

[2] Time has been a concern of poetry, precisely because it refuses to yield to more “scientific” treatment. But even the poets are defeated by the topic, managing only to lament its passage or rue its authority. The topic of death is never far from the topic of time. Analytic philosophy keeps a safe distance from it or bends it into something more tractable—or else you end up writing as I have just done. Lyrically, pretentiously, despairingly—hardly the stuff of a passable PhD or an article in Analysis. And yet I do have the feeling that time may one day be penetrated, laid bare, by some stroke of genius. It really ought not to be so impenetrable. What prevents us knowing about it? It doesn’t hide behind an opaque screen…

Degrees of Intention

Does intending come in degrees? The question seems odd and to admit of only one answer—it does not. We don’t talk this way (“I half intend to drink a beer”, “Do you strongly intend to go to the shops?”). Intending is like knowing: you don’t weakly or partially know a fact—you either know it or you don’t. Both are like ordinary physical states of affairs: things don’t insipidly fall, nor are they wholeheartedly square. When you intend to do something, you are committed to doing it, no ifs or buts; intending doesn’t come on a scale of intensity. It isn’t like pain or pleasure, which do come in degrees, as ordinary language attests and introspection reveals. Intending is an all or nothing thing. So it seems anyway. Yet a doubt can be raised and a puzzle produced. For belief and desire do come in degrees, and intention is intimately bound up with them. I might weakly desire to drink a beer and strongly believe there is no beer in the vicinity, so I refrain from seeking out beer (too much effort, too little reward). Or I might have a passion for pineapple and feel very convinced there are pineapples hereabouts; I head for the nearest pineapple, eagerly chomping it down. Do I have the same measure of intention in both cases? Don’t I weakly intend to look for a beer but strongly intend to apprehend a pineapple? Isn’t that what we would expect given my beliefs and desires? Intentions are a product of beliefs and desires, so shouldn’t the former inherit the properties of the latter? Shouldn’t gradation characterize both? It looks as if my reasons differ in strength in the two cases, so intentions ought to. What if we introduced a word “intending*” that is stipulated to mean “volitional mental state supervenient on beliefs and desires”—wouldn’t that designate the same mental state as “intention”? If intentions were constituted by beliefs and desires, they would certainly have all the properties of beliefs and desires, including the property of being graded. 
Does ordinary intuition refute such a thesis? Maybe we need to revise our intuitions in the light of the intimate connection between intentions and beliefs and desires (reasons). We also know that it is not possible to intend what you believe to be impossible, so shouldn’t very weak belief in the possibility of the action reduce the strength of the corresponding intention? Ditto for desire: we can’t intend what we have no desire to do (taking desire in the widest sense), so shouldn’t intention be attenuated by a weakening of desire, vanishing when desire does? From a theoretical point of view, degrees of intention make sense, given the psychological antecedents of intention; we really should view intention in that way. Perhaps this is one of those cases in which ordinary language is misleading. So, after all, intentions do come in degrees of intensity, right?

Still, the contrary intuition is stubborn, and not to be put down to mere conversational implicature. It isn’t just that saying “I somewhat intend to go to the gym today” conversationally implies that I probably won’t go; it seems to be in the very nature of intention that it knows no degrees. It clearly isn’t the same as belief and desire in the respect in question: what would a feeble lukewarm intention even feel like? Compare decision: what does it mean to say that someone only weakly decided to go swimming? Isn’t decision inherently all or nothing? You don’t “make up your mind” only partially or to some degree. We thus seem pulled in two directions. Why is this? I think I know why: it is because of the nature of action. Intention mediates between reasons and actions, with reasons (beliefs and desires) admitting of degree: but actions don’t admit of degree. You either do it or you don’t. There are no degrees of drinking a (whole) beer, as there are no degrees of slipping on a banana peel or splitting the atom. It happens or it doesn’t. Reaching your thirtieth birthday isn’t a matter of degree, and neither is banging a drum thirteen times. Actions are all or nothing, like events in general. They are not like beliefs and desires (or pleasure and pain): they don’t start from zero and then ascend upwards in intensity. You can’t measure their strength on a scale. But intentions lead into actions, partaking of their black and whiteness. Actions are an on-off matter, and so are intentions, since intentions are intentions to act. Intentions thus begin in mental matters of degree and culminate in sharply defined events. They have a foot in both camps; they face in two directions (à la Janus). Intentions are mongrel, hybrid, mixed up. This explains our uncertainty about their status: we can look at them from two angles, seeing them in a different light from each angle. 
Now they look like things with gradation built into them; now they look like things that know only on and off. As effects of beliefs and desires, they vary in intensity; as causes of action, they come in only two varieties, operative or inoperative. This makes them conceptually peculiar, even puzzling. How can they be both graded and ungraded, continuous and discrete? They seem unclassifiable. They seem mysterious, steeped in ambiguity. I think they have always seemed elusive to philosophers (and psychologists), which is why they are generally passed over in favor of beliefs and desires or overt actions. It seems less mysterious to talk of intentional actions than of intentions per se. This is not an oversight but a principled policy (possibly unconscious). Intentions really are hard to understand. We can’t even decide whether they come in degrees! Perhaps the concept should be split into two to reflect the nature of the items designated: intention1 is the upshot of beliefs and desires; intention2 is the immediate trigger of actions. Intentions1 come in degrees of forcefulness; intentions2 either operate or don’t operate, and don’t admit of degree. As the intending process nears the point of action it loses its variability and hardens into a simple on-off switch. It ceases to come in degrees and stiffens into a rigid rule. Instead of sliding up and down a scale it settles on a fixed value that tolerates no uncertainty. It solidifies into action.[1]

[1] The resort to physical metaphor is entirely predictable: we don’t know what we are talking about so we take refuge in the nearest metaphor to hand (not that this is a bad thing as long as it is recognized for what it is). Intentions are among the most inscrutable things in the mind.

A Puzzle about Desire (and Intention etc.)

In “A Puzzle about Belief” Kripke introduces his puzzle about belief as a puzzle about belief—specifically, the behavior of names in belief contexts. I will contend that it is not a puzzle about belief specifically, nor about names specifically; that is just one version of the underlying puzzle. Kripke’s protagonist Pierre first learns about London (“Londres”) from a book, later visiting that city and learning English. He forms contradictory beliefs about the attractiveness of London from these two sources. But he could have formed these contradictory beliefs from the same kind of source: he could have read two books, one in English and one in French; or he could have paid two visits to different parts of the city. The puzzle has nothing essentially to do with testimony-based belief and observation-based belief (not that Kripke says it does). For simplicity, let’s assume the beliefs are formed from reading two books each ascribing different properties to London. Suppose Pierre forms the desire to visit London (“Londres”) by thus reading about it, but that he also forms the desire not to visit London by reading another book in English (the first book talks mainly about Kensington, the second about Hackney). He desires to visit London and he desires not to visit London. The same kind of disquotation principle applies to desire as to belief (“I hate London”, “J’aime Londres”). Accordingly, Pierre has contradictory desires. The same goes for intention, obviously enough: Pierre may intend to visit London and intend not to visit London, depending on the information he acquires. The names “London” and “Londres” feature in his vocabulary and they can generate the same result as they do for belief. It isn’t the concept of belief that gives rise to the puzzle; it’s the way names interact with propositional attitudes in general (but see below).

Does the puzzle arise only in the case of names? Apparently not: we can generate the same kind of puzzle using demonstratives or pronouns. Pierre may express himself by saying “That city is attractive” (in French) and “That city is not attractive” (in English), referring to London both times, while not knowing this. Indeed, we can get the same result even if he is monolingual. We can copy his linguistic preference by reporting him as believing that that city is attractive and also that that city is not attractive, unknowingly pointing at the same city twice (same for “it”). So, names are not essential to the puzzle either. Nor need reference to a particular object be part of the story: Pierre could have contradictory beliefs about a natural kind (e.g., water) or even about a physical magnitude (e.g., a mile). All he needs is two words in different languages (or the same language) associated with different bodies of information. So, it is not strictly accurate to say, as Kripke does, that the puzzle concerns “the behavior of names in belief contexts”: it is more general than that both with respect to belief and names—better to say, propositional attitudes and reference more generally. Nothing specific to belief or names is raised by the underlying puzzle.

Can the puzzle be generalized even further? Is language even necessary? I think not: Pierre could wander around a district of London one day and think that the city of which it is a part (“this city”) is attractive, while the next day forming the opposite belief while wandering around a different part—not realizing he is in the same city. Similarly for desire—he desires to stay in the first city but not the second, as he would put it. He need not express his beliefs in a public language, simply forming them without speaking. He need not even be able to speak, having never learned a language. A speechless animal could likewise form contradictory beliefs, as long as they are formed from different bodies of information. It isn’t language as such that is generating the puzzle; rather, it is propositional attitudes considered in themselves—desires, intentions, hopes, regrets, etc. We might even say it is a puzzle about concepts. No disquotation principles are needed to get it going, let alone proper names.

What about perception—can it generate the puzzle? I don’t see why not, though we might need to exercise more ingenuity to find a convincing example. Take someone looking at a tomato and believing it is red. Unknown to him, there is a mirror in his visual field reflecting that tomato, but cleverly disguised to give an impression of greenness. He accordingly believes that tomato not to be red—even though it really is. If he gives the tomato two names, under the impression that he is seeing two objects, he will commit himself to a pair of perceptual beliefs that are contradictory without realizing it. He has contradictory visual impressions: it seems to him that what is in fact a single tomato is red and not red—as we would put it, but not he. Or we could have an example in which an object seems square visually but seems oval tactually: the subject perceives it as square and at the same time as oval—his perceptions contradict each other (though he fails to see that). Or suppose an animal espies a potential predator and has the impression of a scary animal over there, but also sees its reflection in water and has the impression of a harmless animal (it seems to be on the point of drowning). The same animal seems to be both dangerous and not dangerous, and this seeming may not be a case of belief proper. So, concepts in the full sense are not even required to construct a case like Kripke’s, if we exclude perception from the conceptual domain. The puzzle really concerns intentionality in general—any kind of mental representation. It isn’t about beliefs in particular, and certainly not about names in particular. It’s about the representational mind, and clearly derives from the possibility of two perspectives on the same thing—two appearances of the same reality (in conjunction with other auxiliary factors). Kripke’s paper might well have been called “A Puzzle about the Mind”.[1]

[1] I don’t know if Kripke would reject the position here put forward, because he never denies that the puzzle generalizes in these ways. But he doesn’t explicitly accept it either; the possibility is simply left open. However, there is a strong impression that he takes the puzzle to be more restricted. I would be amazed if he had thought of these extensions but simply decided to leave them out. Nothing in the argument would be lost by generalizing it, as far as I can see.


Parasites and Disgust


What is the evolutionary origin of the emotion of disgust, along with its behavioral expression? Why was it selected? I will suggest that parasites played a vital role.[1] A desideratum of any theory is to distinguish disgust from two other emotions easily confused with it: fear and aesthetic revulsion. Fear is much broader than disgust and not all disgust is accompanied by fear. Nor is disgust the same as revulsion at the ugly or deformed or merely kitsch; we don’t feel nausea when confronted by a burn victim, say. Disgust is something quite specific; it has a specific type of stimulus. So, what in particular in the environment of our ancestors called for the selection of this emotion? What perceptible thing made it a useful emotion to have? I hypothesize two principal types of triggering stimulus: intestinal worms and corpse-feeding maggots. There was a time when these were common sights—the time before toilets and graveyards. Feces strewn about; bodies left to rot. It would be easy to become infected in such an environment, especially by the eggs of these revolting creatures. One might tread on a turd or consume rotting flesh (food being short). Imagine a time when our hominid ancestors felt no disgust at these things, scarcely even understanding their nature: feces and corpses were not avoided or felt to be revolting. There would be rampant parasitism by the organisms in question—as by lice, bed bugs, ticks, and fleas. Those infected would suffer the consequences—malnutrition, diarrhea, bowel pain, etc. This would not be good for survival and reproduction. Such individuals would be selected against. There would be no medical treatment and no knowledge of causation. The worms would have it their way. There would be a desperate need for an anti-parasite adaptation. Thus, disgust arose—the feeling and the behavior. Worms in stool would henceforth become the object of intense revulsion and avoidance. Rotting corpses riddled with maggots would be run a mile from.
Both would come to have an appalling smell. It isn’t that these things scare you like saber-toothed tigers or perilous precipices, or that they evoke strong aesthetic distaste; rather, they encourage sedulous avoidance and a powerful disinclination to eat in their presence. In the human arms race against intestinal worms, disgust became a powerful motive to take appropriate evasive action. Call this the parasite theory of (the origin of) disgust: the emotion of disgust is parasite-specific, parasite-directed; it isn’t some general danger-avoidance emotional reaction. If there were no parasites, there would be no disgust—though plenty of fear and ugliness aversion.

Unlike the pathogen theory, the parasite theory locates disgust in perceptible facts—not in invisible germs (bacteria and viruses). People had no knowledge of such things back when disgust evolved, and certainly no idea of their role in causing disease. But worms and maggots are all too perceptible, especially if they appear in one’s own stool (sorry, sensitive readers). Notice that it is not morphology alone that triggers the disgust reaction—spaghetti doesn’t elicit disgust. It is morphology in the presence of feces and corpses. The parasite would be just as abhorrent if it were shaped like a leaf or even looked like a flower. It’s the context that matters—worms in feces. This configuration would likely generalize or radiate outwards—anything to do with feces or corpses would be tainted by the disgusting. Raw meat, urine, blood, decay, the anus, bodily fluids, internal organs, insects, snakes, rotten food, certain animals, the slimy, the squirming, the dirty—all these would come to be found disgusting to different degrees. But none so much as the primal objects of disgust—intestinal worms and flesh-eating maggots. The very word “parasite” would come to elicit feelings of revulsion (it just means “one who eats at another’s table” etymologically). It is easy to attach disgust to quite innocent things by likening them to the primal objects of disgust—calling someone a “worm” or a “piece of shit” or simply a “parasite”. It is not so much that shit itself is disgusting, or even dead bodies; it is their tendency to provide a home for parasites. We don’t find caterpillars disgusting despite their similarity to worms, simply because they are never found in shit or dining on corpses. Parasitism is the culprit, not its physical components. That is what revolts us, nauseates us, makes us scurry away. What are we least disgusted by? Rocks, clear water, mathematical objects—things that know nothing of parasites. We love diamonds and gold—neither parasites nor parasitized.
But what if diamonds became parasitic (or always had been)—finding their way into our bowels, exiting in our stool, causing illness and discomfort? Would we be quite so happy to display them about our person? It’s not the physics; it’s the parasitic nexus. Anything behaving like an intestinal worm is going to disgust us, because it was the original reason for developing the disgust reaction; and these creatures cause a good deal of suffering and death. It is perhaps imaginable that in some possible world these self-same creatures, with the same intestinal life-style, might not be objects of disgust, simply because they are good for the host, maintaining a healthy gut, fighting off infection, preventing colon cancer, and so on. Natural selection dislikes only what hinders survival and reproduction, and in our history intestinal parasites certainly did that.[2]

How does the parasite theory bear on the meaning of disgust—the thoughts it occasions, its mode of presentation to the sufferer? In particular, how does it bear on the idea that intimations of life and death are integral to feelings of disgust? It bears on it quite naturally: intestinal worms and corpse-eating maggots are living things ensconced in dead organic matter. Is there anything more disgusting than worms writhing in feces (yet they are just living their natural evolved life)? The living intertwined with the dead, contrasting with it. Or maggots feasting on dead flesh, deriving life from death. There is nothing inherently disgusting in life consuming death—that’s what eating is—but the parasite evokes a strong feeling of revulsion, because we are all too vulnerable to parasites ourselves. We think of ourselves as parasitized, weakened, on the brink of death—that is what it means to us, what it symbolizes. Parasites could be as fatal to our ancestors as predators, and the genes know this; they must adapt or die, so they engineer a counterattack, viz. disgust. Disgust is thus hedged about with looming death, squirming life, the fight to survive—all the apparatus of mortal existence. The gods are never disgusted because they cannot be parasitized; nor do they have to worry about death. For us, though, disgust and death are never far apart—death at the hands of those nasty little parasites. There is an urgency about it, because parasites are urgent business, not to be trifled with. We are largely free of parasites these days (though not all of us), but once they were the bane of our existence—we needed defenses against them. Thus, we acquired a gene for disgust. It’s dirty work, but somebody has to do it.[3]

Like many of our emotions, disgust can seem extreme, hyperbolic, overly dramatic. Do we really need to have such a strong reaction to rodents and earthworms? That seems true of us in our present environment, but we have to remember the environment in which our emotions evolved, many hundreds of thousands of years ago. Back then, life was exposed, dirty, uncivilized, and full of unpleasantness; lives were ridiculously short and malnourishment was common. Disease was rampant and often fatal. In these circumstances extreme emotions were the order of the day, precisely calibrated to deal with the harsh realities of daily life. We needed to feel well and truly disgusted, or else. God knows what kind of intestinal ill-health these poor people had to put up with! Quite possibly, they all suffered from intestinal parasites from an early age—contagion would have been as easy as sneezing. Just consider a typical family’s lavatory arrangements! Then the parasite theory comes to seem eminently plausible, because the problem was so widespread and terrible (is that why Neanderthals died out?). Intestinal worms would be on everyone’s mind, because in everyone’s body, though I doubt they had much idea about what they really were (they had only de re beliefs with respect to them). The human animal needed a specific weapon to fight against them, and disgust is what natural selection came up with (killing the worms by hand would hardly be a solution). We carry the same emotion in our brains today, extreme though it may be. It is easily evoked, protean, and powerful; it can be exploited unscrupulously. If you liken immigration to parasitic invasion, you get a visceral result (literally). We need to tame the emotion while recognizing its primal evolutionary origins. We should not assume it is always rational in the world in which we now live.[4]

[1] I had not thought of this theory when I wrote The Meaning of Disgust (2011) and had not encountered it in the literature I consulted. I don’t know if anyone else has thought of it in the interim. I now wonder why I missed it.

[2] Maggots (fly larvae) can infest the skin and be hard to remove (myiasis); they too are creatures that need to be avoided and are easily contracted. Our ancestors would be as much victims of these as intestinal worms, though with fewer debilitating symptoms. And there are other types of worms that can get inside the human body and cause problems, more or less severe. Parasites are a fact of life and need their own method of resistance, beginning with the emotional.

[3] Disgust is akin to pain: an unpleasant feeling installed to prevent damage to the organism. We feel pain in response to damaging physical stimuli and we feel disgust in response to damaging parasitic organisms. And isn’t there something phenomenologically similar about the two, as if disgust were a subspecies of pain (it is certainly highly disagreeable)? Punishment can take the form of inflicting pain or inflicting disgust.

[4] The problem of disgust presents itself as a riddle (hence philosophically interesting): what biological function does it serve, why is it so extreme, why are its objects so various and seemingly disjointed? What is it really about (we know what fear is about)? I think the theory presented here does much to solve the riddle, to make sense of a puzzling psychological phenomenon. At bottom (so to speak) it is about worms in the gut, though it ramifies alarmingly. It has more specificity than we might have supposed.


Life: A Synthesis


I am going to attempt something both ambitious and modest: synthesize the various elements of the Dawkinsian view of life as we know it. We are familiar (I hope) with the pillars of Dawkins’ world-view (zoological philosophy): the selfish gene, the extended phenotype, the genetic book of the dead (the textual body). Genes as immortal self-replicators, the organism as gene vehicle, the phenotype extending beyond the body, the informational content of the genes and the body in relation to past ancestral worlds—all of that. I will say nothing of this by way of defense or explication; my aim is purely to synthesize. How do the pieces fit together? The first thing I want to notice is that the addition of the textual body (and mind) supplements the picture of the selfish gene and the extended phenotype: for we now have the selfish textual gene and the extended textual phenotype. We already knew that the genes are symbolic (this is a commonplace in genetics) because they contain plans for the construction of bodies during embryogenesis—they symbolize bodies—but we now know that they also symbolize past worlds (sometimes lost worlds). They look backwards in time to ancestors as well as forwards in time to progeny. I would even say that they know the things they symbolize—they “cognize” them. They are thus selfish, symbolic, and cognitive (“the epistemic gene”). Genes (DNA) are both ruthless self-replicators (“selfish”) and avid story-tellers (“books”). They narrate and regenerate, represent and survive. The more they survive the more often they get to tell their stories. If they were people, they would write books and help no one but themselves—bookish egomaniacs. Literary self-advancers. In making copies of themselves they re-publish their own literary works (the information about past and future they carry). And they have sold a million copies, to understate their market success.
Not very nice maybe, if they were people, but undeniably prolific and powerful, unswervingly self-promoting. As to the phenotype (that was the genotype), its extension now includes its textual component: not just internal organs and skin but also a library of books about things. Since aboutness is a type of reference, we can say that the extended phenotype includes the reference of the symbols in these books—the things in the past that the symbols stand for. The reference relation is not “in the body”; it holds between internal symbols and remote-in-time real-world entities (e.g., deserts of the past). It’s not just beaver dams and anthills but also objects referred to—the extended phenotype stretches back in time (it includes that past desert emblazoned on the back of the horned Mojave lizard). We get the extended semantic phenotype, not merely the extended physical phenotype. The phenotype includes the external environment and reference to it. This is not the old model of a brute physical object, a biological atom; a life-form has words written into it, and their reference reaches back millions of years. We have the literary body as well as the literary gene. If the body were a person, it would be devoted to ancient history. Indeed, the outer products of an animal’s labor (nests, dams, bowers) themselves bespeak ancient worlds, containing records of ancestral life; we can read the past off them. The book of the dead exists outside the animal’s body as well as inside it—the textual extended phenotype. It’s like an actual library located some distance away.[1] That is what is getting selected by natural selection.

I would like to draw a diagram of life as conceived by these concepts, but I can’t (not here anyway). What I can do is describe a picture of life as so conceived—the picture suggested by the Dawkins biological philosophy. You are welcome to draw the picture yourself. First, draw a circle that will depict a nucleus (like an atom or cell): this will be filled with DNA molecules—genes, replicators, selfish little buggers. I see this as colored red. Write inside this circle “texts” and “me-me-me”, so that you notate the nature of the enclosed entities. Around this circle draw a larger circle named “organism” (I see it as beige); inside this circle write “vehicle”, “text”, and “mind and body”. This will be the whole organism as customarily conceived. The DNA sits inside it, protected by it, carried about by it. On the right, draw a broken line with an arrow pointing to the future: this depicts all the copies (replicas) produced by the genes sitting inside the organism. This is gene survival, the point of the entire contraption—it’s a machine for propelling genes into the future. Pointing left, we have a solid arrow harking back to the past; write next to this “ancestral worlds”. Both arrows together depict the story of the life-form in question—where it came from and where it is going. Finally, draw some arrows (two is enough) depicting the extended phenotype of the organism, perhaps with a fuzzy picture of what this might consist of (a nest, an anthill). You might notate these arrows with the words “external text” just to be pedantically correct. So, that’s it, drawing complete. It depicts a nucleus of self-replicators driving the machine forward, calling the shots, surrounded by an obedient casing of flesh and text, suspended between past and future. It is a dynamic not a static system. It is what evolution has manufactured—a device for preserving little chunks of chemical substance, ultimately. 
Not that the life-form is reducible to chemical substance, but the properties of the preserving body are geared to performing this task. Organisms are the result of chemical propagation, shaped by natural selection. That, essentially, is what the Dawkinsian philosophy tells us when fully synthesized.[2]

[1] The human extended phenotype actually includes our written products as well as our technology—libraries as well as locomotives.

[2] I have been reading Richard Dawkins from 1976 to now, from The Selfish Gene to The Genetic Book of the Dead and everything in between. I fancy I know his stuff pretty well. This is my brief attempt to bring it all together, neatly and comprehensively. I only wish I had more of an opportunity to discuss it with him.


Economics and Ethics


Economics is the domain of the selfish. Ethics is the domain of the selfless. So we have been schooled to think. In economic activity (exchange, purchase) we acquire goods: we benefit from what we receive; we get what we want. The act is essentially selfish—self-interested, even greedy. Ethics doesn’t come into it, except negatively. We may be berated for our lack of concern with less fortunate others. Charity belongs to ethics: to give selflessly, expecting no return. We benefit others not ourselves. The charitable person is a good person; the regular consumer is something less than that—morally neutral or morally deficient. It is better to give than to receive, we are told. The person who gives, gives, and gives is better than the person who buys, buys, and buys. The buyer is self-indulgent; the giver is self-sacrificing. The buyer cares only for himself; the giver cares for others. Egoism versus altruism, me versus you. This fits with our puritanical streak: we shouldn’t indulge ourselves; we should help others. It’s not Christian to buy, but it is Christlike to give—to sacrifice for others. What you give away you cannot spend on yourself, so charity is an act of asceticism. And the more you give the better you are. There are some who maintain that we should all give away as much of our income as will lead to material equality and stop our rabid consumerism. The charitable act is the right act; the purchasing act is the wrong act (given the state of the world).

Is all this true? There are two sides to the question: is buying morally questionable, and is giving morally unquestionable? The first question has obvious and well-known replies: in buying we give as well as receive; we stimulate the economy leading to greater prosperity; we treat others as equals not victims or incompetents. Our motives may be selfish, but selfishness can lead to good consequences; and anyway, we may take pleasure in benefitting the seller (what will happen to her if we refuse to buy from her?). A thriving active economy benefits all. In buying we give the other employment, self-respect, and a source of wealth. That is why spending is morally better than saving: saving benefits no one; spending encourages economic progress. A society of rich misers is a stagnant society. Thus, the invisible hand, accidental altruism, selfish selflessness. Egoism entails altruism. Buying is not ipso facto an immoral act; in fact, it is quite admirable in its way. But is it as admirable as pure giving (buying is impure giving)? On reflection, charity looks uncomfortably close to theft. Someone else takes from you and gives you nothing in return—they benefit from your labor. They steal time from you, in effect. That may not be their motive but they end up in the position of the thief—better off at your expense. They do no work, yet they benefit from your work. You might be a coalminer giving to a charity for unemployed miners—you work at the coal face while they lounge at home, funded by you. You may also be pressured to give in this way by social sanctions, even ostracism. You may resent it. It doesn’t feel right to you. But it’s better to give than to receive etc. What if charity were mandatory by law? Would that be a good law? Robin Hood used to rob from the rich and give to the poor—yes, but he did rob the rich. What about robbing the moderately well off to give to the slightly less well off? It’s still robbery, and isn’t robbery wrong? 
Isn’t giving to charity really like working for no pay? And isn’t the recipient of charity in the enviable position of being paid to do nothing?

There are other questionable aspects of charity. When does it become folly, not generosity? Suppose someone of moderate means decides to give away most of his money to his well-off friends: he thus suffers a significant reduction in his quality of life while benefitting those who need no more income (they buy fancy hi-fi equipment with his donations to them). Is that charity? If it is, it is pretty stupid charity, certainly not morally required. But when does giving slide into the silly category? There seems to be no principled answer except when the recipients are in dire straits (how dire?). Second, there is the question of respect: it is natural to feel embarrassment or shame when receiving charity (some reject it on principle); and the giver can easily slip into a sense of superiority. That doesn’t happen in economic exchange. Each party both gives and receives, but in charitable giving there is an asymmetry: the giver has the better hand, is morally and financially superior, can feel a glow of self-satisfaction. No one really wants to be the object of other people’s charity. It would be better if no charity were needed. Third, charity can be morally wrong if it discourages self-sufficiency and creates dependence. It can stifle economic development. It treats people like children. It undermines self-esteem and energetic activity. It is best seen as a stop-gap measure, not an ongoing policy. Fourth, it creates obligation: the recipient is obliged to be grateful and to show gratitude. Not so in the case of buying and selling—no such obligation is conferred. No one is doing anyone a favor. But doing someone a favor is not doing them a favor, because favors need to be repaid. Beware of favors—they will need to be returned, sometimes with interest. Letters of thanks were the traditional way of responding to an act of giving: these took time and energy to write, and they had to be good. You don’t have to do that if you buy something from someone.
The recipient becomes indebted, and nobody enjoys that. Don’t you feel a touch queasy when receiving presents that you don’t reciprocate (birthdays as opposed to Christmas)? You become the object of someone else’s generosity; you feel a powerful obligation to express gratitude in fulsome terms, not always sincerely. You are put in an awkward situation. Doesn’t a part of you not want to receive presents—to become the recipient of someone else’s charity? Charity is thus hedged about with moral perils, not always clearly avoided. It would be a better world if it were not needed or practiced.

The general point is that buying gets a bad name while charity is overly prized. I’m not saying charity is never morally indicated or required, only that it is not the unalloyed virtue it is commonly supposed to be. It is in the nature of a regrettable necessity as the world stands, not an indispensable component of the ideal society (in that society charity does not exist). Nor am I saying that all buying is good; it depends on the consequences (also the motives). But it is not bad in virtue of being self-interested—that is a misplaced indictment of the economic act. The economic act is a bit like the sexual act—a two-way street in which both participants benefit (ideally). It is reciprocal, symmetric, mutually beneficial, not a sacrifice by one to bring aid to another. Economics is ethical in this sense—generally a good thing. It is not a domain in which the ethical is irrelevant (or “unscientific”). The conventional division between the amoral economic and the moral charitable is too simple. John Maynard Keynes viewed economics as a “moral science”, and he was right to do so. So was Adam Smith right to emphasize the power for good inherent in economic activity, as opposed to pure altruism. The idea of the virtue of self-sacrifice plays too large a role in the old (Christian) way of thinking. Selfishness can be good, and unselfishness less than good.[1]

[1] Smith and Keynes were both interested in economics as a means for improving the human condition, and therefore morally motivated. Economic action is a type of moral action, to be assessed as such, positively or negatively. Sound economics is sound ethics. We don’t hear much about this in the Judeo-Christian moral tradition—entrepreneurial virtue, business ethics. It’s mainly prohibition and self-denial, not enterprise and self-assertion. One does not hear any commandments to build and develop, invest and labor, keep interest rates down, avoid inflation and deflation. Wherein is it stated that it is God’s will to divide thy labor? On the contrary, the money changers are regarded as the epitome of depravity; but aren’t they just currency traders with a place in a thriving economic system? Money is not the opposite of morality but one of its tools. Charity is what we resort to when economics fails. Economics should recognize its ethical dimension and ethics should welcome economics to the fold. Productivity is good.


Archival Minds


Richard Dawkins’ The Genetic Book of the Dead (2024) advances the thesis that an organism’s body is like a book describing ancestral environments. The genes encode facts about how the world was when the organism containing them evolved. We can thus infer the past state of things from the current state of an organism. The first chapter “Reading the Animal” gives the example of the horned lizard of the Mojave: its skin can be read as “a vivid description of the sand and stones of the desert environment in which its ancestors lived” (4). Then Dawkins states his “central message”: “the whole body through and through, its very warp and woof, every organ, every cell and biological process, every smidgen of any animal, including its genome, can be read as describing ancestral worlds” (4). We can call this conception “the textual body” (syntactically like “the selfish gene” and “the extended phenotype”), though Dawkins himself does not employ that phrase. Natural selection sees to this, because an animal must be adapted to the environment in which it evolves—not to any environment. In particular, we cannot deduce the organism’s present environment from its present body, since things may have changed (nor its future environment). The textual body is an essentially historical text—ancient history, in fact. We can expect there to be a chapter or two on life in the sea, even in organisms long living on land, because their ancestors lived in the sea. The human body will contain a description of life at sea, overlaid by more recent chronicles. I will extend this idea to the mind of an organism, if it has one: the mind too is saturated with talk (text) of how things used to be in our ancestors’ worlds.[1] The mind is a book of ancient lore, of distant pre-history, of bygone formations. Sherlock Holmes could deduce from it all manner of facts about how things used to be. It might even tell us things about the past that we wouldn’t otherwise know. The mental text might be esoteric.

What kinds of facts might it disclose? Facts about the geological environment, the predatory environment, the social environment, the climate environment, the cosmological environment. And what aspects of the mind might do this? All (inherited) aspects, if Dawkins is right (and I don’t doubt he is): perceptual, sensational, emotional, cognitive, linguistic, structural, qualitative, etc. The mind could be as historically informative as the body, except now we are inferring the physical from the mental not the physical from the physical (skin to desert). But we are going to need to be bold, because this stuff is shrouded in mystery, in the mists of time (“mistery”). Even if our best guesses are wide of the mark, they can provide a taste of what this kind of hermeneutics will involve—giant leaps of imagination (it isn’t all as easy as the lizard in the desert). We are trying to read the distant past of the physical world from its traces in the contemporary mind: from inner to outer across enormous stretches of time. So, hold onto your hats, cut loose, never fear! Let me begin with pain: what can present pain tell us about past environments? I think it is clear that the present sensitivity to pain in mammals is surplus to requirements: we just don’t need to be as stuffed with pain receptors as we are, and with things as painful as they tend to be. We are over-pained. Why? First, observe that fish do not appear as pain-rich as we are (we mammals); sure, they feel pain, but it is not at the mammalian level. We might well suppose that this is because their environment is not as full of pain-inducing stimuli as ours: they float comfortably in water, not in contact with rocks and pointy objects. They don’t fall down on hard rock or get hit by flying objects or regularly cut themselves, as we do. Now consider the transition from sea to land—fish-like creatures stumbling across rocky terrain, falling, getting cut. 
They need to develop better pain receptors, pronto, or else death will pay them a visit. So, they become exquisitely sensitive to injury of all kinds; their soft bodies become equipped with pain generators (not hard insensate shells). Once installed, these adaptive mechanisms remain, even when the environment becomes kinder to their bodies. Thus, we can infer from the mammalian pain surplus that life came from the sea. If the seas had vanished from the earth in the interim, we could deduce that seas once existed, in which life flourished: for that is the best explanation of our current talent for feeling too much pain. Our painful minds entail past seas in which our ancestors (fish) lived. That was once our environment, so that environment had to exist back in the day. A watery past follows from a surfeit of pain. The existence of H2O can be inferred from the existence of pain; not present H2O, mark you, but past H2O. There is (excessive) pain on earth now, therefore there was water in the shape of seas then. Of course, we already knew that, but what is interesting is that it can be deduced from facts about the mind, if we allow ourselves some imaginative leaps. As the existence of the self follows from the fact of thinking, so the existence of past watery expanses follows from the fact of hyper-painfulness (there should be a Latin term for this type of inference). Suppose sea-dwelling creatures felt no pain at all, while landlubbers did, and that the latter evolved from the former. That would tell us there had to be an adaptation to pain during the transition to land, so we can infer an oceanic lifestyle from our present over-sensitivity to pain (imagine if mammals were more armor-plated now yet still pointlessly felt intense pain).

That was a primer in textual-mind reasoning, intended to dip you into the deep end (pun intended). We won’t need to get quite so speculative as we go along. Consider, then, visual sensations: they require the existence of light—they are as of things bathed in light (the world doesn’t look dark all the time). These sensations evolved many millions of years ago and were adapted to the then-environment. The world might have gone dark between then and now, yet the sensations would still be as of light. We might be living in total darkness but our visual sensations would still be suffused with appearances of light; the Sun might have gone out of existence centuries ago. We can’t infer the real existence of light now from our sensations of light now, but we can infer the past existence of light from our present sensations of it. That is, we can infer the existence of the Sun at the time that sensations of light evolved—say, 400 million years ago. We know that the Sun existed back then because visual sensations were adapted to light and that is where light on earth comes from. We can deduce astronomy from biology! Visual psychology implies a star radiating light energy to earth: we can read this in our psychological book of the dead. There had to be a sun 400 million years ago whose light reached earth or else light-filled visual sensations would never have evolved. Visual phenomenology implies stellar astrophysics. If we encounter aliens with a similar visual phenomenology, we can infer that they evolved within striking distance of a sun. If the universe contains sensations of light, then it must contain suns, given reasonable assumptions. This is fundamentally because of the way natural selection works to produce animals. The same argument can be given with respect to sensations of space and time: sensations of these dimensions can only exist because of the real existence of space and time, given the Darwinian theory of evolution. 
The sensations had to have originated in space and time in order to be of space and time, because they are adaptations to space and time. Space and time could go out of existence and sensations of them remain, but they had to exist in order for the sensations to arise naturally. Of course, we already know that space and time existed during the evolution of minds, but the textual mind theory allows us to infer this from the way minds now are. The evolved mind is a repository of historical (cosmological) information.

Emotion works the same way. Present emotions betoken past realities. The point is familiar enough: fear of heights implies a past life in the trees; fear of snakes implies an abundance of dangerous snakes way back when; disgust at insects indicates a plethora of annoying insects in olden times; fear of wild animals (especially big cats) suggests a history of predation by same in the unprotected past. Then too, we have the things we like: floating in water, climbing trees, relaxing in the sun, a taste for certain kinds of food. These conjure up an aquatic past, an arboreal homestead, outdoor living, available past nourishment. The book of the mind describes the bad, the good, and the ugly—what life on earth was like long ago. It describes the way the world was when our ancestors first set foot (or fin) in it. We know from our present mental attributes that the world contained depths of water, habitable trees, warm sun, and strawberries (or other fruit). The world had to be a certain way then for animals to have the sorts of mind they have now. And it is possible for the world to change while the mind plods on in its old ways; environments on earth do not always remain constant. For example, the human environment has become far less dangerous than it used to be with respect to wild predatory animals; we don’t die from cat attacks that often these days. Yet we seem remarkably fearful of not very much—hence phobias, groundless anxiety, exaggerated fears. Is it that we are suffering from a holdover from the bad old days? If so, we can reasonably infer that the past contained more frightening things than the present does—that it was objectively more dangerous then. Our emotional minds haven’t caught up with the new realities. We can deduce from our excessive fears that in the past things were a lot nastier than now—that we humans had it worse then.
Emotions are actually a rich source of historical information, because they speak of what most concerns us—our wellbeing, life and death. Our present overdone emotions tell us that people died earlier in the old days, and probably in nastier ways.

Less familiar is the question of more general and abstract features of the mind in relation to our evolutionary past. This question is of special interest to philosophers. First, what can our reasoning abilities tell us about the past, in particular our inductive reasoning? They tell us that in the past the world was regular and predictable—that nature was uniform. For, if it were not, we would never have evolved the habit of predicting the future from the past; it would have done us no good. If the world were chaotic, induction would be useless; but it is not useless, so the world is not chaotic. Or rather, we know that nature was uniform in the past because we evolved the habit of assuming as much; it wasn’t non-uniform then. It might become non-uniform, but we know it was once uniform—or else induction would never have got a foothold in the animal mind. Again, we have good reason to believe this on other grounds, but the textual mind delivers the same result from a fresh angle. The archives of the mind keep a record of how things were, and things used to be reassuringly uniform (and presumably always were, given that nature would not suddenly turn uniform when intelligent inductive animals began to evolve). Inductive reason is a snapshot of nature’s inherent uniformity. What about intentionality? Can we deduce anything about past reality from intentionality? Intentionality is what permits the mind to take distinct and distinguishable objects as objects: I am thinking of this thing and not some other thing. This capacity must have evolved at some point, probably at the very birth of mind; and it must reflect some general truth about the world in which it evolved. What is that general truth? Simple: that the world consists of discrete objects distinguishable from one another—pebbles, people, points of light. If that were not so—if the world were a formless mass—then intentionality would never have evolved (not the kind that animals on earth actually have).
Intentionality implies objective discreteness. This is an interesting result, because it shows that intentionality has objective preconditions; it isn’t just a brute inexplicable fact about the mind. It evolved in order to exploit the granular structure of the world. The two things go together, the one implying the other. This quality of mind speaks of the pluralism of reality, of its separation into parts. Finally, consciousness: what does its existence tell us about reality in the past? It is hard to say, but here is a speculation: it tells us that the past was complex. Consciousness as it now exists seems tied to complex information processing, requiring a complex brain. Let’s suppose this is so; then we can say that it evolved under environmental conditions of complexity (say, in the context of predation). If the ancestral world was never complex, never a problem, consciousness would never have evolved—reflexive behavior would suffice. So, consciousness tells us that the world was challenging and complex at the time it evolved. We can read this off consciousness, projecting backwards in time. The unconscious, by contrast, could be less complex, more reflexive. At least this gives us something to say about the significance of consciousness in the book of the dead. Every trait must have some message about the early environment and this gives us an idea to work with. It’s like memory: memory tells us that the passage of time existed in our evolutionary past and that the lifespan of our ancestors made it a useful trait to possess. There were facts to be stored and that’s why memory evolved; we can infer the former from the latter. There were complex problems to be solved, so consciousness came to exist; there were facts to be stored, so memory came to exist. The world presents certain kinds of phenomena and natural selection acts accordingly; we can infer these phenomena from the present contents of the mind.
The mental archives are full of interesting bits of information, relics of a bygone age, or universal facts of terrestrial existence. The body and mind together yield a rich history; they mirror the past of planet earth. The lizard’s skin reflects the desert it lived in; the mind’s attributes likewise reflect the wider world in which they came to exist. There is a kind of holism at work here linking the organism to its (original) environment. The life of the world is recapitulated in the life of the organism. Phenotype implies geophysics; the earth is written into the organism, its body and mind. The self is a semantic self—it represents what lies (or lay) outside of it. It makes a statement (in the past tense).

Actually, the organism is doubly or triply semantic. First, and basically, we have the book of the dead (in Dawkins’ phrase): what the organism’s make-up tells us about ancestral worlds. Second, we have the genetic code with its information inherited from the past: textual DNA, the genetic archives. Third, and in a more limited way, we have actual human language: its grammar and meaning. That too is a mental trait that should yield information about the past, specifically the time in which it originally evolved. One thing we can immediately infer is that human groups existed at this time (I am speaking of overt speech, not a language of thought). Human speech evolved to aid communication, so there were people to communicate with. Speech implies groups. The presence of nouns and verbs surely implies that there were things to talk about and actions these things performed—scarcely a surprising result. Various ontological assumptions are built into language, so we can assume that the world satisfied these assumptions at the time language evolved (why would language make these assumptions unless they were true, or approximately true?). I need not spell these out. The human animal is multiply semantic (in a suitably broad sense) at different biological levels: body, mind, genes, spoken language. There is symbolism everywhere—worlds within worlds, text upon text.[2] We can use our language to talk about these other “languages”, which pre-date spoken language. Symbolism is nothing new; “books” are rampant. We “spoke” before we ever spoke. Animals are full of information, if we only know where and how to look. The selfish gene is a symbolic gene; the surviving (and reproducing) body is a symbolic body; the communicating human is a high-level symbolic operative. We (and other animals) are veritable libraries, stack upon stack of esoteric volumes, or commonplace announcements. We accordingly need to be interpreted, deciphered, translated. We are not semantically transparent.
Our mind can be as obscure as our body (or our genes). Still, it is possible to read its hidden messages.[3]

[1] Interestingly enough, Dawkins doesn’t apply his theory to the mind, except for some stray remarks about fear of heights. I don’t know why; maybe it has to do with the “incorporeality” of the mind.

[2] These are not the only symbolic structures crowded into living organisms: there are also mental images, perceptual primitives, contents of desires and emotions, unconscious computations, mental models, signaling systems, and (according to some) immune systems. Each of these differs from the others. Really, living organisms are hives of symbolic activity, none more so than the human; the book of the dead is just one more to add to the list. The model of a physical machine does not do justice to this symbolic plethora.

[3] Some may say it is mere metaphor to call the body, mind, and genes repositories of language. We need not dissent from that, but it is purely verbal: true, other symbolic systems are not symbolic in the way human spoken language is, but then neither is human language like them. There are “languages of art” and “whale language” and “computer language” in that these comprise symbolic systems. There is no good reason to suppose that human language is somehow the measure of all types of symbolism. And what parts of human language—nouns, verbs, intonation, stress, pitch, pauses? There is a family resemblance between all these coexisting symbolic systems.
