Genius Project

Some years ago, I came up with the idea of the Genius Project (like the Manhattan Project). This was prompted by a desire to help graduate students in the job market—to make them stand out from others (having publications is neither necessary nor sufficient). But the idea can be applied to developing intellectual creativity generally. I had been interested in the subject of creativity since my undergraduate days as a psychology student, having read Arthur Koestler’s The Act of Creation and Jacques Hadamard’s The Psychology of Invention in the Mathematical Field. Can we encourage creativity in ourselves and others, or must we wait for the muse to strike (or not, as the case may be)? Can people be trained to be more creative? This would clearly be a very valuable type of education, but is it possible? It seemed to me that it ought to be, and I had some ideas about how to do it. It was worth a try anyway. I now propose to share these ideas with you.

My first piece of advice is this: when you wake up in the morning, go straight to your desk and stare at a blank sheet of paper with a pen in your hand (easy, right?). You can have a libation, but don’t eat anything. Don’t talk to anyone or read anything or check your email. Then ask yourself what interests you and sit there thinking about it. Write down whatever occurs to you, even if it seems feeble. If you can’t write anything, don’t move; just stay there. Do this for at least half an hour. Repeat the same thing the next day and every day; make it part of your routine, your daily life. Let your mind dwell on the topic for the rest of the day as you go about your business. Add to the page any further thoughts you may have. Your unconscious will do a lot of the work for you. Read anything relevant to the topic at whatever time suits you, but don’t read during your creative time (call it that if it helps). Do this at the beginning of your studies; don’t wait till you think you know enough before you start being creative. Get into the habit of being creative. I think you will find that your brain will work on the subject during the night in anticipation of your morning routine; it will know what is being demanded of it. The principle is very like practicing a musical instrument or an athletic skill—a sort of brain-shaping.

Next you need some exercises that flex your creativity muscles. Several can be suggested but the one I prefer is simple and effective: think of variants of the phrase “cruising for a bruising”. This is simple to do, suitably taxing, and good mental fun. Thus: angling for a mangling, strutting for a gutting, aiming for a maiming, hiking for a spiking, heading for a beheading, gliding for a hiding, rushing for a crushing, strolling for a rolling, skipping for a whipping, lurching for a birching, training for a braining, accelerating for an eviscerating, travelling for an unravelling, praying for a slaying, escaping for a raping, streaming for a creaming, crawling for a mauling, ambulating for an amputating, tobogganing for a flogganing, etc. Try to follow the rules of the game, but you can let yourself break them slightly if the result is good (as in my last example). Playing this game competitively with other people is permitted and good clean fun. You can add the shortest books game if you feel like it. This will get you used to inventing stuff.

You should also read verbally creative writers. I always recommend Nabokov, with Lolita as the best specimen. Read it for the language, not the story, a page at a time. Notice his verbal tricks and resistance to cliché. Try to copy them. Lewis Carroll is good too. I also think you need to develop your sense of humor, because humor often involves novel ways of seeing things as well as verbal dexterity. I particularly recommend Oscar Wilde, Max Beerbohm, and P.G. Wodehouse. A sophisticated sense of humor is close to intellectual creativity—not least because it questions current pieties and stock responses. People often tell you to “think outside the box”—that’s okay but rather crude. “Think outside what other people think” is a better way to put it. Criticism is essential to creativity. Richard Dawkins’ The Selfish Gene is a critically creative work, a rethinking of accepted facts. It’s wrong to say that creativity requires being entirely yourself—it’s a good idea to imitate other creative people. Imitation is a useful way to learn, because the brain is set up that way. Copying helps with being creative, at least in the early stages. Autobiographies of creative people are useful. I also think it is good to have wide interests, so that you see unexpected connections. Specialization is the death of creativity, even if it feels safe.

Of course, there is no algorithm for creativity (“genius”). But I think there are ways to stimulate it. Personality is also a factor. Conformity won’t do at all, or a desire for acceptance and popularity. You have to have some guts. You need to be some sort of contrarian, maverick, revolutionary.[1]

[1] Isn’t it strange that we pedagogues never try to teach creativity directly, though we clearly value it? We appear to think it will come automatically or it won’t come at all (we don’t think this about logic and rationality). Perhaps we think the whole thing is inscrutably mysterious, a gift from God. I resist such defeatism! We need to get more creative about teaching creativity.

Second-Best Philosopher Ever

Skipping preliminaries, I am going immediately to nominate Bertrand Russell. It might be thought that he can be ruled out by the principle that later philosophers have absorbed his work and so surpass him trivially. That would certainly be true of his contemporaries and predecessors, but Russell wrote so much that it is hard for anyone to absorb all of it (how many people have actually read Principia Mathematica?). I have read a lot of Russell, from The Analysis of Matter to Marriage and Morals, and reviewed at least three biographies of him; but there is still a lot I haven’t read (I’ve never read The Practice and Theory of Bolshevism). So, there may be stuff that has not been absorbed by contemporary philosophy, or simply forgotten. But my main reason for choosing him is that I think he would be best able to master and contribute to contemporary philosophy: he had the brain and the breadth. Also, much of our current philosophy was shaped by him. He was a great writer, an original thinker, often right, and massively erudite. You can imagine him today dominating the subject, as he once did. He doesn’t strike us as belonging to the past. I do sometimes wonder about Gareth Evans, who clearly had enormous potential; but his life was taken from him at such a young age that it is impossible to predict his later development. What would he have achieved if he had lived a full life? Would he have branched out from the somewhat narrow field of interests that preoccupied him in his youth? I can go both ways on this question. In any case, it is impossible to say. But Russell had a long active life and demonstrated his intellectual powers to the fullest. He was never much for ethics and aesthetics, and he was somewhat stuck in a crude form of empiricism, but he obviously had broad abilities, scientific and literary. He was ahead of his time and extremely clever. He outshone his contemporaries. It’s hard to think of anyone at his intellectual level. Diehard Wittgensteinians might dispute his title to second-best—they might even make Wittgenstein the very best (!)—but Russell certainly has a strong claim to the coveted title. If he had been acquainted with the best, he would probably have won the race; but that figure lay in the future. I think we would have been friends; I would certainly revere him. Still, he strikes me as the strongest candidate for second-best, despite my admiration for many others (long since dead). I believe that in a not-so-remote possible world Saul Kripke might have been second-best, but we are talking actualities now; he just lacked breadth and didn’t write much. I also have a soft spot for Berkeley (if it were not for the religion), and find Hume adorable (like everyone else). Most current philosophers I disqualify for being too specialized and too mired in contemporary professional norms. I therefore happily nominate Bertrand Russell for the number two position. Come on up here, Bertie!

Descriptions and Non-Existence

Semantics would be easier if there were no such thing as non-existence (if non-existence didn’t exist). Then we could simply assign an existing reference to any referential-looking term. We wouldn’t have the problem of empty terms: all meaning would be explicable by means of existing entities. We would have a fully denotational semantics, perhaps supplemented by Fregean sense. In particular, we could assign an existing reference to any definite description—we would have no problem of empty descriptions. This problem is what led Russell to propose his theory of descriptions, which reshapes the logical form of description-containing sentences (descriptions are not singular terms at all). That theory would not be needed if non-existence were not a thing. Of course, some philosophers (e.g., Meinong) have denied that so-called empty descriptions are really empty: they propose a kind of shadowy existence (“being”) for things like the golden mountain or the king of France. These things exist in another realm, or in a different way, or to a lesser degree (hence they are said to “subsist”). Russell wasn’t happy with this kind of ontological largesse, so he was thrilled with his shining new theory; but its entire motivation depends on an ontological assumption, namely that some things don’t exist (or subsist either). That is, the attraction of the theory depends on an ontological position or presupposition; it makes semantics depend on ontology (metaphysics). If Meinong were right, we wouldn’t need Russell’s theory—we could just stick to the practice of assigning denotations to descriptions. We wouldn’t need an alternative revisionary anti-referential semantics: quantificational, conceptualist, formalist, fictionalist—whatever avoids an unpalatable Meinongian ontology. All this arises because of the problem of non-existence—a problem in metaphysics. It doesn’t arise by direct consideration of the sentences in question; it arises from elsewhere. It isn’t as if Russell took a long hard look at definite descriptions and saw that they conform to his paraphrase; he inferred that his theory must be true or else we land in ontological hot water (the fetid Meinongian swamp). He deduced semantics from ontology.
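For readers who want the paraphrase spelled out, here is the standard textbook rendering of Russell’s analysis (a schematic sketch in modern notation, not a quotation from Russell): a sentence of the form “The F is G” is treated not as a subject-predicate sentence containing a singular term but as an existentially quantified conjunction asserting existence, uniqueness, and predication.

\[
\text{The } F \text{ is } G \quad\Longleftrightarrow\quad \exists x\,\bigl(Fx \,\wedge\, \forall y\,(Fy \rightarrow y = x) \,\wedge\, Gx\bigr)
\]

On this analysis “The present king of France is bald” comes out simply false rather than meaningless or about a subsistent king, which is why the theory needs no Meinongian objects; and the description itself disappears as a semantic unit, the feature at issue in what follows.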

But what if he was wrong about the ontology? What if Meinong was right all along? I am not saying he is right; I am just saying that he might be. It is a substantive metaphysical question—and we don’t want our semantics to be hostages to the fortunes of our metaphysics. Suppose for a moment that Meinong is right: then Russell’s theory is unnecessary, a revision without a reason. For there is nothing intrinsic to descriptions that warrants its adoption. This is why no one proposed it before—it looks wrong. It has counterintuitive consequences. It is complicated and contrived. Students find it hard to understand. No one would talk like that. Only the desire to escape the clutches of Meinong could make it seem attractive. To a Meinongian, it seems like a desperate attempt to avoid the obvious truth. Think of it this way: in a possible world in which Meinong is right Russell’s semantics is unnecessary, pointless, and unattractive. If we are actually living in that world, it is bad philosophical linguistics. But surely, we don’t want to take that risk—we don’t want our semantics to hang by an ontological thread. Semantic theories shouldn’t be justified by ontological considerations alone. It would be different if Russell’s theory had other cogent justifications, but in fact its sole motivation derives from (questionable) ontological assumptions (and did Russell ever really refute Meinong?). It is therefore methodologically misguided. It needs to be established on quite different grounds. It’s like trying to justify a non-referential semantics of predicates by insisting that Platonic universals don’t exist; maybe they don’t, but you don’t want to adopt a revisionary semantics of predicates on that basis alone. This is a debatable metaphysical question, not a datum that can be wheeled in to rule out a perfectly natural semantic theory (viz., predicates denote universals). You may have an ontological beef with physical objects, but do you want to keep them out of semantics when they seem like the perfect tool for the job? And you might be wrong about the non-existence of physical objects, in which case you have ejected them falsely from your semantics.[1] Don’t mix up semantics and metaphysics! Don’t let your metaphysical views shape your linguistic views! In particular, you shouldn’t let logical form depend on ontology, which is exactly what Russell does (this is why his theory is so exciting). Grammatical form doesn’t depend on ontology, so why should logical form? The meaning of “the” should not be made to depend on whether the golden mountain exists.

Let me try to make the situation vivid by constructing a thought experiment of a familiar form. Suppose that in the actual world there are no Meinongian objects, and suppose that speakers know this. Russell proposes his theory and everyone is happy with it. Now consider a possible world in which there are Meinongian objects and everyone believes in them. Wouldn’t it be correct to say that Russell’s theory fits the actual world but not the stipulated possible world? We could even suppose that the speakers in both worlds have no opinion on the truth of Meinongian ontology and are precise physical duplicates with the same internal mental states. Then an externalist will want to say that definite descriptions have different meanings in the two worlds in virtue of the external ontological facts, despite the internal identity of the speakers. Logical form will accordingly be different in the two worlds. If we make semantics depend on ontology, this is the kind of result we get. But doesn’t it seem wrong to make logical form depend on such facts? The ontological difference shouldn’t generate a semantic difference in respect of logical form. Russell thinks the lack of existence forces a revisionary logical form, but in the Meinongian possible world there is no such pressure to discern that kind of logical form. Better to deny that the facts of ontology can determine semantics to this degree. Individual meanings may not be (completely) in the head, but surely logical form is (grammatical form certainly is). Something is wrong with Russell’s methodology.

Suppose you are agnostic about Meinong’s ontology: should you then be agnostic about the semantics of descriptions? If you are an agnostic about God, should you be an agnostic about the meaning of “God”? No, you want a semantic theory that is neutral with respect to such ontological questions. It is true that Russell’s theory is neutral about the existence or otherwise of the objects that the description purports to describe, but it is not neutral in its motivation—it presupposes that Meinong is wrong. That puts the theory in a needlessly precarious position, since the presupposed metaphysics might be mistaken. One wants to be able to defend the theory on less vulnerable grounds, by appeal to the very nature of the description. But its revisionary character makes this difficult (as Strawson in effect pointed out): the description seems very much like a singular term.  We get a rough equivalence in Russell’s paraphrase, but not a precise synonymy. We can’t even apply the word “refers” or “denotes” to the description if Russell is right. So, the linguistic data don’t prima facie support the theory; its support comes from a presupposed anti-Meinongian ontology. And isn’t it true that the historical enthusiasm for the theory came from its anti-Meinongian credentials—its rejection of Meinongian extravagance? But that is a frail basis for a semantic theory—the wrong kind of basis. It’s rather like saying that proper names don’t name people because you don’t believe in selves (perhaps on Humean grounds), proposing instead that they are not singular terms at all. In a way Russell’s theory is not psychological enough (psycholinguistic enough); it relies too heavily on theses concerning things outside the mind. It needs to be more internalist (in a Chomskyan sense)—more about the brain or the cognitive-linguistic system. What empirical evidence is there that definite descriptions are really quantified conjunctions? Where is the cognitive science that demonstrates that linguistic proposal? From this point of view, the rightness or wrongness of Meinong’s ontology looks irrelevant.[2]

[1] What if the speakers of the language are explicit Meinongians, have been for centuries, have it in their genes? Are we to say that their language is Russellian? They will openly disagree with his theory, perhaps regarding it as preposterous, so how can it be the true theory of their definite descriptions? Russell’s ontology is not theirs, and neither is his semantics.

[2] The issue can be compared with possible worlds semantics: how can the existence or otherwise of possible worlds determine the viability of a semantic theory of natural language modal expressions? That is a metaphysical question extrinsic to the syntax and semantics of such expressions. What people think about possible worlds might be relevant, because it concerns psychology, but the actual existence of them seems beside the point. Certainly, it is hard to justify a semantic theory just by asserting or denying a particular ontology. Ontology and semantics are separate domains.

Expressions of Belief and Desire

Darwin investigates the expression of emotion, leaving out thought. He also says nothing directly about belief and desire, but we can attempt to fill that gap. Are there characteristic expressions of belief, disbelief, desire, and lack of desire (antipathy)? We can think of emotional expressions as the extended psychological phenotype of an animal: not just what is internal to the animal but also its outward manifestation in the face and posture—the total emotional complex that is subjected to natural selection. The proper scientific name of this complex trait is the Extended Expressive Psychological Phenotype (EEPP)—external expression (particularly facial) plus internal state of mind. Darwin gave us several EEPPs for emotions—can we do the same for belief and desire? I don’t see why not, though hard empirical data are scanty. Then we would have two types of behavioral manifestation for belief and desire: expressions and goal-directed actions. Folk psychology would recognize two forms of externalization for these twin pillars of the mind. You act on your beliefs and desires to achieve your goals and you also express these mental states in your face (and possibly other bodily parts). There is a bodily duality. The same would be true of other animals, and the package would be inherited (genetically coded). Psychophysical laws could be formulated, predictions made. There may be a divergence in the two types of bodily manifestation. So, what does this mode of expression look like (literally)? How is the face configured during periods of belief and desire?

First, we need to recognize that belief and desire are dispositional and do not manifest themselves in the face and body at all times. We are searching for expressions that correspond to belief and desire occurrences—upsurges, conscious episodes. What does the face look like when someone is actively believing and desiring? From my preliminary researches, we can assemble the following picture. For belief we typically have an open-eyed gaze, slightly elevated or unmoved eyebrows, a relaxed mouth, a slight smile, nodding, and a forward lean of the body—a receptive, unsuspicious look. In extreme cases, such as a religious gathering, we might witness more spectacular expressions—shouts, dances, uplifted eyes, an attitude of general excitement and pleasure. The look of the faithful, or the deeply convinced. For disbelief we have a narrowing of the eyes, an averted gaze, a lowering of the eyebrows, a furrowing of the brow, pursed lips, a downturned mouth, a shaking of the head, a look of distaste bordering on fear in some cases. This is the look of someone resistant to persuasion—a rejecting negative look. There might be actual rolling of the eyes, wrinkling of the nose, eyelids fluttering. These are the signs of agreement and disagreement, respectively, more or less vehement. They tell you what the person really thinks about the subject at hand. They may be highly attenuated and scarcely perceptible; they may also be intentionally suppressed altogether, though liable to assert themselves when no one is looking (we sometimes have every reason to keep our beliefs and doubts to ourselves). We have a yes-face and a no-face, an assent-face and a dissent-face. These faces are widely shared and sometimes human universals (as Darwin suggested for emotional expressions). There will be the usual mixture of the innate and acquired, the involuntary and voluntary, the instinctive and culturally conditioned. They tend to be processed at a subconscious level and are not usually explicitly articulated by the observer. We might just say “You look skeptical” or merely note that the interlocutor looks to be onboard or in tune with the brotherhood (or suitably brainwashed). The look on the face tells us all we need to know and we are skilled at face-reading (we don’t need a verbal commitment or long-term observation of the person’s non-linguistic behavior). The facial expression is a kind of shorthand, useful for knowing where we stand. It is a quick and easy way for belief to show itself.

What about desire? Here the situation is even clearer, because desire is close to emotion. The animal’s face and body will tell you what it wants and doesn’t want. It wants food but it doesn’t want confinement. A dog will show its desire to go for a walk with its tail, barks, and eager eyes; and its lack of desire for a trip to the vet or a hot bath. The characteristic expression of human desire is a focused, determined look, a look of anticipated (or actual) pleasure—open bright eyes, a salivating mouth (or some equivalent), a chomping at the bit (as when hungry and about to eat). An absence of desire (or actual antipathy) will show itself in a droopy, listless posture, open distaste, a disgust face, a faraway look in the eyes. It is easy to decode such signs for even the moderately competent social observer. Desire efficiently reveals itself, though here again there may be reasons for concealment, which can be more or less difficult. What a person says he wants may not fit what his body is signaling. We therefore have two epistemic routes to the mind—bodily expression and ordinary intentional action—and they may not tell the same story. But they usually do, so we have a kind of epistemic overdetermination. Thus, facial expressions can act as lie-detectors, because they can come apart from verbal declarations, especially in the case of children, who have not yet developed the skills of concealment. The best subjects for research are indeed children—we can examine (as Darwin did for emotion) the forms of expression children manifest when agreeing or disagreeing, desiring or not desiring. As adults, we tend to guard our beliefs for fear of interpersonal conflict, but young children are subject to no such inhibition—they let it all hang out (they are flagrant externalizers). Someone should make a study of Dissent Expression in Children—it might well follow a developmental schedule analogous to Piaget’s cognitive stages theory.

Philosophers have considered belief and desire from the point of view of the explanation of action. They constitute the reasons for action. But they have neglected the role of belief and desire in relation to expression—a quite different kind of bodily outpouring. People don’t (usually) raise their eyebrows for a reason; they just spontaneously do it (or their body does). This, too, is part of their nature—their nature in Nature, as it were. We might call it part of their animal nature, intending no disrespect—it is an aspect of their inherited biology. Even belief in elevated matters (morals, mathematics) has its bodily expression; the form of the face is part of what belief naturally is. The facial muscles, the eyebrows, the mouth—all play their part in broadcasting belief. Language is really a latecomer to the biology of belief; long before language the face was conveying someone’s state of belief. Where there is a face, there is belief, roughly speaking. Let’s not overintellectualize belief; let’s recognize its place in the biology of the organism. Darwin’s discussion of human emotion located it (partly) in the physiology of the organism, stressing its continuity with the emotions of other animals (thus producing incidentally a more enlightened attitude towards animals); I am doing the same thing with belief. The face, we might say, is the face of belief.[1]

[1] It is interesting how little the face has interested philosophers, given its centrality in human life. Even existentialists say little about it, let alone analytic philosophers. The brain, yes, but not the countenance, not the thing we gaze on every day, if only in the mirror, and try to interpret. The face fascinates but it doesn’t attract the attention of the typical philosopher. How does the content of belief (and desire) shape the facial expression? How much facial detail mirrors what lies within? What is meant by “expression” here? How would the lack of a face change our affective life? Would facial paralysis paralyze the affective mind? What is the function of expression? Would inversions of expression be possible (snarling in place of laughing, say)? Is the connection arbitrary or principled? How strong is the correlation between facial mobility and intelligence? What would it be like to have more than one face?

Dumbocracy

It’s official: we are now living in a dumbocracy (OED: “government by the dumbest”). We used to live in a democracy, but (as Plato predicted) democracy has an inherent tendency to degenerate into dumbocratic rule. The causes are somewhat mysterious (political scientists are baffled), but it is marked by the rise of ignorance, stupidity, and aphasia. In our case we have decided to go outside the human species for our governing elite. We have a muskrat in charge of federal employment (a muskrat is defined by Wikipedia as “a medium-sized semiaquatic rodent”): the Elon (short for “elongated”) Muskrat is known for its predatory behavior towards vulnerable animals. Then we have the Hegseth baboon, noted for its loud cries and general uncouthness. Also, the striped Gabbard bird that feeds on small insects and was once regarded as harmless, accompanied by a croaking and florid Krocodile over at Health. The obscure Homans Simpleton mainly sticks to chasing powerless newcomers around. And, of course, we have the apex scavenger, the greater orange-faced Grump—a kind of bequiffed holdover from the pre-Neanderthal era (previously thought to be extinct but apparently still with us). Some say he is morphologically similar to the definitely dead Adolf monkey, but most taxonomists now classify him along with mythical beasts that mesmerize idiots and fools. This specimen is now the head of our dumbocracy and is indeed ideally suited to the role: he can hardly construct a coherent sentence, but he has a world-class sneer and a vicious temper. He is supported by a horde of semi-human sycophants and swamp-dwellers that are terrified of his grunts and lunges. Dumbocracy is here to stay for the foreseeable future.

Expressing Mind

In The Expression of the Emotions in Man and Animals Darwin goes into great (indeed excruciating) detail about the ways emotions are expressed in the body—the face, the voice, the hands, the posture. He leaves no doubt that animals and man express their emotions in characteristic bodily configurations, particularly facial expressions. But he never discusses whether thought likewise has a bodily expression; in fact, he doesn’t even bring up the question. Why not? Evidently because it has none: thought itself does not contort the face in a specific way (the facial muscles remain relaxed) and particular kinds of thought don’t correspond to different types of facial expression. Only when thought proves difficult does it shape the human face, but not when it is proceeding unimpeded. A book called The Expression of Thought in Man and Animals would be very short, indeed non-existent. This is why we cannot read a man’s thoughts from his face, but we can detect his emotions this way. Thought is unexpressive. But that fact cries out for explanation: why is the emotional part of the mind so physically expressive but not the cognitive part? What makes emotion so prone to externalization but not thought? Why the link to muscles in the emotion case but not in the thinking case? (And why did Darwin never make this point?)

It might be said that Darwin’s text provides the answer: we inherited our emotions from animals that made use of the bodily expressions they produced, but we didn’t inherit our capacity for thought from them—so we didn’t get anything like the whole feeling-behaving package. If animal emotions didn’t come with natural expressions, then we would not have such expressions either—for they have no real utility in our lives. But thoughts originate with us, it may be said, and hence don’t carry any such animal baggage. There are two problems with this explanation: it is implausible that we did not inherit the capacity for thought from our ape-like ancestors; and the same question arises for perceptual capacities, and we surely inherited them. Apes think and believe and know, but they too show no sign of these mental acts in instinctive facial expressions or gestures: and we inherited this trait from them. And perception is generally not accompanied by distinctive facial expressions or other bodily signs: your face doesn’t automatically change when you stop perceiving, or perceive something else. You don’t have one face for seeing red, say, and another for seeing blue. Nor does your face change its expression when you close your eyes or unplug your ears. Yet we surely inherited these senses from our animal ancestors, going back a long way. The fact is that perception and cognition are intrinsically disconnected from musculature, but emotion is so connected, intimately. Emotions naturally and forcibly express themselves in the body, often in puzzling ways, but not so for thoughts and perceptions. This seems intuitively correct, but it is theoretically puzzling: what is the reason for this asymmetry? Your face goes a certain way when you are angry or fearful or disgusted without the intervention of will, but nothing like this happens when you are thinking about, say, the meaning of life (or what to have for lunch), or seeing a tree in the distance. Your emotion makes your body do such-and-such, but your thought or perception doesn’t make your body do anything. Cognition leaves your body alone, but emotion messes with it (often pointlessly). It can be hard to hide your emotions, but your thoughts are naturally hidden (and can be difficult to reveal). In this sense thoughts are private and emotions are public—but why?

Can we imagine inverting the two—could there be beings whose faces contort with thought but not with emotion? Logically, that seems conceivable; humanly, it seems strange. Isn’t it in the nature of emotion to seek expression, but not so thought? Here is a possible explanation: emotions are episodic and transitory but thoughts and perceptions are always with us. Emotions motivate animals to act and they come and go with the events surrounding the animal, but animals are always perceiving and thinking (except when asleep). If there were a facial signature of perceiving and thinking, it would always be there—you would always be making a thinking and perceiving face (knitted brow, puckered mouth, perhaps). That seems like a complete waste of biological resources; better to let the face relax while you think and perceive. But emotions are responsive to the passing show and hence demand action—say, flight or aggression or consumption. Emotions are mental states we act on in the struggle to survive, but we don’t need to act on our every thought or perception. Hence, emotions are motoric, but cognition isn’t; cognition informs and guides action rather than peremptorily prompting it. Fear of a looming lion makes you run, but thinking about a lion doesn’t make you do anything. There is something to this explanation, though it needs some spelling out; but it doesn’t touch the hard question—namely, what is it about the nature of emotion, but not about the nature of thought, that suits it to have its behavioral role? Why is emotion built to be expressed, but thought isn’t? And how does emotion succeed in shaping behavior whereas thought does not? How come emotion is active in this way but cognition is passive? As we know from Darwin, emotion is elaborately expressive, dedicated to bodily manifestation, but other aspects of the mind are unconcerned about expression, expressively indifferent. The Cartesian mind is cut off from the muscles as a matter of its intrinsic character, but the affective mind (the Darwinian mind) is closely bound up with the muscles—indelibly movement-oriented. Darwin lovingly details the manner of this expressive dimension, but no such project attends the consideration of other aspects of the mind. True, animals can communicate their thoughts and perceptions in bodily action, especially if they have a real language, but this is not the same as the expression of emotion, which is not generally communicative. Having a behavioral effect is not the same as having a behavioral expression. Nor would it be correct to describe general behaviorism as an expressive theory of the mind in the sense of “expression” intended by Darwin. In that sense we are dealing with an instinctive, habitual bodily correlate, not with an intentional action that may be withheld at will. This is why we can be surprised at the way our body is behaving under the influence of emotion; it takes close study to see how your eyebrows are behaving when you are experiencing certain emotions. Emotions reflexively produce bodily expressions of specific types, but the same is not true of thoughts and their behavioral effects (saying, for example, “I was thinking about going shopping”). So, the puzzle remains: why the difference? There is a kind of dualism at work here, but its rationale remains obscure.[1]

[1] What is called belief-desire psychology is completely oblivious to the distinction I am drawing, and has nothing to say about the kind of expressiveness in action that Darwin is interested in. Habitual facial expressions are hardly intentional actions, yet they are clearly things done. The agent does not have a reason to perform these actions (the body performs them). Emotional expressions and intentional actions may both be caused by states of mind, but it would be wrong to assimilate the two. The philosophy of action should really fall into two parts: the philosophy of intentional reason-based action, and the philosophy of instinctive expressive action.

Anger

I read with interest Darwin’s discussion of anger in The Expression of the Emotions in Man and Animals (I am very familiar with that emotion, unfortunately). He discusses in detail the expression of anger in bodily posture, hand gestures, and the baring of the teeth. It made me think of Melville’s Billy Budd, a story of accusation and reaction (in case you haven’t read it, Billy is completely innocent of Claggart’s evil accusations). The climax is reached in this terse passage: “The next instant, quick as the flame from a discharged cannon at night, his right arm shot out, and Claggart dropped to the deck”. The blow kills Claggart—“A gasp or two, and he lay motionless”. Asked to explain his action (it is a capital offense), Billy replies: “I am sorry he is dead. I did not mean to kill him. Could I have used my tongue I would not have struck him. But he foully lied to my face and in presence of my captain, and I had to say something, and I could only say it with a blow, God help me!” Billy is duly executed for the crime, to no one’s satisfaction. Melville’s description is psychologically apt and could be added to Darwin’s list of physical symptoms of anger: Billy’s arm “shot out” as if automatically; his voice impediment prevented a verbal denial, so his motor system took over; its effect exceeded what Billy wished; it was quite predictable given the circumstances. This is what anger, justified anger, moral indignation, may lead to, especially in response to the malicious lie (it is surprising Claggart didn’t anticipate it). A primitive response of Billy’s nervous system triggered his lethal action as a kind of reflex—childlike, maybe simian. Anger obviously has deep roots in the animal mind and excites extreme expression. It is not easily managed. It is wise not to evoke it in others. Its surest cause is evil. Billy Budd’s young life is cut short by the laws of emotional expression in the human animal. Claggart got what he wanted, though he paid with his own life.[1]

[1] I discuss Billy Budd in more detail in my Ethics, Evil, and Fiction (1997), chapter 4, “The Evil Character”. Billy is a naïve young man, full of life and promise, a “bud” of sorts, soon to be nipped by an evil authority figure.

Animal Respect

The modern animal rights movement is now over half a century old. Someone should write a history of it. I will venture some alternative history. I was in on it from the beginning, but not at the very beginning. That can be traced to the book Animals, Men, and Morals, edited by John Harris and Rosalind and Stanley Godlovitch, published in 1971 (followed a few years later by Peter Singer’s influential Animal Liberation).[1] I thought at the time that the arguments (and facts) there proffered would be soon adopted by philosophers and interested others. But they encountered resistance from an array of moral philosophers and had to fight for recognition. This was disappointing for us activists. Still, some progress was made, but more slowly than was hoped and expected. What was once regarded as eccentric, even barmy, gradually became mainstream and respected—that was something, given the prevailing attitudes in those early days (the word “vegan” was unknown back then). But suppose things had gone differently: suppose much greater progress had been made, and large sections of the population had got the message. Let’s imagine that in a few short years the majority of people had seen the light: meat consumption was way down, there were vegetarian restaurants everywhere, fashionable people wore Animal Liberation T-shirts, etc. The arguments were so strong, so clear, so unanswerable, that most people went along with them, acting accordingly. But suppose a minority of people refused to join the majority—they stubbornly clung to the old ways. Suppose this minority were geographically separated from the majority—the north of England compared to the south, say. By this time the issue had gone political: politicians campaigned on a platform of animal rights, or animal non-rights. You were either pro-animal or anti-animal. Things could get heated, tempers flared, the country was polarized. The government, duly elected, tried to impose new laws regarding animals, outlawing the practices of the north (e.g., factory farming). There was talk of making meat-eating illegal. The minority were being pressured to conform, and they didn’t like it (they were morally wrong, but politics is another matter). Suppose they put up armed resistance and even intensified their animal abuse (as it was seen in the south). There was a danger of civil war; already there was a fair amount of violence and social unrest. Families were split, friendships shattered. There might even have been a civil war fought over the issue, with mass casualties. It might have spread to other countries. This could all have happened, if the original architects had got their way—remember that, according to them, our treatment of animals is an atrocity, comparable to other historical atrocities. The result might have been victory for the abolitionists and an ethical society where animals are concerned. What if all the philosophers, along with other intellectuals, had been persuaded by the animal liberationists, and this had accelerated the spread of the new ideas? To many of us at the time it was surprising that this didn’t happen—because a lot of otherwise sensible people were simply not having it. To them animals had no rights, no moral standing, were made to suit our human purposes (the animals should be glad of factory farms, or else they wouldn’t exist at all!). It seems historically contingent that the scenario I sketched didn’t occur—all-out civil war. For people are apt to be vehement on the question and refuse to budge—the arguments I have had!

What did happen was different—a kind of slow diffusion. Steady progress, piecemeal reform, a general raising of consciousness. In my lifetime there has been a transformation on the issue. It is amazing now to find a vegetarian section in the supermarket. Perhaps we are lucky that more people didn’t instantly convert back in the early days, or else a societal split might have been the result. Moral progress is apt to be slow, and that may not be a bad thing, all things considered. It takes time for the human mind to adjust, for the moral truth to sink in. The Animalist Revolution never occurred, so we were spared its potential convulsions. Yet progress was made and no doubt will continue to be made. What if artificial meat becomes more widely accepted, tastier, cheaper, healthier, and less environmentally damaging than natural meat? Then we might see a gradual phasing out. We would have a de facto victory of the ethical over the unethical. Compare the issue of slavery: suppose opposition to it had never reached the critical mass necessary to trigger the Civil War (whose consequences are still visible today), so that that war never occurred. Suppose instead that slavery simply withered away as technology developed, people grew more enlightened through education, etc. It would take longer to achieve the right result, but at least we would be spared the violence of a full-on civil war. This is speculation, of course, but you see my point: in the case of animal rights, we never got a civil war over the issue, but we could have. Wars have been fought over less. Historical change has not (hitherto) required anything so disruptive or deadly. I don’t doubt that if animals were capable of joining humans in bringing about better treatment, we would have had something like a civil war, because the issue is polarizing. Actually, great progress has been made in the ethical treatment of animals since (say) the nineteenth century, thanks to an enlightened few (the myopic majority will always be with us, regrettably). Overall, I’m quite pleased with the way things are turning out for animals, compared to the bad old days—though I would be the first to agree that progress is painfully slow and halting.[2]

[1] I am not counting such works as Anna Sewell’s Black Beauty (1877), the greatest animal rights book ever written.

[2] Enough time has passed for there now to be many second-generation vegetarians (I met one the other day) for whom an enlightened attitude towards animals is second nature; these people are the ones to look out for. Times do change.
