Freud Generalized

Freud’s psychoanalytic system was built around the idea of sexual repression. Sexual taboos expressed as societal pressures lead to the repression of the sexual instinct, resulting in distinctive psychological consequences. These include: neurosis, sexually charged dreams, dirty jokes, the artistic drive, and a general feeling of malaise. The basic mechanism is the repressive act applied to sexual desires. This leads to sublimation and symbolic release. The libido is distorted and thwarted, affecting mental health. So the story goes. The question I want to ask is whether this theory can be applied to other sorts of instinctive desire—specifically, the desire for food. As things stand, there is not much of a taboo about eating: we can eat what we like when we like with no shame attaching. There are exceptions such as dietary prohibitions of various sorts: kosher food, not eating cows, not eating meat in a vegetarian household. But they are not extreme enough to match the kind of sexual repression Freud was talking about, so I will need to invent a thought experiment. This is not difficult: picture a civilization that enforces many kinds of food prohibition, with shame and punishment as deterrents to indulging one’s natural food preferences. Suppose hot food is prohibited, perhaps for religious reasons, along with apples, oranges, and bananas. No butter on bread, nothing spicy. People desire these things, but it is deemed sinful to even think about them or talk about them. Children are brought up to feel shame about such desires and are punished for indulging them (buttered bread is deemed particularly heinous, the sign of a corrupt soul). Religion gets in on the act, predicting hell for violators. It’s a heavy trip, man. Meanwhile, we can suppose, sex gets a free pass: here you can do whatever you want—promiscuity, masturbation, even a bit of incest. Anything goes—you are actually admired for your sexual “impurity”. There is no sexual repression at all. 
According to Freudian theory, none of the consequences of sexual repression will apply in this society: no sexual neurosis, no sex dreams, no dirty jokes, no artistic sublimation, lots of erotic happiness. However, again according to Freud, food consumption will be surrounded by the untoward effects of repression, because repression attempts to control instincts that strive for free expression. You badly want butter on your bread, or some hot soup, but your society abhors such culinary sins—you were spanked as a child for breaking these rules and would be despised as an adult if you succumbed to said desires. Consequently, you are a food neurotic, plagued by food dreams, always telling “dirty” food jokes, and feeling pretty lousy in the eating department (all that not getting to eat what you want to eat). You have a seething food unconscious, pressing for release. The basic psychological law that Freud discovered (allegedly) is that repression necessarily leads to such symptoms of self-denial. Desire seeks release (hydraulically so) and any attempt to thwart it spills over into untoward psychic perturbations. That is just the way the human mind works: it is a psychological law that repressed desire produces the kinds of effects noted. So, repressing food desires will produce the same kinds of effects as repressing sexual desires, should it occur. It doesn’t occur much with us, but it could occur in a possible society; in this society there will be a need for psychotherapists to work on people’s repressed culinary desires. And the same mechanism will work on any natural desire that is thwarted and repressed—even the desire to pursue one’s scholarly interests in peace. You might be publicly shamed for working on the mind-body problem, for example (so you only do it in your dreams in a disguised form). Freud’s theory is not limited to sex but applies to any kind of desire that receives the taboo-repression treatment. How could it not?

How should we respond to this point? One response would be to say, “How interesting, Freud’s theory could apply to the case of food, with no loss of plausibility!” It was only contingently about sex, despite appearances. On another planet, it might be about food, or even scholarly interests. A second response would be to say that there must be a difference between the food case and the sex case, because it doesn’t sound right to extend it to the case of food. How could food lead to such drastic psychic deformations? Wouldn’t food prohibitions just lead to a lot of conscious discontent, not the formation of a culinary unconscious with its attendant psychological ramifications? So, there has to be a difference between the two cases—there has to be something special about sex. Is sex perhaps the stronger desire, the more pressing? (Try telling that to someone who has been fasting for three days.) Thirdly, it might be concluded that Freud’s theory has to be wrong about sex precisely because it is clearly wrong about food. The cases are exactly parallel, but surely the Freudian consequences would not obtain in the food case—just a lot of griping and rule violation and black-marketing. I incline to this third view, but we don’t have any solid empirical data, so I must remain agnostic, as a good scientist. Still, I am morally certain that culinary Freudianism stands no chance of being true, though it ought to be true if Freud were right about sex. That’s not how the mind works.[1]

[1] It might be said that there is a deep difference between the food case and the sex case, namely that sex is inherently shameful while eating is not. Sex should be repressed, but not so eating. Extreme puritans would contend as much. Thus, sexual desire is necessarily apt for repression, even without societal pressures. We need to suppress our sexual desires or they will devour us, wrecking civilization. The conscious mind cannot bear the weight and fire of human eros, so the formation of a repressed sexual unconscious is entirely natural. That is not a Freudian view, nor the current opinion on such matters; but if there exists a deep acceptance of it in the human psyche (whether true or false), that would explain our asymmetrical attitudes towards sex and food. For it does seem odd that we are so ready to believe the Freudian story about sex but smile at the idea of a food unconscious, or a scholarly unconscious. Sex seems special, but exactly why is unclear. Is there something intrinsically evil about sex, but not about food? Does violence, say, lurk at its heart, or contempt, or competition?

Perceptual Intuition

Perception and intuition are usually opposed to each other: what is perceived is not intuited and what is intuited is not perceived. The senses perceive and reason (intellect) intuits. We know material objects by perception and abstract objects by intuition. Empiricism declares perception to be the basis of knowledge (and the criterion of existence); rationalism declares reason to be the basis of knowledge (and the measure of reality). Intuition and rationalism go together; perception and empiricism go together. There is a grand dichotomy: intuition doesn’t encroach on perception’s territory, and perception doesn’t encroach on intuition’s territory. It is true that Kant spoke of sensory “intuitions”, but he is an exception—and anyway he didn’t mean to claim that perception is a species of rational intuition in the style of classical rationalists (what he did mean is open to interpretation). The pivotal point is that perception and intuition have been taken as separate and distinct—indeed, as mutually exclusive. I will argue that this is wrong, deeply so.

The operative considerations are not unfamiliar, but their significance has not been fully appreciated. A seen object presents a surface to the eye; it doesn’t present all its surfaces (or its interior). It has a front and a back. The back is not visible. The viewer has no sense-datum of the back side of the object. Yet the hidden side doesn’t go unrecognized; it isn’t omitted from the viewer’s total perceptual experience. He knows it is there, as much as the facing side. We might say that he has a sense of it but not an impression. A being with eyes on stalks that can view the object from all angles would have an impression of all aspects of the object; there would be no need to fill in the gap with…what? What word should we use? Should we say postulation, or imagination, or hypothesis, or inductive reasoning? These all sound too intellectualist, too deliberate, though not without a certain suggestiveness—an extra mental act has to be performed beyond mere retinal responsiveness (the proximal stimulus). The given must be supplemented somehow. I think the best word is intuition, defined as follows: “the ability to understand something immediately, without the need for conscious reasoning”; “instinctive” (OED). We can paraphrase this as, “the ability to know something instinctively without explicit rational thought”. The emphasis is on the pre-rational, automatic, pre-conscious, implicit, unreflective, non-conceptual, taken-for-granted, primitive. Little children can do it, also animals. It is probably largely innate. Clearly, it is vital to survival (you have to know that things have unseen sides). We can add it to Russell’s “knowledge by acquaintance” and “knowledge by description”—this is “knowledge by intuition”. We are not acquainted with the back sides of objects, but neither do we infer them by discursive methods (“the hidden side of the object whose front side I am now acquainted with”). 
Intuitive knowledge is something different from acquaintance knowledge and descriptive knowledge, as conceived by Russell. It is a step up from mere acquaintance and a step down from conceptualized description. Thus, we can say that ordinary perceptual experience involves intuition as well as sensation. In fact, it would be possible for intuition to exist in the absence of sensation, as in a case of blindsight: someone might intuit an object by means of the eyes and be incapable of receiving visual sensations—seeing would be nothing but intuiting. As things stand, however, seeing is an amalgam of the two—part sense-datum and part intuitive apprehension. It has a kind of dual intentionality. Intuition is integral to perception, pace the empiricists. We might say that they were guilty of an aspect-object confusion: they thought the perceived object was nothing but the aspect presented, forgetting that objects are seen as having hidden aspects too. The perceptual is infused with the intuitive, and to that degree overlaps with other sorts of intuitive apprehension, as with apprehension of numbers. A being with all-inclusive vision (eyes on stalks) might view normal human and animal vision as decidedly in the intuitive category along with other varieties of intuition, and to that degree epistemically suspicious. What is this “intuition” that outstrips good old-fashioned seeing—the kind where you have sensations of the whole object? Now that is the type of seeing you could rest a whole epistemology on! That would be true empiricism, not this semi-intuitive nonsense—what even is intuition? For these beings, there is only pure sensation (acquaintance) and conscious reasoning therefrom, not a peculiar kind of intuition that steps in to take up the perceptual slack. These beings are hyper-empiricists, the genuine article.

How about mathematical intuition? This is a large subject, but let me focus on thinking of the number 2. We don’t normally think of this as a type of sense perception; we think of it as pure intuition—“mathematical intuition”. It could be performed in the complete absence of any sensory materials. But in the human case this is clearly not so: such intuition comes surrounded by sense perception. I think of the number 2 when I see or hear the numeral “2”, or when I see two chickens cross the road. Could I think of that number in the absence of any such experience? It’s hard to say, given that we are deeply sensory beings—though not exclusively so. As a matter of psychological fact, we think of numbers with the aid of sensory material—as we perceive objects with the aid of intuition. Numbers present themselves to our mind under a dual guise—abstractly and concretely, rationally and perceptually. In particular, numerical symbols play a vital role in mathematical thought, which is why advances in mathematical notation were advances in mathematics itself. There is a reason why mathematical formalism is an attractive doctrine and Platonism feels like a stretch—it is easy to commit a sign-object error in mathematics. It is as if numbers come disguised as numerals and we have to see through the disguise. Thus, mathematical intuition is drenched in sense perception; it is partly “empirical”. The end is abstract but the means is (partly) concrete. We don’t apprehend numbers in the complete absence of sensory experience. Pure rationalism is not psychologically realistic. Thus, in the human case intuition needs perception, as perception needs intuition. This may not be the ideal situation, epistemologically, but it is the actual situation.

Accordingly, classical empiricism is not true and classical rationalism is not true. We need a mixed epistemology. What should we call it? Rational empiricism? Empirical rationalism? The trouble is that both terms are tainted with the same mistake, i.e., exclusiveness. The point of the view I am suggesting is that traditional epistemology is too dichotomous, so we need a more inclusive unitary label. We could try “intuitionism” but that already has an established use and fails to capture the sensory element. I have toyed with “quasi-intuitionism” and “experiential intuitionism”, but for various reasons don’t like them much. The best I have been able to come up with is “general intuitionism”: it captures the idea of extending intuition into the theory of perception, thus unifying epistemology; and it echoes Einstein’s “General Theory of Relativity”, with its attempt to integrate apparently different domains. The thought is that intuition (in the sense defined) is more ubiquitous than has been supposed, more fundamental; it’s everywhere. Perception is not an intuition-free zone, capable of standing apart from other areas in which intuition has been taken seriously (mathematics, ethics, logical analysis). We don’t need to preach the prevalence of perception in the theory of knowledge; it has had enough propaganda on its behalf already. So, “general intuitionism” it is. The idea is not to claim that the concept of intuition will reduce the field of knowledge to something more natural or better understood; there is plenty about intuition that is obscure and ill-understood. But it is real and theoretically indispensable. It is a biological fact about the human (and animal) mind, akin to creativity and problem-solving. It is obscurely linked to imagination. In epistemology, it serves to overcome a simplistic dichotomy that has plagued the subject—the dichotomy between sense perception, on the one hand, and rational thinking, on the other.
These are not as disjoint as has been supposed, though they are clearly different in many ways.[1]

[1] Intuition was not a concept in good standing with classical empiricists and rationalists, because neither theory can find room for it within its official platform. Empiricism finds it an embarrassment on account of its non-sensory character—it seems like an upsurge of rationalism at the heart of perception. Rationalism doesn’t care for its instinctual animal character, its bypassing of conscious calculated reason—an upsurge of the primitive in the rational soul. Intuition makes man an intuiting being as well as a thinking being—a sort of spontaneous leaper in the dark. How can it be rationally defended? Knowing things intuitively seems to the rationalist like not knowing them at all—a kind of guessing. The empiricist, for his part, balks at the foundation of knowledge presupposing resources not derivable from brute sensation. Thus, empiricism and rationalism are constitutionally anti-intuitionist. Intuition represents an epistemological viewpoint alien to them both. To me it seems like a rich field for future investigation, not something to either contemptuously discard or flakily celebrate. I look forward to the Journal of Intuition Studies.

Embarrassed Empiricism

Let empiricism be the doctrine that all reality is observable, in principle if not in practice (that last qualification covers a multitude of sins). There is no reality but observable reality, i.e., what is perceivable by the five human senses, particularly vision. This is surely the main dogma of empiricism. The doctrine can be weakened so as not to claim that all reality is observable, but only certain sectors of reality, such as material reality. Historically, Plato may be viewed as the arch foe of empiricism, since he held that REALITY is never observable: the world of Forms cannot be perceived by the senses. What can be revealed to the senses is not real but merely apparent. For him, reality and the senses are disjoint domains. Aristotle moved away from this in the direction of empiricism, and later philosophy followed suit. Modern empiricism contends that what is real (really real) is what the senses present to us; the more removed from the senses the less real things become (“logical fictions”, “posits”). Knowledge rests on observation, so that in its absence knowledge becomes questionable. Meaning, too, is supposed to depend on sense experience. Almost everyone these days is an empiricist in this sense: reality and observation make contact at some point and generally overlap. No one thinks that reality is never observed; no one believes that the touchstone of reality is imperceptibility. That would be a radical anti-empiricism: to be real is to be not observable. I am going to argue that this doctrine is actually true.

Empiricism has always been in retreat from its main tenet. An exception had to be made for experience itself: it is not observable (the unobservability of observation). We don’t see seeing—in ourselves or others. Yet it is deemed real by the empiricist. How could it not be if it is the test of reality? But it violates the main doctrine: it eludes the senses. Locke accepted that the minute corpuscles that constitute matter are not perceptible yet are entirely real. In Berkeley’s system spirits, finite and infinite, are not objects of perception, but are ontologically fundamental; and ideas are mental entities, and hence not observable by means of the senses. Hume thought we had no sense impression of causal necessity, but he didn’t think this counted against its reality. The world is one thing, sense experience another. The logical positivists made verification the measure of meaning (and hence existence), but allowed for indirect verification of various kinds—the past, the future, the remote, the microscopic, the dispositional, the counterfactual. Many things are real but don’t admit of direct observation, even in principle. Newtonian gravity supplied an instructive case: it was real and scientifically basic but completely unobservable (hence “occult”). We can perceive its effects but not the force itself. Indeed, the concept of force, central to physics, violates the empiricist principle (like the concept of law), because forces are not potential objects of perception—and were therefore viewed with suspicion by empiricist physicists. In fact, all the postulates of physics have slowly moved into the category of the unobservable, notably fundamental particles. How often have we heard that atoms are not little solid extended objects but packets of energy, nodes in a force field, mere potentialities? It is not just the fact that they are tiny that makes them hard to touch and see but rather their inner nature—they are not the kind of thing our senses can latch onto.
Yet they constitute the things we can observe (as we naively suppose). Thus, we get Eddington’s two tables: the table of commonsense and the table of physics—the latter being the true reality. The table of physics is held not to be observable at all. Nor is any so-called physical object, even very big ones. In other words, science itself, an empirical discipline, has concluded that the physical world is not observable, except indirectly and misleadingly. It may cause our inner perceptions but it isn’t perceptible—revealed to sense perception, transparently given. The manifest image and the scientific image have fallen apart—the former is not an accurate guide to the latter. Hence, the real is not coterminous with the observable; it is purely “theoretical”. The objective physical world is not, inherently, an observable world. Much the same was held by sense-data theorists: we observe sense-data but we don’t observe their external causes; but these causes constitute the physical world as it really is.

We have now reached a rather startling conclusion: nothing is observable! We knew that mathematical and moral reality are not observable, and we are easily persuaded that minds are not observable,[1] or causation, laws, necessity, time, space, the infinite—but now we are told that nothing physical is perceptible either. This is not conducive to empiricist principles. The real is the opposite of observable; the touchstone of reality is unobservability! It may be replied that this must be an overstatement, because we can surely observe trees, mountains, animals, our own body. But no, these things are all made of unobservable entities, so are not really perceivable as they objectively are. They are illusions of a sort—products of our senses and mind. They arise from the interaction between mind and world; they don’t exist in objective reality but are a kind of projection. There are no colored objects in objective reality, or solid objects, or objects that persist through time whole and entire; there are just collocations of basic unobservable particles. Again, these are familiar reflections; my point is that they fly in the face of what might be called “naïve empiricism”. According to the world-view just outlined, the real is unobserved while the unreal is observed. Taken together, we obtain a picture of reality radically divorced from the human senses—absolutely nothing is observable in the sense that empiricism supposes (though we may allow for a looser use of “observation” to describe our evidence-seeking activities). Perceptions may be signs of real things, but they are not of real things (except in a weakly de re sense). Our senses do not reveal or display or describe or picture reality as it is in itself; they merely provide simulacra, correlates, indications. This is not just the old point that all observation is theory-laden (which is probably false); it is the more radical claim that observation and reality don’t match, dovetail, coincide.
Reality is not such that the senses can get a firm grip on it (perhaps a slippery grip is possible). This is a general—indeed, universal—truth about reality: it is never given to the senses, never the direct object of a perceptual act. Our senses, derived from the senses of our animal ancestors, are just not set up to deliver reality as it really is. They are not veridical in the required sense: they don’t disclose things as they are in themselves; they distort and mislead and under-describe. No doubt there are solid evolutionary reasons for this. There are empirical reasons why empiricism is unlikely to have much truth to it. In fact, it is plausible to suppose that empiricism is a remnant of a prescientific religious age in which the human mind is tacitly understood by reference to the mind of a supposed omniscient supernatural being. If empiricism were true, theism would have to be; but as it isn’t, it ain’t (as the saying goes).

It might well be protested that this leaves us bereft. A generalized rationalism cannot replace the lost empiricism, because rationalism can’t cover the region we think of as “empirical” and was never intended to. Our knowledge of the empirical sciences can’t be founded on rational intuition alone; it needs the senses to deliver evidence. It is just that evidence cannot be conceived as veridical revelatory observational episodes. So, we have no satisfactory epistemology to speak of. Evidently, we have to conceive of the relation between experience and fact differently—not as revelation but as correlation (or something like it). Experience is correlated lawfully with reality, permitting us to draw inferences from the one to the other. It isn’t that the objective world is entirely noumenal; it just isn’t perceptible in the way classical empiricism supposed (it could be perceptible in other weaker ways). Appearance and reality correspond, but they don’t coincide. No doubt there is an element of mystery about this correspondence relation, but mystery is better than error (as the prophet said). Empiricism is far too optimistic about the world-experience nexus—far too unmysterious—but we may have to accept that its apparent clarity is delusive. The structure of human knowledge is much like that envisaged by Plato, which is not surprising given the affinity between Plato and Kant (as recognized by Schopenhauer): reality as essentially imperceptible by the senses, the built-in limitations of sensory observation, yet a mysterious correlation between the two. The difference is that Plato’s Forms are replaced by unobservable entities of various kinds—the whole of reality in fact. We might call this “scientific Platonism”, in honor of that towering anti-empiricist. Basically, the senses are stuck in a cave. We have to reason our way out of the cave in order to make epistemic contact with reality. The senses can do nothing without that kind of reasoning to back them up. 
That is the big picture, the grand vision—with seventeenth century empiricism and its aftermath a mere historical blip. We have to reinvent Plato.[2]

[1] They are not observable in others and not observable in ourselves. Introspection (whatever it may be) is not a form of observation—we don’t apprehend our own mental states with our senses.

[2] It appears to me that philosophy has been retreating from seventeenth century empiricism since its very inception—yet desperately trying to hang on to its core dogma. All of it has been shaped by the legacy of empiricism and the retreat from it—epistemology, metaphysics, philosophy of language, philosophy of mind, ethics. Thus, the work of Russell, Frege, Wittgenstein, Moore, Quine, Strawson, Ayer, Austin, Kripke, Davidson, Dummett, Chomsky, Popper, Husserl, and many others. The time has come to abandon it entirely, give it the boot, stop trying to salvage it, acknowledge its utter bankruptcy. It has exercised far too powerful a hold over the philosophical (and scientific) imagination for too long. We need a post-empiricist philosophy, one not obsessed with that outmoded school of thought devised some three hundred years ago in the British Isles (hence British empiricism—all too British). It isn’t compulsory; it isn’t a religion. Whence does its hold derive? Probably from the primitive feeling that sight and touch are our primary ways of relating to reality, particularly the mother. We can’t let go of the feeling that if we lose empiricism we lose our mother, our ultimate source of security in an alien world. Surely, she is directly observable! Surely, she is the basis of all that is good and wholesome and life-saving! Something like this anyway, because the psychological roots of empiricism clearly run deep. Losing empiricism is uncomfortably close to maternal deprivation. Our brain is naturally set up to accept empiricism, implausible as it may be. We have been imprinted on it.

George Soros and Me

George Soros is now 92 years old. I first met him, at his invitation, at his home in Bedford, New York, in 2007, when he was one year older than I am now. It came about as follows. Robert Silvers, then editor of the New York Review of Books, had asked me to comment on an article written by Soros that had some philosophical content. I did so, dashing off twelve points one morning: there was a lot wrong with the philosophy, though the rest of the article was okay. Silvers accordingly turned it down. A couple of weeks later I received a handwritten letter from Mr. Soros asking me if I would like to visit him in his home to discuss the matter. Mainly out of curiosity about the man, I agreed; it wasn’t the philosophy that intrigued me. I told his assistant that I wanted to fly first-class with my wife, on the principle that if a billionaire wants you to come to see him for his benefit, he shouldn’t just provide economy. He obliged. I knew little about him except that he was a wealthy financier. Meanwhile he sent me a copy of a book he was writing which went into the philosophy in more depth. It wasn’t very good, though the central idea of reflexivity wasn’t mistaken (more on this later). I rather dreaded our conversation, because I didn’t know how such a man would react to stern criticism. The meeting was scheduled for the following afternoon, after tennis in the morning followed by lunch. He struck me immediately as welcoming and jovial (to a degree), ready for honest discussion. I remember remarking that the only other property I could see from his house, which commanded a lofty view, was a single house in the far distance. He replied that that also was his house, now occupied by his ex-wife, and was where we would be playing tennis on his indoor court (he also had an outdoor court next to his current house, which he seldom used).
The present house, previously owned by Michael Crichton, had been bought by his ex-wife, probably with an eye to divorce; it was she who had acquired the paintings by Picasso, de Chirico, Chagall, and others. He observed to me at one point that he gives away five hundred million dollars a year.

We duly played tennis (I with a coach) and had lunch. The Chancellor of Austria attended with a security detail, for reasons George couldn’t explain. At one point Al Gore telephoned. The philosophical conversation went well: he was receptive to correction, easy to talk to, obviously intellectually able. I think he made notes. The rest of the weekend went swimmingly. I got to know his exemplary butler Howie (tall, Canadian, nice), whom I would see a lot more of in future. We parted amicably the following day. The next time we met was in St Barts just before Christmas and soon after our first meeting, where George had rented a house. There was a lot of tennis, windsurfing for me (arranged by Howie), fancy dining with other guests, and some private philosophy talk. I remember a restaurant in which table dancing was routinely performed—all joined in, including George. It was good fun. Then in the summer following he invited us to his Long Island home in Southampton, again with other guests. I had a tennis coach to myself (Ziggy), Howie was in attendance, the food (French chef) was excellent, we went to lunch with Tom Wolfe and others. It was all exactly as you would expect. This became a regular invitation and even started to bore me a little (but I always had Ziggy). At my suggestion, Martin Amis and his wife Isobel were invited to come over for dinner one day. I played tennis with Martin on the private court. George reported that he didn’t get on too well with Martin but enjoyed talking to Isobel. On one of these visits George told us a joke: “A Hungarian and a Romanian will both sell you their mother—but the Hungarian will deliver”. He put up some resistance to gay marriage, despite his progressive tendencies, in opposition to my urging (Obama was also slow on the point). We became friends. I also stayed at his Fifth Avenue apartment (palatial is the only word) a couple of times; he didn’t know how many rooms it had and noted that it was “too big for one man”.
We went to St Barts at Christmas a few more times, which also began to grow tedious for me, especially since the tennis was scarce and the restaurants rigorously French. In any case, life with George Soros became a fixture, a habit, part of my normal existence. I was part of his close circle. When people asked me how I knew George, I would say I was his mentor, and he didn’t demur. We had many lengthy intimate conversations. There was a good deal of mutual respect and affection.

A few years into our relationship he asked me to accompany him to Budapest and introduce the first of a series of public lectures he was giving there. He flew me and my wife over, business class, and put us up in a fine hotel. His future wife Tomiko, whom I already knew well, was also there. He asked me to comment after the lecture (broadcast all over the world) on his concept of reflexivity—we had talked about this a good deal, with me trying to impose clarity on his messy formulations. I discovered after a little research that the same idea, called the “Oedipus effect”, had been stated by Karl Popper in The Poverty of Historicism; Popper had been George’s teacher at the London School of Economics and was an acknowledged influence on him.[1] Evidently this concept had been absorbed by Soros as a young man and its origin forgotten. Somewhat horrified, I informed George of this fact on the very morning he was due to give his lecture and I to comment on it. Crestfallen is the word I would use to describe the look on his face when he read the relevant passage from Popper’s book (he was in his dressing gown in his hotel room). It was then my solemn duty to point this out after George had proudly spoken of his “discovery” of reflexivity (his name for the same idea) in his opening lecture. The air was thick with tension. Fortunately, I had devised a face-saving way out: I asked George to explain what he had added to the original idea. He replied that he had applied it to the financial markets (with spectacular success), which Popper had not done. This reply saved the day, though the moment was pretty excruciating. I also made up an impersonation of George’s style of lecturing that greatly amused his fiancée (“At this point I was making a billion dollars a day, but this did not satisfy me, so I decided to change the world”). In any case, none of this affected our friendship, though I suppose it put a big dent in George’s intellectual aspirations. 
In the book that came out of the lectures he wrote in the preface, “I owe a debt of gratitude to Colin McGinn…for clarifying certain philosophical points”. I might restate this by saying that I saved him from several embarrassing philosophical errors by criticizing what he had sent to me to read and comment on. And let us note that I was never paid for any of this time-consuming work. We carried on seeing a good deal of each other. I went to his extravagant 80th birthday party and also to his even more extravagant wedding to his current wife Tomiko. It was at this latter event that I told George’s joke to Al Franken, who unsmilingly pronounced it “a good joke”—an encounter with an eerie sequel. I also at this time became good friends with George’s youngest son Gregory, about whom I will not speak further, except to say it was a balm to me in future years.

Not long after this I faced the allegations that became public knowledge. I cannot make any comment for legal reasons on the merits of any of this; suffice it to say that none of it affected my relationship with George. He even offered to write a letter in support of me. However, six months after the initial contretemps the matter became public, even appearing on the front page of the New York Times. The annual invitation to Southampton was not forthcoming. Nor was it ever repeated. Nor have I seen or talked to George Soros since that time (2013). I was completely cut off. No reason was given. Nothing was explained. I happened to speak with Howie once, but he could offer no explanation to me (not permitted in a butler). (I was, however, still friends with Gregory.) Was this hurtful? You bet. Was it disappointing? Unquestionably. Frankly, I was amazed. I don’t really know why it happened and can only guess. But I won’t guess here, although I will say it is hard for me to believe that George thinks I did anything to deserve that kind of treatment. Was I just no more use to him after I put paid to his philosophical ambitions? Our friendship certainly had its transactional side. This was perhaps the most spectacular of the interpersonal implosions to which I have been subjected. What happened to that “debt of gratitude”?

[1] The basic idea is that prediction can influence the course of social events—not so physical events.


De Re Necessity Reconsidered


The necessity of origin is a beguiling thesis, instantly plausible. It sounds right. Queen Elizabeth II was necessarily born to her actual parents; in no possible world does she have different parents. If she exists in a world, so do her parents, dutifully giving rise to her. If she exists, those two people must have shagged her into existence. But we need to examine more carefully what the content of the claim is: what exactly do we mean by “parent”? The natural interpretation is that the person Elizabeth necessarily derives from the persons who actually produced her. Thus, we have the (interesting) thesis that persons necessarily arise from specific other persons. The origin of a particular person consists of a pair of other persons. However, this claim is demonstrably false, as we can see by considering brain transfer cases. Suppose Elizabeth’s parents had their brains replaced by other people’s brains so that the resulting person is not identical to the original person; then she would have been born to different persons yet still be Elizabeth (that person). She would have come from the same sperm and egg, but would have had different persons as parents. In that possible world her actual parents (qua persons) might never have existed, yet she would, because the same human bodies exist in that world and do their procreative thing. If my parents had been body-snatched a few weeks before I was conceived, I would still exist, even though the persons of Joseph and June would not be my parents (they had long gone before I was born). Result: persons don’t have persons as their necessary origins—no person necessarily originates in a particular pair of persons. There is no such de re metaphysical necessity linking persons with persons. It is not difficult to come up with variants on this sort of thought experiment—for example, cases in which your parents suffer such drastic psychological changes that they are no longer the same person as before. 
You still exist in a world in which this happens. The reason is obvious: the bodies of your parents still exist in these possible worlds. We can thus easily modify the thesis to accommodate these counterexamples: people necessarily come from the bodies they actually come from. I couldn’t have arisen from the bodies of my parents’ neighbors, say: that wouldn’t have been me, even if that individual looked and talked just like me. But isn’t that thesis in turn subject to the same kinds of counterexample? What if my parents’ reproductive apparatus had been grafted onto distinct human bodies and the same reproductive acts performed? I would still exist but from different bodies—those bodies would be performing the necessary acts. That is because it is the sperm and egg that really matter, not the body in which they happen to live.[1] So, we need to reformulate the thesis to register this fact: a given person necessarily arises from a particular combination of sperm and egg. Now we seem in the clear—but are we? What if the sperm and egg were replaced but the DNA left the same? We change the cellular vehicle but preserve the genetic passengers. Then we would have numerically the same DNA molecules but numerically distinct sperm and egg. Does that change the identity of the person that results? Apparently not: we just need to reformulate the thesis yet again—the person necessarily comes from a specific packet of DNA (the enclosing sperm and egg be hanged). This is a physical object admitting of the type-token distinction: I had to come from that token chunk of DNA—a distinct token of the same type would not be me (this person). A twin of me isn’t me. Are we now home free? Not quite: what if in a possible world a fragment of my actual DNA chunk has been chipped off? That wouldn’t be the same DNA chunk (aggregate) and yet I might still exist in that world. How much of the origin object can be lost before the person ceases to be? 
None of this is as straightforward as it seemed at first when the example of Queen Elizabeth II was paraded before us. The necessity of origin is murkier than we thought, with different implications for the metaphysics of persons; it is more recherché, more obscure. Though not simply false. It has an analytical depth that wasn’t immediately apparent. There is a fine structure to it. We might say that the correct thesis is that persons necessarily come from their micro-origins, not from macro human beings.
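The progression of refinements can be put schematically. Writing $\Box$ for metaphysical necessity, $E!a$ for “a exists”, and using informal shorthand predicates for the origin relations discussed above (a rough sketch in quantified modal logic, nothing canonical):

```latex
% Successive refinements of the necessity-of-origin thesis for a person a.
% E!a abbreviates "a exists"; the relation symbols are informal shorthand.
\begin{align*}
\text{(1) Persons:} \quad & \Box\,\bigl(E!a \rightarrow \mathrm{ParentsOf}(p_1, p_2, a)\bigr)
  && \text{refuted by brain-transfer cases} \\
\text{(2) Bodies:} \quad & \Box\,\bigl(E!a \rightarrow \mathrm{BodiesOf}(b_1, b_2, a)\bigr)
  && \text{refuted by grafted reproductive organs} \\
\text{(3) Gametes:} \quad & \Box\,\bigl(E!a \rightarrow \mathrm{GametesOf}(s, e, a)\bigr)
  && \text{refuted by DNA moved to new gametes} \\
\text{(4) DNA token:} \quad & \Box\,\bigl(E!a \rightarrow \mathrm{FromDNA}(d, a)\bigr)
  && \text{residual vagueness: how much of } d\text{?}
\end{align*}
```

Each step descends to a smaller origin object to evade the counterexample to the step before; the vagueness that remains at (4) is precisely the fine structure in question.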

We can parallel the above discussion for tables and pieces of wood. It sounds right to say that this table necessarily came from a particular tree; any table looking like this table but coming from a different tree wouldn’t be this table. But that can’t be quite right once we take account of certain operations on trees, such as splicing. Suppose that in a possible world the piece of wood that composes this table had been spliced onto another tree, not the one it is joined to in the actual world. Then the table would have come from a different tree but from the same piece of wood. Well and good; then let us restate the thesis to acknowledge this possibility: the table necessarily comes from that particular piece of wood, no matter what tree it belongs to across possible worlds. Does that imply that it must come from the same bunch of atoms? Couldn’t the same tree part have been composed of different atoms (like an animal body)? Sure, so let’s make that explicit—we don’t mean the table had to have been composed of the same atoms, though it does need to come from the same (atom-independent) piece of wood. Again, this is starting to lose some of its initial clarity as a modal claim, though perhaps it deepens the interest (it’s not the lump of matter that counts). And we still have questions about how much of the piece is necessary in order to get the specific table in question—how much can be chipped off or replaced. Surely it doesn’t have to be the whole piece. Could the table exist but be an inch shorter because the piece of wood didn’t stretch as far as the actual piece? Things are murkier than we thought, less clear-cut (as it were).

What about necessities of natural kind? Is a particular cat necessarily a cat—could it (that cat) have been a dog or a platypus? Again, intuition is strong: no way that cat (any cat) could have been of another animal kind—in no possible world is a cat an elephant! But careful thought starts to blur the picture. A cat could certainly have had different properties from its actual properties—properties of location, food intake, color, etc. A small genetic alteration could make it bigger or smaller, change its eye color, or influence its furriness. Cats come in breeds, so could a given cat have been of a different breed? It depends how breeds are individuated and how extreme the differences are. It could certainly be somewhat different phenotypically and still the same cat, but could a Maine Coon be a Siamese? Now intuition begins to waver and struggle: we don’t know what to say. Could this oak tree have been a beech tree? Could an octopus be a shark? What if the octopus’ egg were subjected to radiation and its DNA arranged like that of a shark, developing accordingly? Would that creature be identical to the octopus that now exists but in a shark’s form? Hard to say. There seem to be all sorts of gradations and weird cases; modal intuition turns soft. What seemed obvious at first now seems obscure, even meaningless. We go from clarity to cloudiness, confidence to diffidence. This doesn’t mean there aren’t clear cases—a cat couldn’t be a clock (an ordinary wristwatch). There is no possible world in which your pet cat is a Rolex! But what kind of argument could settle the difficult cases? They seem irresoluble. What does this tell us about de re necessity? Possibly this: there is such a thing as the indeterminacy of essence. Meaning has been held to be indeterminate, and quantum behavior too, but perhaps also necessity de re. There is just no fact of the matter about certain modal questions; modal reality can’t make up its mind about these questions, try as it might. 
The metaphysics of modality is therefore subject to inherent indeterminacy. Essence is real, but it’s blurry. At first it seems quite sharp and clear, but on closer examination it starts to wobble and darken. Possible worlds are not as well defined as we thought; some hover on the border of possibility. Necessity is not limpid and crystalline, much as we would like it to be.

I will end where I began—with origin. Are there also necessities of termination? Is it true that a person could only end with a single terminal offshoot? Consider the corpse: it results from an antecedent living organism, often a person. It isn’t identical to the person: the person is no more but the dead body lingers. Nor are the sperm and egg identical to the person. A person’s life is bookended by non-persons: eggs and corpses. Could a given person have a different corpse in another possible world? In this world the corpse is a certain dead body; in another world could it be a numerically distinct dead body? I think not: that body could not have come from a different living body, and the living body could not have produced a different dead body (given that it did leave one). Death necessarily turns a living thing into a single dead thing, across all possible worlds. My corpse could not be the corpse of Sydney Sweeney, say. The “corpse-of” relation is rigid across possible worlds. That seems evident enough, as evident as the necessity of origin. True, we can manufacture hard cases: if the corpse is headless in a possible world, is it the same corpse as the one still joined to a head in the actual world? What if it loses even more of its parts? In any case, there is a symmetry between origin and terminus, modally speaking: both are subject to de re necessities. We should therefore add necessity of termination to necessity of origin.[2]
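The claimed rigidity of the “corpse-of” relation can also be stated schematically (an informal sketch; $C(c,a)$ is shorthand for “c is the corpse of a”, and is not meant as a full analysis):

```latex
% Necessity of termination: if c is actually a's corpse, then in any world
% where a dies leaving a corpse at all, that corpse is c.
C(c,a) \;\rightarrow\; \Box\,\bigl(\exists x\, C(x,a) \rightarrow C(c,a)\bigr)
```

The existential antecedent $\exists x\,C(x,a)$ registers the qualification “given that it did leave one”; the hard cases (the headless corpse) concern how much of $c$ must survive for the consequent to hold.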

[1] I am obviously drawing on Kripke’s discussion of origin in Naming and Necessity, which in turn draws on a 1962 discussion by Timothy Sprigge. Kripke is well aware of the need to restrict the thesis to the sperm and egg, though he doesn’t mention the further possibilities I consider here. I take this for granted in my paper “On the Necessity of Origin” (1976).

[2] We can imagine reversing origin and terminus such that a person begins as a corpse and ends as a pair of cells. First, the dead body exists and life is breathed into it (think Frankenstein’s monster); then, at the end, the person’s body gradually withers away so that only a pair of cells remain. We could accordingly say that the person necessarily came from a certain corpse (lifeless body) and necessarily ended in a certain pair of cells. The table might likewise begin in a pile of ashes and end as part of a living tree (this would be a rather miraculous possible world).


Skateboarding


I already had a skateboard, but it hadn’t ventured much beyond my living room. It seemed like asking for trouble (too small, too unstable). Then I saw someone using a longer type of skateboard at my local park (I was throwing discus and frisbee left-handed at the time). I went on Amazon and found that such boards are made; they are quite a bit longer than the average skateboard (about 44 inches). I ordered one. I tentatively tried it out in my driveway clad in appropriate armor (knee pads etc.). Then, yesterday evening—crepuscular time—I decided to make a real effort to make some progress. I went out on a local street, again protectively clad, and took the plunge. Yes, it’s pretty scary, steering is hard, and stopping feels impossible; but it is possible. I learned how to do it in about half an hour. Admittedly, I had some experience with board sports—paddleboarding, surfing, windsurfing, snowboarding, skim boarding—but it wasn’t as difficult as I thought it would be. Fun, too. So now, I’m a skateboarder, age 74.


Morality of Life and Death


Morality is often presented as a list of commandments, imperatives, duties, requirements, rules. Among these we have the commandment not to kill—along with commandments not to steal, lie, betray, break promises, be ungrateful, etc. These are treated as being on a par; together they form a moral whole—a code, a system. In some presentations unity is imposed, as in utilitarianism and Kantianism. In particular, the prohibition against killing is regarded as just one duty among others (as in W.D. Ross’s catalogue of prima facie duties or obligations). Killing is not singled out for special treatment. But surely it stands out as special, not just one commandment among many. I want to suggest that morality should be bifurcated so as to record and respect the special character of the wrongness of killing. There is stealing and lying (etc.) on the one hand and killing on the other. These are distinct moral kinds, superficially similar but actually deeply different. We might speak of “normative dualism” as we speak of “substance dualism”. Let’s not lump everything together as if morality were a homogeneous domain. Some of morality is concerned with rules concerning the conduct of life, but some relates to the business of death. Some is about how to live with other people (also animals) and some is about the morality of ending life.

The most obvious point of difference is that killing is far more serious than other immoral acts—deserving of the greatest censure. Lying and stealing are wrong, but killing isn’t just wrong—it’s really seriously wrong. It sounds oddly understated to describe killing as the wrong thing to do. The word “wrong” is inadequate to express its moral badness (the word “bad” isn’t much better). Here we reach for words like “heinous”, “abominable”, “evil”, “wicked”—none of these apply to garden-variety cases of lying and stealing. Taking a life is in a class of its own. Life is “sacred”, we say, unlike property. We underrate its badness by classifying it along with other bad acts. The prohibition against killing is particularly strong, not easily overridden. Hence, the kind of severe punishment reserved for it alone.

Second, the other moral rules cluster around the notion of harm or unhappiness: lying and stealing make people suffer, cause unhappiness, reduce utility. They make the recipient worse off. But killing doesn’t make the victim unhappy—it makes the victim no more, not even capable of being unhappy. A broadly utilitarian account of the non-killing norms sounds reasonable, but we can’t explain the wrongness of killing by appeal to how the victim feels after being killed. The state of being dead is not an unhappy state. The killing itself may cause pain, but the result isn’t more future suffering. This makes killing a very special kind of wrong.

Third, it is not easy to say precisely what is bad about being dead, whereas it is easy to say what is bad about being stolen from or lied to. The badness of suffering is no mystery, but the badness of not being able to suffer (because dead) is perplexing. The sophistical murderer may contend that he has spared his victim future misery, and that is certainly true, since all human life has its quota of misery. It is common to say that the wrongness of killing consists in depriving the victim of future pleasures, and that is intelligible enough; but killing is much worse than, and quite different from, just preventing future pleasures—that could be achieved simply by moving the person to an unpleasant environment. Killing is really bad even if the victim’s life isn’t all that pleasurable. Taking a life is much worse than depriving a person of pleasure. But it is hard to say what this special badness amounts to—which is not to say that it amounts to nothing.

So, the prohibition against killing cannot be assimilated to the other prohibitions; it is sui generis. Killing is seriously bad, inexplicable in terms of utility, and somewhat mysterious as to the ground of its wrongness. I would also say that it is much more shocking than other misdeeds, even torture. It is nihilistic, extreme, inexcusable. Of course, there are contexts in which it is not wrong, such as self-defense (or other-defense), just as there are contexts in which stealing and lying are not wrong. But when killing is wrong, it is shockingly wrong. Slavery is no doubt very wrong, but genocide is shockingly wrong (think how feeble it sounds to describe genocide as “wrong”). It isn’t just one of those things one shouldn’t do; it is outside the range of normal human wrongdoing. We don’t say to our children, “Don’t tell lies, and while we’re at it don’t murder either!” That’s not something we feel we need to warn them against. We don’t say, “Don’t torment your sister, and don’t kill her either!”

One might reasonably insist that the injunction against killing is not a moral rule at all, not a piece of moral guidance or advice; it goes deeper than that. It is a self-evident moral truth recognized by every sane halfway decent person. It really doesn’t need a commandment to back it up. A natural response to “Thou shalt not kill!” is “Yeah, tell me something I don’t know”. You don’t need to be educated into that piece of moral knowledge, whereas the standard moral rules do require a bit of prodding and instruction. To describe the prohibition against murder as a “prima facie duty” sounds hopelessly inadequate and quaint, the result of trying to impose unity on a heterogeneous bunch of moral no-nos. You don’t owe it to people not to kill them, as you owe it to people not to lie to them, or not to steal from them, or not to be ungrateful for what they have done to benefit you.

I thus recommend that we have two lists of moral injunctions: one list contains all the standard injunctions, arranged alphabetically and printed in black ink; the other contains only the injunction against killing, written in italics and red ink. Then people will see that it is not just any old piece of moral sermonizing (perfectly justified as that may be) but a special moral principle deserving a position of its own. Metaethically, we should subscribe to moral dualism.[1]

[1] One wonders whether the traditional list reflects the fact that in the old days people didn’t really distinguish the wrongness of killing from other sorts of wrong act. Killing was far more commonplace and indiscriminate; it took centuries before we realized how bad it actually is. Now we see that it is a different kind of immoral act—it has a different “real essence”. But we persist with the outmoded list, as if killing were not much worse than telling the odd fib or stealing apples from an orchard (“scrumping”). As Ryle would say, it belongs in a different category.


Pain, Consciousness, and Morality


Consciousness (sentience) evolved at a certain time on planet Earth, many millions of years ago. It didn’t emerge all at once but piecemeal: a certain type of consciousness evolved first, with additions later. What was this type? We don’t know; we can only guess. What we do know is that, whatever it was, later developments bore the stamp of it—they grew from it, modified it, extended beyond it. The machinery that enabled it to emerge was coopted in later versions of the original manifestation. The original adaptation set the course for subsequent developments. It is not to be supposed that consciousness was required for all information processing operations—the organism could respond effectively to environmental stimuli without being aware of them. I doubt that consciousness came about via primitive versions of the five senses we know today. It is not even clear that the senses require consciousness to do their job. No, the first form of consciousness would be something that is necessarily conscious—also vital. It seems to me that the best candidate is pain perception: the first glimmers of consciousness took the form of sensations of pain directed to the immediate environment. Pain perception is clearly highly adaptive, given the dangers presented to the organism—it warns of life-threatening impingements. Organisms that can feel pain will out-compete organisms lacking such sensations. One might think it was only a matter of time before pain became a standard (but not universal) feature of life on Earth. Moreover, pain is not possible without consciousness—without there being something it is like to have it. There is no unconscious or preconscious pain; sentience is built into it. Once pain exists consciousness is off and running. So, we can imagine the first sentient beings as pain perceivers, and only pain perceivers. Organisms with it became aware of the world as painful. They lived in a painful world (unlike plants and bacteria). 
Objects became consciously categorized as painful (or not painful). The primal phenomenology is a painful phenomenology.

Given that consciousness first arrived in the form of pain, we would expect its later forms to reflect that fact. What is the likely course of evolution for this newfound capacity? Pain and touch go together, so we would expect consciousness to extend itself into touch generally. The sharp pointy object will cause pain and be perceived as sharp and pointy, consciously so. Sensations of shape and hardness will be added to sensations of pain caused by such objects. Both sorts of sensation will be experienced simultaneously, and joined together. An association will be formed such that object properties come to be imbued with pain productivity, potentially if not actually. Objects perceived as sharp and pointy will be regarded as inherently pain-inducing. The tactile world is a world of potential pain.[1] That is, indeed, the main significance of tactile perception—the detection of dangerous objects that cause pain. Pain is always a whisker away from touching objects; touch is risky, pain-prone. Just consider handling a kitchen knife! Touch is the careful sense—burns, scratches, grazes, cuts, stabs, collisions. Thus, touch is haunted by intimations of pain. That is its phenomenology, its intentionality. Tactile consciousness is steeped in pain consciousness—even kissing can turn painful! So, this sense is not far removed from the initial pain consciousness that (we are supposing) was the first manifestation of consciousness on Earth. Smell and taste are not very different: it is important for the organism to detect bad and dangerous food—hence nasty tastes and smells. This is not pain exactly, but it has the same kind of urgent avoidance that characterizes pain—you spit that stuff out reflexively. Tasting and smelling are also geared to the noxious and dangerous—that is their prime purpose (fine dining can come later). The negative is the original function and feeling. 
Tasting bad and hurting are the prime modes of the corresponding senses, because most vital to survival (gene transmission). What about the distance senses? Well, both vision and hearing can become loci of pain and discomfort if the stimulus is too strong, which it can easily be. But isn’t it also true that seen and heard objects are always assessed for their danger potential? The sight and sound of a predator, the falling rock, the rough pathway ahead, the thorn, the nettle, the fire. Vision and hearing inform us of a perilous world; they are not pleasant luxuries. Pain lurks in the background; it shapes the affordances. Vision, combined with touch, informs us of a dangerous world, only too ready to deal out quantities of pain. The case resembles feathers: originally evolved for purposes of thermal regulation, later extended into devices of flight, but still bearing the marks of their thermal origins. Visual and auditory consciousness stem originally from pain consciousness, according to our hypothesis, and they never lost their association with pain. Pain is their sine qua non. Pain is what enabled them to evolve. Perhaps they would never have evolved without it (qua conscious processes). Thought and rationality take a further step away from primitive pain, but they too bear its imprint—pain is written into them, albeit remotely (like limbs and fins). In the genetic book of the dead pain is a footnote in the chapters on thought and reason (ditto language). Evolution is an essentially conservative process, with earlier traits preserved in later developments. Let me put it with maximum bluntness: the mind is riddled with pain, or the idea of it. Consciousness exists in the shadow of pain. It is an outgrowth of pain. No doubt other ingredients were added in the fullness of time, but the evolutionary history is never completely abandoned (we are still fish—though long out of water). Our consciousness is a construction out of pain, as biological raw material. 
Attenuated, modified, reformed—but still pain-derivative. We are built to suffer, like all sentient beings. Suffering is our biological fate. Any study of consciousness, then, should be aware of this ancient history preserved in stone.[2] What it is like to be conscious is informed by what it is like to feel pain. Could we even say that all consciousness is really a mode of pain? Physically, we have bacteria distributed throughout the body, lurking in every cell (mitochondria); mentally, we have pain distributed throughout the mind, though modified greatly over evolutionary time. Even mathematical thoughts have pain lingering in them somewhere; certainly, they are made possible by the original appearance of pain consciousness (if our hypothesis is correct). The machinery and phenomenology of pain are the origin of everything mental. Pain is a mental universal (“pan-painism”).

Why do I mention morality in my title? The reason is simple: pain is also the origin and basis of all morality. How did morality evolve (i.e., our thoughts concerning right and wrong)? It came from the reality of pain (not so much pleasure): the prime moral directive is “Cause no pain!” It is obvious to any half-way intelligent being that pain is bad—always has been, always will be. So, it is wrong to cause it. Isn’t that the most fundamental of moral principles? Other ideas can be added to it, but it is never left completely behind: increase pleasure (minimize pain), don’t torture and steal (they hurt the victim), keep your promises (don’t disappoint people), be grateful (don’t make your benefactor regret helping you), be just (don’t cause unhappiness in people unfairly), etc. It’s all about suffering and the avoidance thereof. If there is anything else, it is secondary, not of the essence. Thus, there is no morality worthy of the name without the reality of pain (suffering, unhappiness); no real point to it, no urgency. The first moral thought on planet Earth was “It’s wrong to hurt people” (though “people” might be restricted to one’s own kin, or just oneself). Pain is the sine qua non of morality as we know it. Pain is necessarily conscious, so there can be no morality (of any consequence) without consciousness. Pain is the origin and focus of morality, as it is the origin and focus of mind (not the exclusive focus). Two great things therefore owe their existence to pain: consciousness and morality. Two good things exist only because of a bad thing (though pain has its good side as an indicator of danger). Some philosophers say death is the shaper of human life; others say it is free will; others say beauty: but pain has a good title to that status. It is formative, inescapable, and terrible. We can’t live with it, but we wouldn’t be here without it. Once felt, never forgotten. It made us conscious and it made us good.[3]

[1] We might define matter as what causes pain: not extension (Descartes) and not solidity (Locke), but painfulness (McGinn). The mind isn’t painful; you can’t collide with it. Your mental state never causes you to reel back in agony (“Ouch, that belief stung!”).

[2] If we could solve the problem of how pain arises from the brain, we would have pretty much solved the mind-body problem.

[3] If we ask what consciousness (or morality) would (or could) be like without pain, we run into difficulties. Our consciousness, and that of other animals, is so conditioned by the reality of pain that it is hard to imagine what consciousness without it would be like. Even vision would have to be very different, because it would no longer be surrounded by the apprehension of pain, actual or potential. Seeing a red cube, say, would have no relation to potential collisions—what it would feel like to be struck by such an object. The consciousness of a heavenly being would be very unlike our terrestrial consciousness, being bereft of any pain-inducing danger. The world would not be experienced as adversarial. Terrestrial consciousness, by contrast, is up to its neck in an adversarial world experienced primarily via pain or its possibility. For us, consciousness is as of a world of the permanent possibility of pain; removing this leaves something unreal and barely imaginable. It would be a consciousness devoid of fear. Likewise, in a world without pain (unhappiness, negative affect) morality would be scarcely recognizable, and of little account. It might consist of pallid injunctions to return your books to the library on time and the necessity not to open your mouth while eating.
