Economic Altruism

No less an authority than Pope Francis has this to say: “Feverish consumerism breaks the bonds of belonging. It causes us to focus on our self-preservation and makes us anxious”. Why consumerism should “break the bonds of belonging” (whatever quite that means) is not made clear: why should buying stuff interfere with desirable social relations? We often buy things together or as gifts for each other, and buying things alone for oneself is hardly a source of social breakdown. Also, why does consumerism (itself undefined) make us anxious and focused on self-preservation? We are already anxiously focused on self-preservation for obvious reasons (death, disease, war, etc.), and our purchases often ease these concerns by affording us security and protection (clothes, homes, electricity, food, etc.). Would we be less anxiously focused on self-preservation if we didn’t buy ourselves anything? Hardly. Perhaps the author intends all the argumentative weight to be carried by “feverishly”: it is consuming feverishly that brings about these woes. But doing anything feverishly is liable to have untoward consequences—even giving money to charity if done feverishly (imagine a person feverishly working all the hours of the day in order to give money to charity, neglecting his family, working himself into an early grave, a monomaniacal loner). The pope obviously realizes that a little bit of consuming is not necessarily a bad thing, so he qualifies his condemnation by prefixing “feverishly”; but then the force of his criticism is blunted—what precisely is he criticizing? He doesn’t say so, but I assume his underlying complaint is that consuming is a selfish thing to do—and selfishness is a vice: all that spending money on yourself, treating yourself to this and that, buying yourself toys and fancy meals—instead of giving your money to charitable causes. Why not be more altruistic and give some money away instead of selfishly buying what makes you happy? Consumerism is thus the enemy of morality: it is pure selfishness. This has been a common complaint throughout the ages and is certainly part of traditional Christian teaching: we should be more altruistic and not so selfishly consumerist. Stop spending so much money on yourself (feverishly or calmly) and give more to others![1]

            I think this moral position ignores an important aspect of consumption, even when what is consumed is entirely self-directed (which it often isn’t): namely that, in buying things for ourselves, we give money to others. Buying is also giving. We do the vendor a favor by buying from him, even when our aim is entirely egoistic. If I buy a new tennis racket in order to play better tennis, I make a donation to the seller of the racket—I provide him with an income. If I didn’t, he wouldn’t have one. If everyone stopped consuming, everyone would be out of work—no spending, no receiving, and hence no income. Someone might in fact spend with entirely altruistic aims: he doesn’t want anything for himself, but he consumes in order to provide others with an income. Of course, he could just hand the vendor the money and get nothing in return (unalloyed charity), but that has obvious disadvantages: people like to work and earn their money; we need functioning industries and other forms of work to live well; we would be depleting our own resources for nothing in return, which may lead to destitution and death. It’s better to transfer your cash to people who give you something in return: it’s better for everyone that way. This is not to say that you should never give to charity—you clearly should in certain circumstances—but it is to say that consuming instead of giving is not a purely self-benefiting act. It is altruism by egoism. That is the nature of purchase: you take from others by giving to them. There is nothing immoral about this arrangement, nothing culpably selfish (every time you eat you are acting “selfishly”). Self-preservation is not ipso facto morally bad (pace the pope, apparently). You should pay a fair price, to be sure, but if you do you benefit the vendor—you make his life better. You are not unfairly depriving him of anything; you aren’t stealing. Consuming is perfectly moral; not consuming is what is immoral. If you are a habitual miser, you decline to give your money to others for services rendered, thus reducing their income—an economy full of misers quickly tanks. Even strenuous (“feverish”) consuming is morally commendable, so long as you are handing over cash to other people; or at least it is not morally impermissible. Okay, don’t do it all the time, leaving no room for other worthwhile activities and interests, but there is nothing amiss with doing it regularly (compare other human activities that the church has seen fit to prohibit). It is a form of wealth distribution. It is not just selfishly hoarding up stuff for your own pleasure without regard for the welfare of others. There is no need for guilt as you make that big purchase—many people will benefit from it. Instead, think of all the good you are doing for complete strangers: thanks to you they have food on the table, happy children, a worthwhile life. Admittedly, we don’t want too much economic inequality in our society, or grotesque McMansions, or fleets of carbon-emitting sports cars: but that has nothing to do with consumerism as such. Spending is really just like charity, except that the recipient has to do something in return. If he can’t, then charity is appropriate; but if he can, there is nothing objectionable about getting something in return. Indeed, it is positively desirable from a moral point of view—you are actively helping people. This fosters social bonds; it doesn’t break them. People like being paid by you.
Christianity has given consumerism a bad name by associating it with greed and anti-social behavior, but it is no such thing—not considered in itself.[2] Charity can be a bad thing too if done thoughtlessly or from egoistic motives or without regard for consequences, but that doesn’t imply that charitable giving is somehow unethical (the vice of “giver-ism”). It is the same with the kind of giving that occurs in an economic transaction—capable of abuse but not inherently immoral. It is a bizarre form of puritanism to suppose that consuming is antithetical to morality—on the grounds that the consumer gets some pleasure out of it. It is not necessary to suffer in order to be a good person; self-deprivation is not the essence of the moral life, despite what the Catholic Church may have to say. True, the consumer is no rabid ascetic, but that is not a moral criticism. The wise consumer is a happy consumer, not least because of the altruism manifested in his acts. Remember that you are a person too and thus deserve moral consideration, from yourself as well as from others; it is not moral to treat yourself badly. So the consumer is not immoral simply because he treats himself well: he treats himself well by treating others well—by handing them money. He receives, but he also gives, necessarily so.

            And there is this not inconsiderable point: in charitable giving the recipient is in the donor’s debt, but not so in economic exchange. We always put people in an awkward position by giving to them—because then they owe us—but we can give without incurring the recipient’s indebtedness if we buy from someone. There is no burden of gratitude, no feeling that you must somehow reciprocate. The indebtedness that comes with outright gifts, by contrast, can fray relationships and break bonds—indeed, some people give precisely in order to gain a moral edge over others. We can bypass all this by always receiving as we give. Everybody is happier that way. In an ideal society there would be no charitable giving (and so no moral indebtedness), but plenty of non-charitable giving—otherwise known as buying stuff.[3] Perhaps we should re-label the consumer: she is actually a payer, a giver, and a producer (of other people’s wellbeing). Even a feverish one of those is not to be condemned (the sin of “producer-ism”).

 

Colin McGinn

    [1] This position ignores the fact that it is possible to consume for the sake of other people—you like to buy stuff and then give it away. So there is no necessary link between consumerism and selfishness. But let’s ignore this obvious point so that we can focus on a more interesting fault in the pope’s reasoning.

    [2] It is not to be confused with capitalism, for reasons too obvious to mention.

    [3] We should also reject the stereotype of the consumer as someone who accumulates manufactured goods beyond any real need (hundreds of shoes, dozens of cars, multiple homes). We can also consume music lessons, books, the works of local artists, the services of lawyers, gym memberships, and many other worthwhile goods and services. Many of the things we consume are indisputably good for the soul. Don’t Catholics consume things as part of their religion—such as the teachings of the pope (someone has to pay for his upkeep)? What about cathedrals? 

Disease and Belief

Are there any diseases of the belief system? Apparently there are: they have names like schizophrenia and bipolar disorder. These diseases (OED: “a disorder of structure or function in a human, animal, or plant”) cause the sufferer to form false and irrational beliefs, sometimes whole belief systems we label “delusional” or “crazy”. These defective beliefs can cause harm to the believer and to others (think of paranoia). But are there any contagious diseases of the belief system? We can certainly imagine such a disease: there could be a species that contracts defective beliefs by transmission from one believer to another. Giving voice to certain beliefs could cause them to be formed in another mind by a kind of automatic transmission, no convincing justification necessary. It would be possible for belief “viruses” to be concocted in a laboratory and then intentionally sent out to infect the local population. So long as the population was receptive to invasion by these agents of belief formation we can imagine them spreading according to the standard epidemiological model. The beliefs spread meme-like across the population. We could think of the contagious belief as a “mind virus” (cf. computer virus). It might be that the beliefs involve wacky ideas about the origins of the universe or other people’s motivations or the secret life of cats. We can imagine these false beliefs doing a good deal of harm (there is a movement to put down all cats).  Preventative measures would be possible: don’t listen to or read any material with potentially dangerous content, keep away from others already suffering from the disease, and stay at home. Maybe a “vaccine” can be produced that immunizes people from infection: scientists inject a mild form of the disease into people so that their critical faculties become sensitized to this sort of invasion, thus reducing the chance of catching the disease in its most florid form. The principle is that once you’ve been in a cult you are not likely to join another one: you recognize the dangers and smartly walk away. Or some talk therapy might be indicated: simply tell people not to take those weird rumors seriously—point out how harmful they can be (some horrifying videos might be effective). For this species of believers, susceptible as they are, the infectious disease model would be entirely appropriate: they are prone to a disease that spreads in the usual way, and which can be managed by the standard procedures. The disease might even have a name: “beliefitis” or “assentosis” or “Wilkinson’s syndrome” (named after its discoverer). Doctors would be used to treating it, applying properly tested protocols, holding scientific conferences on the subject. For them it is a recognized branch of medicine.
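For readers who want the “standard epidemiological model” spelled out, here is a purely illustrative sketch: a discrete-time SIR (susceptible, infected, recovered) simulation relabeled for belief contagion. The function name and the parameter values are invented for the purpose of illustration; nothing in the scenario above fixes them.

```python
# Toy SIR-style model of a "belief virus": a hypothetical illustration only.
# Susceptible minds can "catch" the belief from infected ones; recovered minds
# (e.g. those exposed to the mild "vaccine" strain) no longer transmit it.

def simulate_belief_contagion(population=10_000, initially_infected=10,
                              transmission_rate=0.3, recovery_rate=0.1, days=100):
    s = population - initially_infected   # susceptible believers
    i = initially_infected                # currently holding the belief
    r = 0                                 # recovered / immunized
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = transmission_rate * s * i / population
        new_recoveries = recovery_rate * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((round(s), round(i), round(r)))
    return history

if __name__ == "__main__":
    for step, (s, i, r) in enumerate(simulate_belief_contagion()[::10]):
        print(f"day {step * 10:3d}: susceptible={s:6}, infected={i:6}, recovered={r:6}")
```

On this toy model the belief spreads or fizzles depending on whether the transmission rate outruns the recovery rate, which is just the epidemiologist’s reproduction number applied to assent.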

            I set up this imaginary case in order (of course!) to throw light on the actual human situation. For it is evident that a situation very like this obtains in the human population: people are extraordinarily susceptible to disorders of the belief system. We need to form beliefs by transmission from others (“testimony”), this being an essential part of learning, but our defenses against bad beliefs are far from stellar. If we think of ourselves as possessing a cognitive immune system, then it is a notably porous one: all sorts of cognitive pathogens get through our defenses. Rationality (logic) acts as our immunological filter, but it is routinely bypassed and outmaneuvered. Those wily belief memes slip through its defenses with alarming ease. Some people have very weak cognitive immunity, lending their assent to almost anything they hear—the crazier the better, as far as they are concerned. Just consider all those ridiculous conspiracy theories that thrive on the Internet: they have no trouble infecting the brains of people with deficient cognitive immunity. Preventative measures are in principle possible—cover your eyes in the presence of such material, wear earplugs when necessary, don’t go near other people already infected—but it is virtually impossible to get people to follow these guidelines, no matter the prestige of the prescribing authority. Conspiracy theories about the motivations of the relevant experts can easily subvert their recommendations; and a national mandate is deemed politically unacceptable. So the belief virus keeps propagating, infecting, and transmitting. If the beliefs in question concern another disease, a purely physical one, then we have a pair of diseases running in parallel: a disease of the body and a disease of the mind. Both may be lethal if the penalty for erroneous belief is death. As the physical pathogen spreads and multiplies, so too does the mental pathogen: a psychological disease accompanies the physical disease (at least for people susceptible to the belief virus). In any case, these disorders of the belief system deserve to be thought of as disease-like, as much as physical disorders are. The offending beliefs are really a type of germ (from the Latin for “seed, sprout”): that is, they act as replicative agents of disorder—mental disorder in this case. They operate just like regular harmful germs from an epidemiological perspective. Not all beliefs are disease vectors, of course, just as not all germs are (some are perfectly harmless), but some are, well, virulent. And just as a schizophrenic strikes one as cognitively disordered, so someone in the grip of wacky conspiracy theories strikes one as mentally diseased (infected, invaded). It comes in degrees, but in extreme cases the beliefs achieve delusional stature: the sufferer is living in his own crazed world, cut off from reality. This is not at all uncommon—like the common cold. In fact, it is quite difficult to avoid getting infected—one’s defenses may not be able to ward off a concerted attack. Too much time spent with the wrong people can lead almost anyone to succumb to the disease. And there is no known vaccine with anything like the necessary efficacy (a logic course can only chip away at the problem). Once the virus has flooded the memo-sphere it can create havoc (the Internet is its prime vector). Hotspots will flare up, quarantining has little impact, and the disease rages on. People’s brains become breeding grounds for the virus, just itching to hop into the next brain.
The belief virus goes viral.

            The only hope for a cure is early intervention: make sure children develop a robust cognitive immune system, capable of weeding out the diseased beliefs. This means an ability to criticize—rationally evaluate. Education is (partly) health education—strengthening the cognitive immune system. Show videos of people suffering from florid beliefitis (I name no names) and ask if the students want to end up like that. Warn people about the prevalence of the disease. Insist on protective measures. Above all, medicalize the problem—treat it as the disease that it is.  [1] Of course, it is necessary to have sound diagnostic methods, but that is not the insurmountable problem that some people imagine. You just need a qualified epistemologist to advise. Set up a panel of experts, get some funding, and take the problem seriously. We have to stamp out this scourge.

 

Colin McGinn

                 

  [1] The history of medicine is one of progressive medicalization, particularly with respect to the mind. It is only recently that mental disease was recognized as such. This is not a matter of trying to fit psychological disorders into a pre-existing medical framework in a reductive manner; rather, it is a matter of expanding medicine to include maladies of the mind. It is past time that we accepted that the belief system can be as diseased as any system of the body. After all, belief is a biological phenomenon and should be treated as such.

Democracy and Autocracy

I will have a go at a question bequeathed to us by Plato—the question of whether democracy has a tendency to devolve into autocracy. In democracy people have an equal say in political decisions—each person’s voice must be heard. This means that each person’s wishes are given equal weight. But there are inevitably conflicts between people’s wishes: some people want what others don’t want. Conflicts of interest arise. It follows that some people are sacrificing their own interests to the interests of others. For example, suppose a family is deciding where to have lunch: some of them want to have Italian, others Japanese, others Greek. Either a single member stipulates a given choice, or the matter is decided democratically; in the latter case (also in the former) some members of the family don’t get what they want. But they have no choice—they must follow the democratic decision. They would prefer it if they could rule autocratically, thus following their own wishes. As it is, several members are not happy with the outcome, especially if it happens on a regular basis (never having Japanese, say). Democracy entails a sacrifice of personal sovereignty—personal freedom. You don’t always get what you want.

            But suppose an autocrat comes along who promises to respect your wishes to the detriment of others, and suppose he has the power to bring this about. Perhaps he is able to impose the new order by force. Then you will always get what you want, though others will not get what they want. You have a reason to support this autocrat. You make a prudential calculation and put your weight behind this character. Thus autocracy replaces democracy: you no longer have to sacrifice yourself for the general interest by respecting the wishes of others. Democracy is inevitably a system in which many people feel discontented because other people get to decide their fate; but autocracy allows many people, perhaps a majority, to get exactly what they wish. This is why autocracies are always supported by one section of the population (the beneficiaries) but not by other sections. To put it bluntly, democracy conflicts with human greed.

            Does this mean that autocracy is stable? No, and for the obvious reason: many people are getting the short end of the stick. So autocracies are always rife with democratic rumblings: the disadvantaged want their voice heard, their wishes respected. Civil war is a likely outcome. So autocracy has a tendency to devolve into democracy. The result is the perpetual oscillation model of political history: from autocracy to democracy, from democracy to autocracy. For a very long time autocracy held sway in human groups, eventually to be replaced by democracy (in some cases at least); but democracy might in turn be replaced by a resurgent autocracy, only to give way again to democracy. Neither system is stable; both tend to give way to the other. The reason is the inevitability of conflicts of interest, especially as regards the distribution of resources. People’s self-interested wishes don’t harmonize. Both democracy and autocracy struggle to deal with this fact, but in the end it is an insoluble problem. Thus there will never be political peace.

Colin McGinn

Yes and No

The words “yes” and “no” are among the most familiar words of the English language, perpetually tripping off the tongue. But what do they mean—what kind of meaning do they have? They don’t have sense and reference: there is nothing they denote and there is no mode of presentation attached to them. They have no counterparts in established formal languages: no system of logic governs them. Theorists of language say nothing about them. They fall into no logical category: not singular terms, not predicates, not quantifiers, not connectives, not even brackets. No one talks about the logical form of yes-statements. Worse, they don’t appear to fall into any grammatical category: noun, verb, adjective, adverb, or preposition. Some linguists have classified them as sentences (“minor sentences”), because they get something linguistic done while standing alone; but even that must be wrong because they don’t compound as sentences do. You can’t negate them or conjoin them or insert them into a conditional.[1] You can’t say “Not no” in response to the question “Would you like to go bowling?” or affirm “Yes and snow is white”. Some languages do without them in replies to questions (Finnish, Welsh), preferring instead to reiterate the verb of the question (“Are you coming?” “We are coming”). They seem a bit like “true” and “false” in expressing affirmation and negation, but those words behave like normal words, combining happily with other words as parts of real sentences (you can say “That’s true” but not “That’s yes”). The OED offers this for “yes”: “Used to give an affirmative response”; for “no” we have “Used to give a negative response”. The dictionary doesn’t specify what these words mean in the usual definitional style but instead indicates their use. We are assumed to understand what an “affirmative response” is—some sort of assent or consent behavior (likewise for “no”). We don’t normally employ these words in our inner speech, because their function is to indicate something to others not to act as vehicles of thought; presumably they would not exist in a purely individual language not used for communication. One might hazard that they are “expressive”, but what emotion do they express? They are not like a whoop of joy or a groan of disappointment. They appear anomalous, sui generis, and mildly suspect—oddballs, rule-breakers. Yet they are with us always, among the most natural of utterances. What is going on with these two little words? 

            I would call “yes” an assentive and “no” a dissentive. They are not alone in this neglected category: in addition to “yes” we have “yeah”, “yup”, “yep”, and “yah” (and for “no” we have “nope” and “nah”); but we also have “sure”, “right”, “ok”, “no problem”, and “definitely”.[2] Moreover, we can dispense with the vocal organs altogether in registering our assent or dissent: we can nod or shake our head, smile or frown, or point our thumb up or down. There are lots of ways to show you feel favorably or unfavorably towards something. Couldn’t we just dispense with “yes” and “no” and get by with body language? These points all nudge us in the direction of the following conjecture: “yes” and “no” are not words at all (nor phrases or sentences). They simply don’t function like words: they have no grammar, no combinatorial power; they are not part of the computational system that other words participate in. They have a communicative use, to be sure, but that is not sufficient to make them part of language proper, defined as a certain formal structure—what Chomsky would call the human language faculty. Animal communication systems have their uses too, but they are not languages in this restricted sense—infinite recursive generative rule-governed grammatical systems. Strictly speaking, “yes” and “no” have no semantics and no syntax—they are not words in the proper sense. They obviously have their uses, but they are not semantic-syntactic particles (and hence neither nouns nor verbs nor adjectives nor adverbs nor prepositions). They signify but they don’t mean (except in the sense of speaker-meaning). Put differently, they have no conceptual interpretation and no representational function.

This suggestion may appear radical and counterintuitive, but actually there is considerable precedent for it: for speech is full of such “meaningless” elements. Consider “oh”, “ah”, “ooh”, “ha”, “hey”, “um”, “uh”, and “er”: these all occur frequently in speech but they are not words. Sometimes they occur in writing too, but only as a way to mimic speech: they look like words but they aren’t words. Indeed, they are not really elements of speech construed as the vocalization of words: they are speech helpers or auxiliaries or props. They are ersatz words. And they combine naturally with “yes” and “no” in informal speech: “Oh yes”, “Uh, no”. They can both also be repeated for emphasis: “yeah yeah yeah”, “Ha ha”.[3] This is like nodding vigorously or emphatically wagging one’s finger. We can modulate our response so as to indicate strong assent or firm dissent: the response can vary in magnitude (words proper don’t do that). Speaking loudly can also communicate state of mind, but nobody thinks that volume is a word. Linguists sometimes call these devices “paralinguistic”: “yes” and “no” evidently share several features with the paralinguistic. They are quasi words, borderline words, words by courtesy only.

            Here is a hypothesis: assent and dissent are important behaviors in a social species such as ourselves, predating the arrival of the human language faculty; the particles “yes” and “no” are just the latest way to get such attitudes across to conspecifics. We used to nod and wag, smile and grimace, but now we say “yes” and “no”: this is considered polite, civilized, well bred. We are communicating our attitudes of assent and dissent, consent or rejection, using the latest piece of human technology, viz. vocal speech. But we are harking back to more primitive times when we used other means to convey our attitudes. In animal mating behavior, assent and dissent clearly play an important role; the human “yes” and “no” are devices for getting these preferences across (among other devices). Presumably other species have their own methods for conveying assent and dissent, which are not verbalized; well, we are playing much the same game. Saying “yes” and “no” is just one way to indicate affirmative and negative response, but such responses are part of our pre-linguistic history; and the words (sic) carry this history within them. They represent the survival of an ancient signaling system within our newfangled capacity for articulate speech—along with assorted paralinguistic devices. What we loosely call “speech” is really an amalgam of evolutionary adaptations not a unified trait, and “yes” and “no” straddle these disparate systems. This is why we tolerate so much variation of pronunciation in these (putative) words: because we just need to convey assent or dissent not home in on a specific lexical item. If you mispronounce “house” you risk misunderstanding, but you can indicate assent in many verbal (and non-verbal) ways and not be criticized for it. This is also why the Beatles used “yeah” so often in their songs: it represents a more primordial state of mind than regular words. The “yeah” sound is joyful and optimistic, indicating harmony, consent, and agreement (no Beatles song has “Nah nah nah” in the chorus); it indicates a positive state of mind, extra-linguistically. Cavemen are often depicted as communicating by means of grunts: this has psychological truth to it in that non-linguistic communication goes to our more basic instincts. The grunt is universal and easily understood. “Yes” is the most beautiful word in the English language precisely because it isn’t really a word—it isn’t a component of that formal computational system that came into existence a mere 200,000 years ago.[4] Cooperation is the sine qua non of a social species, so expressions of affirmation are of the essence. Our word “yes” packs all of that into its short span (“no” is its unwelcome sidekick). It is a profoundly loaded word without really being a word at all (a combinatorial grammatical unit). We could do without it so long as we were adept at non-verbal communication (perhaps the Welsh and the Finns are). Say no to “yes”, but do so without saying “no”. “Yes” and “no” correspond to primitive acts, biologically based; the words are just recent tokens or tags.[5]

Colin McGinn         


[1] This shows that “yes” and “no” are not inter-definable using negation, unlike “true” and “false”: “yes” can’t mean “not no” and “no” can’t mean “not yes”—simply because these are not well formed. This is why we never use such locutions, though we can of course say, “I’m not saying yes” and “I’m not saying no”. These latter two sentences are curious in their own right, since they are using “yes” and “no” when they should be mentioning them. Any logically aware writer is uncomfortable with such sentences. Language is trying to squeeze “yes” and “no” into ordinary sentence frames. It’s like saying, “He said hello”, which is ambiguous at best.

[2] In the Geordie dialect we have “why aye” in which “why” does not have its usual meaning. Presumably it is the rhyme that makes this form attractive to speakers (“Are you going to see Sunderland play today?” “Why aye, man”).

[3] Shakespeare has King Lear utter the following “sentence” at the death of Cordelia: “O, O, O, O!” This is “language” reduced to the level of the grunt—but in context a sublime grunt. Compare “Yes!” uttered in jubilation.

[4] More accurately, that is when human speech entered human history, but the language faculty could have predated vocal speech by a long time, perhaps used for the purpose of enhancing thought. 

[5] Of course the same story could be told for “si” and “oui” and the rest: all these phonetic units are surrogates for the act of affirmation.

Metaphysical Necessity

We appear to have (at least) two concepts of necessity, usually known as epistemic necessity and metaphysical necessity. Epistemic necessity concerns what could turn out to be the case—what might be true “for all we know”; it correlates with certainty (the Cogito is an epistemic necessity). Metaphysical necessity concerns what could really be the case—how things could be in themselves; it has to do with objective essence. The word “metaphysical” isn’t doing much work here: we could as well speak of non-epistemic necessity, since metaphysical necessity is defined by contrast with epistemic necessity. We could add analytic and nomological necessity to the list: what is conceptually necessary and what is necessitated by natural law. Standard examples of metaphysical necessity belong to neither of these categories, being both synthetic and modally stronger than nomological necessity. What is striking is that we have no analysis of metaphysical necessity, as we have an analysis of epistemic necessity. We can say that epistemic necessity is certainty and epistemic possibility is uncertainty (or ignorance), or we can analyze the concept in terms of epistemic counterparts[1]; but we have nothing comparable to say about metaphysical necessity—here we have to take the concept as primitive. We have to take it as a brute fact that this table is necessarily made of wood or that a person necessarily has his or her actual parents. We have intuitions, but we have no account of these intuitions. This is quite puzzling: why should we have such intuitions, and where do they come from? Am I simply directly aware of the objective essence of things? Do I have a basic unanalyzable concept of non-epistemic metaphysical possibility? In the case of the other types of necessity we can see where they come from: from our state of knowledge, from concepts, or from the laws of nature. But metaphysical necessity appears ungrounded and unexplained: our concept of it appears primitive and inexplicable. This can fuel skepticism about the whole notion of metaphysical necessity (and possibility): is it perhaps just a trick of the imagination? What is its epistemology and what its conceptual underpinnings?

            There is one form of modality we have not mentioned: what I will call agent modality. This concerns what we (and other agents) can and cannot do. What we are free to do is what we can do and what it is possible for us to do. We are aware of this kind of necessity and possibility from our own case, and we recognize it in others. We are, in fact, painfully conscious of the limitations on our possible actions, yet also conscious of what lies within our power. We can make comparative judgments about this kind of thing. We have the idea of beings with superior agential powers—God, in the extreme case. Thus I am now aware of my possible courses of action today, and of my life decisions (I could have been a psychologist instead of a philosopher). But I have no power to change my height or my species or my parents, and I know it. There are agential necessities as well as agential possibilities. These are not epistemic: it isn’t that I might turn out to be a psychologist after all, or that I am certain of the identity of my parents. Rather, these are objective facts about my powers of action—about my abilities. So here is a category of objective non-epistemic necessity to set beside the usual category of metaphysical necessity. Of particular interest is the ability to change things: I can change my location, my clothes, my hairstyle, and even my occupation; but I can’t change my parents or my species or my identity. So there is a correspondence between agential and metaphysical modality, and an affinity of nature. Is this a coincidence?

            Consider Hesperus and Phosphorus: they (it) can change their location, but they can’t change their identity with each other. Planets have the ability to move, but they don’t have the ability to cease to be self-identical. Thus the concept of agential modality can be generalized to them: it isn’t a matter of free decision, to be sure, but it is a kind of power. Tables, too, can move, but they have no ability to change their material composition. Animals can walk around, choose a mate, and eat, but they can’t change their parental origin or species. Nor can other agents change the traits in question: it isn’t that we can change the identity of planets or the composition of tables or the origin of animals. No one can alter these things: they are agential necessities tout court. Not even God has the power to change these facts: he can’t make 3 even or water not H2O or Queen Elizabeth the daughter of Bertrand Russell and Gertrude Stein. God can do a lot—a lot is possible for God—but he can’t do just anything. Some actions are perfectly possible, within the agent’s powers, but some are impossible, even for the most powerful of agents. This is beginning to sound a lot like metaphysical modality, is it not? We might, then, venture a hypothesis: our concept of metaphysical necessity is an outgrowth of our concept of agential necessity (and similarly for possibility). We understand metaphysical modality on the model of agential modality—that’s where we get the idea from. We know what agential possibility is, originally from our own case, and then we generalize it to include metaphysical possibility. Accordingly, the examples of metaphysical necessity with which we are familiar are special cases of agential limitations, specifically limitations on God’s agency (or any conceivable agent). To be metaphysically necessary is to be such that no possible agent could change it. No possible agent could change this table from being made of wood to being made of ice—because that would make it a different table. You could replace each wood part with a similarly shaped chunk of ice, until the whole thing was changed to ice, but that would destroy the original wooden table, replacing it with a new ice table. Our intuition of necessity can thus be cashed out as an intuition of agential inalterability. That is what we are really thinking when we think that this table is necessarily made of wood: that no one could make it otherwise. This is not a conceptual reduction of the concept of metaphysical necessity (for one thing, it uses the concept of a possible agent); it is an attempt to link the unmoored concept of metaphysical necessity to something more familiar, more part of everyday life. It is a conceptual domestication—an elucidation or genealogy. It tells us from where the metaphysical concept derives. It tells us what family of concepts it belongs to, what its conceptual relatives are. It is true that the metaphysical concept transcends these practical origins, but it doesn’t entirely leave them behind: it builds on them, feeds off them, and exploits them. We might even offer that without them the concept of metaphysical necessity would not be available to us: we would draw a blank on questions of metaphysical modality if we had no prior notion of agential modality. The latter concept is a necessary precondition of possessing the former concept. It gives us the leg up we need. This is a case of conceptual leapfrogging or ladder climbing.
Like many philosophical concepts, it takes its rise from something homelier.
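Put schematically, as a rough gloss rather than an analysis: writing "Can(a, not-p)" for "agent a has the power to bring it about that not-p", with the quantifier ranging over possible agents (God included), the proposal is roughly

\[
\Box p \;\longleftrightarrow\; \neg \exists a \,\mathrm{Can}(a, \neg p).
\]

The right-hand side still uses the notion of a possible agent, which is why this is offered as a genealogy of the concept rather than a reduction of it.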

            We can test the hypothesis by asking how changeability correlates with necessity: are the least changeable things the things with the most metaphysical necessity? Numbers are notoriously changeless, but they are also heavily endowed with necessity: everything about them (almost) is charged with necessity. If we ask what can be changed about the number 3, the answer is hardly anything. By contrast, the self admits of a great many changes—of place, activity, psychology, perhaps even physical composition—and it is also highly contingent. Almost everything about the self is changeable and contingent: you can even in principle put the self in another body by brain transfer, and selves are not necessarily tied to a given body. The more a thing can be changed by a suitable agent the more imbued with contingency it is. Organisms and physical objects are intermediate between numbers and selves: quite a bit can be changed, but quite a bit can’t be. You can easily change the location of a cat, but not its body type (if you put a cat’s brain into a dog’s body, you don’t get a cat—though you may get the cat’s self). Tables will accept changes of location and color, but they resist being converted into TVs or repaired by being recast in a different material.[2] This is all to say that our thoughts about what is metaphysically necessary or contingent are shot through with thoughts of what it is possible for agents to do. Two seemingly extraneous concepts thus intrude on these metaphysical matters: concepts of agents and actions. We are thinking of agents and we are thinking of their actions when we think about metaphysical modality. We aren’t just thinking of objects and their properties: we are thinking of what agents can and cannot do in relation to those objects and properties. When I think that I could have had a different career I am thinking that I could have acted differently; when I think that a table could have been in a different place I am thinking of its powers of movement and of possible external causes of its movement (say, someone picking it up). When I think that I could not have had different parents I am thinking that, while I could have left my parents’ house earlier, it was not within my power to sever myself from them biologically. I am thinking, that is, of agency and action. My thought is not just about my possible properties, barely considered. Similarly, my modal thoughts about the table are not confined to the table and its properties; I am taking in other objects and other properties, specifically agents acting on the table. I am placing the table in a wider and richer conceptual context. So the concept of metaphysical necessity is not as bare and ungrounded as it may appear; it has its roots in a rather practical and useful set of concepts having to do with action. Epistemic necessity has its roots in concepts of knowledge, justification, and certainty; metaphysical necessity has its roots in concepts of agency, power, and action. Neither is self-standing and primitive.[3]

Colin McGinn


[1] This is Kripke’s notion of epistemic modality in Naming and Necessity (1972): roughly, a situation is epistemically possible if we could be in an epistemic situation qualitatively identical to the actual situation and yet the facts are otherwise. It is notable that Kripke says virtually nothing to articulate the concept of metaphysical necessity, beyond noting (correctly) that it has a strong intuitive content. My aim here is to remedy that lacuna—so I am seeking to save metaphysical necessity not bury it. I want it to seem less strange. Less exotic.

[2] We can allow for grades of metaphysical necessity, according to how easy it is to change a given property. It is very easy to change one’s location, but not so easy to change one’s career or color or personality, so one is more possible than the other. And that is intuitively correct: it does seem more possible to move to a different place than to acquire a different personality—since one condition is easier to achieve than the other. The binary opposition of metaphysical necessity and metaphysical contingency is too simple, too black and white. Similarly, epistemic necessity also admits of grades: some things are less epistemically possible than others—we can be more certain that the sun will rise tomorrow than that the stock market will rise tomorrow. Both types of modality come in degrees.

[3] Here is another point: the logical analogy between modal concepts and deontic concepts is well known, and deontic concepts concern agents and actions. Obligation maps onto necessity and permissibility maps onto possibility. Locating the source of modal concepts in agential concepts therefore comports with the general tenor of the concepts in question; certainly deontic modalities are explicitly agential. 
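For reference, the mapping is the textbook one: the deontic operators obey the same dual pattern as the alethic ones, with obligation playing the role of necessity and permission the role of possibility:

\[
\Box p \leftrightarrow \neg \Diamond \neg p \qquad\qquad O p \leftrightarrow \neg P \neg p.
\]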

Possibility and Actuality

How do possibility and actuality differ? Is there anything intrinsically different about them? Some metaphysicians have supposed that the difference is entirely extrinsic: actual states of affairs and possible states of affairs are intrinsically the same, but the former constitute what we call the actual world and the latter constitute possible worlds. Thus we have the indexical theory of actuality: the actuality operator is equivalent to the demonstrative “this world”.[1] People in different worlds can employ this demonstrative, thereby treating their own world as actual. The worlds differ not at all in their intrinsic nature, with actuality just one instance of possibility; actuality is conferred from the outside, so to speak. Considered intrinsically, actual states of affairs and possible states of affairs are ontologically exactly alike. We could call this the modal uniformity thesis.

            But consider the following fact: actual states of affairs always carry with them other actual states of affairs, while this is not true for possible states of affairs. For example, a wall is actually painted beige but could have been painted blue: its being beige is accompanied by being in a certain location, being a certain height, being painted by certain painters; but the possible state of affairs of being painted blue has no such accompaniments—it exists as an isolated fact. The possibility of being blue carries with it no commitments about what other possibilities combine with it: it could be blue and at any number of locations, of varying heights, painted by different painters, etc. Being possibly blue doesn’t cluster with other possibilities. By contrast, the actual state of affairs of being beige comes with a fixed totality of other actual states of affairs—there are no degrees of freedom here. Actualities come in packages not singly—in groups not individually. Once you actualize a possibility it loses its independence and becomes attached to other actualized possibilities. Actualities necessarily arrive in bundles, whereas possibilities exist in isolation from each other. We can call this the holism of the actual. If possibilities are atomic, actualities are molecular. Holism of the mental says that mental states necessarily come in bundles not in isolated singularities; holism of the actual says that actualities come in bundles not in isolated singularities. But possibilities are not subject to this kind of holism—they can exist in splendid isolation. We can envisage a possibility existing all by itself, but we can’t envisage an actuality existing all by itself—it must be embedded in a larger whole. If we think of actualization as a function, we can say that it takes separate possible states of affairs as its argument and gives as its value a complex of actual states of affairs. For example, the possible state of affairs of being painted beige yields as the value of the actualization function a package of multiple actual states of affairs. You can’t actualize being beige without actualizing a whole lot of other stuff, but you can create a possible state of affairs without creating other (logically unrelated) states of affairs. Actually being beige requires determinate other actual properties, but possibly being beige requires no such other properties—it exists as an isolated atom in modal space. It is a question how widely the holism of the actual extends—might it extend to the whole of the actual world?—but it certainly extends well beyond the actual state of affairs we are considering. It includes rather remote properties such as the material composition of the beige wall (actual walls always have a specific material composition). By contrast, the mere possibility of being beige is quite neutral with respect to such extrinsic properties: it is not embedded in a determinate matrix of other possibilities—possible beige walls don’t have a unique material composition. Possibilities are lone operators (or only travel with close family[2]) while actualities club together with other actualities (not necessarily logically related). Thus the modal uniformity thesis is false.
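The point about the actualization function can be put in compact schematic form (the notation is only a gloss on the paragraph above): actualization takes a single possible state of affairs and returns a whole bundle of actual ones, never a lone item,

\[
A(p) = \{q_1, q_2, \ldots, q_n\}, \quad n > 1,
\]

whereas a merely possible state of affairs can be entertained on its own (give or take its logical relatives, as in footnote 2).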

            This means that there is an element of stipulation that characterizes merely possible worlds in contrast to the actual world. We don’t (and can’t) stipulate what actual facts coexist—that is a matter of how things actually are—but we do (and can) stipulate what possibilities combine with others. We say, “Consider a world in which pigs fly and horses talk”—and no one can stop us doing that—but we can’t say, “In the actual world pigs fly and horses talk”, because that is not actually the case. A possible world is made up of independent possibilities that are stipulated to coexist, but the actual world is made up of actualities that come in packages and which cannot be stipulated away. Possible worlds are like mental constructions in this respect, but the actual world is a given objective reality. When we announce that a world is a totality of states of affairs we must observe that the totalities are differently constituted, according as the world is actual or merely possible: in the actual world the states of affairs come glued together, so to speak, while in the possible worlds the constitutive states of affairs coexist by something more like fiat. We specify a possible world, but we discover the actual world. Actualization entails bundling, but mere possibility allows separation. So the principles of agglomeration are different for the actual world and possible worlds. Reality is more densely packed in the actual world than it is in possible worlds—that is, actualities are tightly bundled, while possibilities are free-floating (not counting inter-possibility logical entailments). Actuality is molecular; possibility is atomic. So actuality is intrinsically different from possibility: it adds something new to possibility. The ontological structure of the actual doesn’t simply recapitulate the ontological structure of the merely possible. Actuality is a different type of reality from possibility. It is not simply a matter of the indexical “this world”, or some other extrinsic view of actuality: the actual and the possible have a genuinely different mode of being. If we were to explore the universe of the merely possible, we would find a very different structure to reality there: the possibilities would be laid out in neat rows, carefully separated; they wouldn’t come in bundles, as they do in the actual world (a beige wall conjoined with a bunch of other properties such as a certain height, weight, and material composition). The world of possibilities is more like the world of stars, planets, and galaxies, structurally speaking—both are laid out in space without any interpenetration. Possibilities are like trees in a forest, flowers in a garden, children in a school. Actualities, on the other hand, are like units of technology or universities or branches in a tree—inherently holistic and cooperative. An actual state of affairs is always a sub-unit of something larger. A possible state of affairs, however, is self contained, free floating, not beholden to other states of affairs. Actualization alters the mode of existence of the possible by linking possibilities together into larger wholes: it is a process of assembly. It thus ends the self-isolation of the possible.[3]


[1] This is the view associated with the work of David Lewis.

[2] The family members are just the logically related possibilities: the possibility of being both red and square always travels with the possibility of being red.

[3] From an abstract metaphysical perspective, the holism of the actual ought to strike us as more remarkable than we are apt to suppose. Actual reality is a kind of creative synthesis that contrasts sharply with the piecemeal nature of merely possible reality. The latter functions as a kind of disordered raw material for the former, mere ingredients for a would-be cake. Actualization is really a creative (almost miraculous) act, generating solid chunks of reality from languishing and idle elements. Possibility is like the formless gas that preceded the formation of stars and galaxies; actualization is like gravity in converting this unpromising stuff into something shapely and worth attending to. It is the holism inherent in actualization that makes reality as a whole interesting. If all we had were possible states of affairs, never actualized, reality would be a pretty sad and boring place; it is the actualization function that creates the world as we know it. Actualization is the root of everything worthwhile; mere possibility (pre-actualized reality) is a sorry business. Holistic actualization is the animating force of reality—the equivalent of divine creation. Without it possibilities are just aimless shut-ins going nowhere and communicating with no one. Being merely possible is a lonely and pointless kind of life; being actual is social and cooperative. When God was wondering whether to make possibilities into actualities he was wondering whether to inject reality with meaning (in one sense of “meaning”).        

Logic and Morality

Are there any affinities between logic and morality? The question may appear perverse: aren’t logic and morality at opposite ends of the spectrum? Isn’t logic dry and abstract while morality is human and practical? Isn’t one about proofs and the other about opinions? I think the affinities are real, however, and I propose to sketch them. Both concern guides to conduct: how we should behave, cognitively and practically. Logic gives rules to reason by; morality gives rules for action. These rules purport to be correct—to yield valid reasoning and right action. Thus logic and morality are both normative: they tell us what we ought to do. They are not descriptions of what we actually do but prescriptions about what should be done. These prescriptions can take a number of forms: on the one hand, logical laws, rules of inference, and avoidance of logical fallacies; on the other hand, moral laws, rules of conduct, and avoidance of immoral actions. Thus we have the three classical laws of logic (identity, non-contradiction, and excluded middle) and the utilitarian principle, or a list of basic duties (corresponding to consequentialism and deontology). We also have rules for making inferences: modus ponens and the Kantian principle of universalizability, say—as well as warnings against fallacious inference (don’t affirm the consequent, don’t try to infer an “ought” from an “is”). Neither subject is concerned to establish “matters of fact” about the natural world; both are concerned to improve reasoning, make us better people, keep us on the right track. It is good to reason validly and to do what is morally right. Thus logic and morality are procedural and prohibitive, rule-governed and critical. We apply them to facts in order to produce desirable results—true beliefs, right actions—but they are not a form of fact gathering analogous to physics or history.[1] They are practical not theoretical. They are active and engaged not laid-back and detached.
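For reference, the logical items just listed have standard formulations (the moral analogues have no comparably settled notation):

\[
p \rightarrow p \quad \text{(identity)}, \qquad \neg(p \wedge \neg p) \quad \text{(non-contradiction)}, \qquad p \vee \neg p \quad \text{(excluded middle)}, \qquad \frac{p, \; p \rightarrow q}{q} \quad \text{(modus ponens)}.
\]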

            It would be wrong to contrast the two in respect of formality. It is true that we have formal logic as taught in university logic courses, while morality can scarcely claim anything comparable (though there is deontic logic). But the logic and morality I am talking about are pre-formal—they are embedded in our natural competence at dealing with the world and are probably innately based. Logical reasoning existed before Aristotle tried to codify it, and morality pre-dates attempts at explicit refined statement. These are primitive forms of human competence, not dissimilar to language competence before grammarians came along. The distinction between logic and morality is relatively recent and may not have been salient to early humans. We know quite well what is meant by the “ethics of belief”, and we are not shy about pointing out fallacies in other people’s moral reasoning. Sound reasoning is sound reasoning—and it is what we should be aiming at. The distinction between logic and morality is not as sharp as we tend to think these days (I suspect it was less sharp in the ancient world than in the post-Christian world). Shoddy logical reasoning is deplored, as immoral action is. You should keep your promises and you should follow modus ponens; we can worry about fine points of logic versus morality later. If we suppose that animals possess rudimentary forms of logic and morality, are they really distinct modules in the animal mind? Logic and morality bleed into each other.

            Where does prudential reasoning fit? It is surely only logical (rational) to consider one’s own future wellbeing—so we might assign prudence to logic. But prudence is also behaving well to one’s future self, so that it falls within morality. Some moralists have even supposed (I think rightly) that prudence is a special case of morality—we have moral duties towards ourselves, as one sentient being among others. So prudential reasoning is both logical and moral—it has a foot in both camps. Or should we say that the idea of a dualism of camps is mistaken? Isn’t the line more blurred than contemporary culture recognizes? We think there is no morality in logic textbooks and that moral issues can’t be resolved by formal logic: but that is surely too narrow a view of both fields. Logic is up to its ears in normative notions, and morality is a domain of logical reasoning. If you are trying to resolve a complex moral issue, such as abortion or animal rights, you will find yourself invoking principles drawn from logic and from normative ethics—as we currently understand those fields. But from a ground level perspective these distinctions are blurred and irrelevant: you are just reasoning with whatever bears upon the topic. You are applying your logical and moral competence to a real world problem with a view to doing what is right. When you avoid deriving an “ought” from an “is” are you doing logic or morality? When you declare that all sentient beings have rights is that intended as a moral principle or a logical one? It functions as an abstract axiom used to draw conclusions—it is irrelevant whether it crops up in a standard logic text (they don’t even include modal logic). We shouldn’t have too narrow a view of logic, and we shouldn’t neglect the abstract character of much moral reasoning. I am inclined to say simply that moral reasoning just is logical reasoning—logical reasoning about questions of value.[2]   

            What about the point that logic is fixed, rigid and universal while morality is changeable, fluid and relative? Isn’t morality controversial and logic indisputable? But this is a naïve and tendentious way to think: logic has its controversies and morality is a lot more universal than many people suppose. I won’t rehearse the usual criticisms of moral relativism, subjectivism, emotivism, etc.; suffice it to say that morality is really a subject in objectively good standing. Also, logic is not free of internal strife: some find modal logic suspect, others favor intuitionistic logic, still others adopt a highly inclusive conception of logic (even accepting logical contradictions). Hardly anything in human thought is entirely free of controversy. Skeptics of the normative will object to “ought” in both logic and morality, but that simply underlines the affinity of the two areas. The essential point is that both logic and morality are normative systems designed to facilitate desirable outcomes; and both admit of a degree of formal articulation rooted in intuitive human faculties. We can all agree (if we are sane) that pain is bad and everything is identical to itself: why one should be assigned exclusively to something called “logic” and the other to something called “morality” is obscure. Both are self-evident propositions capable of functioning as axioms in a train of reasoning: the affinity is more obvious than the difference. You should keep your promises and not be cruel; you should existentially generalize and not affirm the consequent. Where exactly is the deep difference here? And this is before we get to non-standard logics like modal logic, epistemic logic, indexical logic, and deontic logic. They are all about reasoning and validity—but so is morality about reasoning and validity. If morality is about moral reasons, it is about moral reasoning: but then it is a logical enterprise. Logic is capacious enough to subsume morality, being the general theory of sound reasoning.

            Recognizing the affinity is helpful in resisting certain deforming conceptions of moral language and thought. Emotivism, prescriptivism, naturalism, psychologism, non-cognitivism, Platonism, contractualism: are any of these remotely plausible when applied to logic? People have toyed with such accounts of logic, but generally they have not found favor, so why should we seriously entertain them for morality? Moral discourse is like logical discourse—objective and normative—and should be treated as such. It is what it is and not some other thing. Thus its similarity to logic can work to legitimate it and avoid procrustean and reductive reinterpretations. Notably, the difficulty of finding justificatory foundations applies to both areas—some things just have to be taken for granted (pain just is bad, modus ponens just is correct). And just as logic should not be construed as a descriptive science of the platonic realm but as a normative system of rules of correct reasoning, so morality should not be thought of as describing the Good but as a normative system of rules of right action. If we cleave to the logical analogy, we can steer our way through dubious assimilations and deformations. Just to simplify matters, I recommend that we assert outright that morality is logic (part of it anyway): that way we have a neat antidote to various bad ideas about morality. This does not remove all philosophical questions about morality, but it raises the right kinds of questions. Logic, too, raises real philosophical questions, both metaphysical and epistemological, and these are the right kinds of questions to raise about morality as well. Morality, we might say, has a logical structure—and a logical role. It functions logically. Logicism is true of it (“moral logicism”). But I also think that logic needs to shed its antiseptic image and confess to its normative heart: it is really about how we should reason, what good reasoning is. To be sure, we can treat logical systems formally, as mathematical objects; but the thrust of logic is prescriptive and critical—evaluative. It is concerned with a certain human value (viz. good reasoning), and therefore naturally belongs with morality. It is part of “value theory”. Accordingly, it belongs with such practices as praise and blame, conscience and shame, reward and punishment, respect and disrespect.[3] An illogical person is not a moral person; irrationality is a vice. Moral goodness and logical goodness are inseparable attributes, seamlessly connected; indeed, we shouldn’t even speak in such disparate terms. The distinction between logic and morality is an untenable dualism, an artificial separation.[4]


[1] Thus the tradition has supposed both logic and morality to be known a priori, possibly to be synthetic a priori. This leads to reactionary attempts to demystify them: logic consists only of tautologies and morality is cognitively empty.

[2] In fact, I don’t believe there is a non-arbitrary definition of logic (or of logical constant) that separates off one kind of entailment from others, but I won’t go into this now.

[3] The awe and reverence Kant felt for the moral law has its counterpart in an idolatry of logic—as if logic has a godlike status (there should have been a Greek god dedicated to logic). This is logic as something sacred and sublime (to use Wittgenstein’s term).

[4] It may be useful for designing an academic curriculum but it doesn’t capture the real nature of our logical and moral being. Both are expressions of our underlying rationality.

Is Neutral Monism Possible?

My aims here are limited, as befits the topic. I will make some remarks about the proper formulation of neutral monism with a view to demonstrating its obscurity, not to say infeasibility. The thought is that we should seek a level of description of reality that is neutral between the mental and the physical so as to make progress on the mind-body problem. Putting aside the (very real) question of how to define “mental” and “physical”, we can ask what is meant by “neutral” here: what does it mean to say that a type of description, or type of conceptualization, is neutral? The word usually means something like “non-committal” or “impartial”—not favoring one thing or party over another. But whatever it is that unites the mental and the physical could not be neutral in this sense; on the contrary, it must be fully committed—in both directions at once.[1] For it must express the essence of both the mental and the physical simultaneously: it must, in a word, reduce the mental and the physical to some third conceptual category. Neutral monism must be a committed monism—not at all neutral about the nature of the mental and the physical. It is easy to be neutral (non-committal) about the nature of the mental and the physical; it is much harder to provide a positive account of them. The doctrine known as neutral monism is really best described as all-encompassing monism or unifying monism. If you believed the mental and the physical could be unified using the concepts of causality or information, you would be a neutral monist in the intended sense, but you would certainly not be neutral about the nature of the mental and the physical. What is true is that the unifying monistic theory can’t simply use existing mental or physical concepts to capture the nature of the mental and the physical—that would deliver either idealism or materialism—but it does have to commit itself on the nature of both things. Russell’s brand of neutral monism did precisely that by identifying sense data as the neutral stuff: but of course it clearly favored the mental in its construction of reality as a whole, and is really a form of idealism. So what kind of description are we looking for that can unify the two domains without biasing the theory to one side or the other?

            We might think we have something ready to hand, viz. what is called topic-neutral language. Discussions of the mind-body problem regularly invoke that category of expression, which is thought to be shared by both the mental and the physical. It includes logical language, mathematical language, temporal language, and language for causal relations, abstract structure, and modality. The idea is that such language is not confined to mental or physical discourse but crops up univocally in both. All well and good, but it is a bold man that claims such language can provide what the neutral monist seeks: this looks like a conspicuously exiguous basis on which to build a grand theory that unifies the mental and the physical. The language isn’t biased towards one or other side of the divide, but it is hopelessly weak as a putative reduction of the mental or the physical. So the existence of topic-neutral language is no comfort to a would-be neutral monist; it doesn’t encourage the idea that we might be able to contrive the kind of unifying description abstractly indicated. So far, then, we have nothing with which to fill out the conceptual terrain gestured at by the neutral monist. We are left at a high level of abstraction with no indication of how we are to produce the kind of theory we are looking for. The theory appears to be more of a wan hope than a substantial research program. Its logical form is an existence statement without any verifying instance.

            Can we find any analogue of neutral monism elsewhere? Then at least we would know what we are talking about—we would have a model to go by. Here I think we reach the crux: for there is a model, hugely influential historically, that lies behind the neutral monist’s ambitions, and functions as its main inspiration. I mean atomism in the theory of the physical world. According to atomism, seemingly disparate elements of nature can be unified in a common vocabulary, which functions reductively. Thus the four traditional elements of earth, water, fire and air can all be explained by postulating homogeneous atoms that appear in different guises. The atoms are “neutral” in the sense that they appear in each element equally as common factors; the difference arises from their manner of aggregation—specifically, how tightly packed and mobile they are. They are dense and immobile in rocks and other earthy objects, also dense but more mobile in water and other liquids, quite rarified and volatile in fire, and highly dispersed and moveable in air. The unification works by finding a common constituent and then shifting the observed variety to relations between the constituents, specifically relations of proximity and motion. This is a kind of neutral monism of the four elements—and it works. It is actually true that the fourfold reality reduces to a single reality! The natural world turns out to be a lot more homogeneous than we supposed; the ancient atomists’ dream turns out to be sober fact. This provides a boost to the flagging spirits of the aspiring mental-physical unifier—maybe such an atomistic monism can supply the unification we seek. So we declare that mind and body must be composed of atoms of some sort that are shared between them; the variety or divergence we observe is but a superficial reflection of different relations between these underlying atoms. As the same physical atoms can occur in fire and water, so the same neutral atoms can occur in pain and salt. The atoms just combine differently, producing pain in one case and salt in another. The neutral monist has thus provided a model for how his conjectured theory might be true. He isn’t stuck just flapping his hands with a faraway look in his eye.

            The trouble is, of course, that this kind of atomism is completely implausible as a theory of the mental and the physical. In the case of traditional atomism we are dealing with four types of physical phenomenon, but that is precisely what is not true of the mental and the physical. The atoms that work to unify physical phenomena don’t work to unify the mental with the physical. We would need a completely new type of “neutral” atom—a hitherto undiscovered particle—in order to vindicate the type of atomism suggested by the neutral monist. But we have no evidence of any such particle, nor even a clear conception of what we are talking about. So the model limps—in fact, it never even gets moving. It operates rather as a mirage, like illusory water on the desert horizon. It makes us think that we have a real theory-sketch in hand, which we just need to fill out; but in reality it distracts us from the nature of the problem. It gives us false hope. We still don’t know what neutral monism would look like if it were true. Citing the atomist precedent is yet another instance of trying to understand the mental-physical divide by reference to something quite different, i.e. divisions within the physical domain.[2]

            Does this mean that neutral monism must be false? No: it means that we don’t know how it can be true. We have no clear conception of what its truth might be like. It can’t be like idealism or materialism because they are not neutral; it can’t be stated by recourse to topic-neutral vocabulary because that vocabulary lacks the requisite expressive power; and it can’t be modeled on the example of classical atomism because it is a problem of a completely different order. Anything we can cite as a possible format for the theory fails to do what is required of it, and nothing else suggests itself. All we can say is: if neutral monism is true, then it must take a form that transcends what we can currently understand. Nor is it analogous to anything we do understand. Perhaps it will entail abandoning wholesale our current conceptions of the mental and the physical (a kind of “error theory”)—we are systematically deluded about the real nature of these categories. Maybe reality is fundamentally different from the way we naturally conceive it, and possesses a unity we cannot even dream of. Or perhaps the whole idea of unity is itself a mistake. In any case we have nothing substantial on which to base our hopes for the theory called “neutral monism”. It is a theory without precedent or precise formulation. That doesn’t make it false, but it does make it close to unintelligible.[3]

Colin McGinn         


[1] We might label it “Janus-faced monism”: it has to provide a unitary vision from two directions of gaze.

[2] Compare all those well-known analogies to empirically discovered identities in the physical sciences such as “Heat is molecular motion”. 

[3] By “unintelligible” I mean unintelligible to humans, not contradictory or otherwise necessarily false. It might be a true theory we can never grasp, even in outline. At present it amounts to not much more than the proclamation, “There must be something unitary out there otherwise the world would make no sense”. 
