An Argument for Nothing


The philosopher with no name maintains, fittingly, that nothing is real. In pre-Socratic style, he proclaims, “All is nothing”. He is a total eliminativist (going by the code name TE). We could call him a “nothingist”: everything is nothing, according to the nothingist.  [1] Not for him Being and Nothingness, but Nothing and Nothingness. TE contends that everything we talk and think about is fiction, pure make-believe; none of it is real. Science deals in fictions all the way down: its quantifiers range over only non-existent intentional objects (like Sherlock Holmes). Attributions of existence made by the unenlightened are simply false. TE notes that many more things don’t exist than do, that we often make mistakes of existence, and that we have no clear idea what existence is anyway—so why not go the whole hog and abandon the idea altogether? Isn’t everything a bit fictional, a bit made up, even under our current conceptions? Why, then, hang on to anything non-fictional? The OED defines “exists” as “have objective reality or being”, but many attributes of objects are projected or imagined or subjective in some way (color, beauty, solidity, etc.). Objects don’t objectively have these attributes. Maybe all of it—the manifest image, the phenomenal world—is so much projection and fancy, so that objective reality is not part of our actual worldview. Occam’s razor thus recommends ditching the idea of existence in favor of the fictional posit, the useful construct. The world is all appearance without reality. TE is a global anti-realist: all so-called reality is just so much unwarranted reification. We have heated disputes about what really exists—numbers, universals, values, colors, patterns, gods, and other universes; TE proposes that we simply abolish everything, cleanly and decisively. This, he points out, will solve many problems, since if nothing exists nothing is problematic.
We won’t need to acknowledge mysterious realities, because nothing is real to start with. “Exists” is a strong word, a committal word, going beyond what we have any warrant for claiming—how can existence ever be verified?—so we do well to dispense with such assertions. What does it even mean to say that something exists? From what impression does the concept derive? Isn’t the ordinary concept essentially pragmatic, signifying something like “what we have reason to care about”? You needn’t worry about unicorns eating your grass, because unicorns don’t exist; but be watchful of tigers, because they assuredly do exist and can do you serious harm. “Exists” means attention-worthy; “doesn’t exist” means not worth bothering about. Why glorify this pragmatic concept as denoting a special kind of objective property and then rack our brains wondering what things really have it and what it comes to metaphysically? For TE the whole idea of existence, as the philosopher understands it, is a crock, a myth–so much philosophical nonsense. Away with existence! We can carry on talking without it, and still do science, and still make useful distinctions according to pragmatic criteria. Nothingism is a liberating doctrine, a way to let the fly out of the fly bottle; it allows us to view the world through a healthier and less discriminatory lens. We got rid of absolute space and time, we got rid of vital spirits, we got rid of gods and fairies, we even got rid of solid lumps of matter—now is the time to get rid of existent things altogether. As a bonus, we will at last have an answer to skepticism: we don’t need to worry that the external world might not exist, contrary to commonsense belief, since we know that it doesn’t exist—we have eliminated this idea from our conceptual scheme. We can still distinguish the serious from the unserious—“reality” from “fantasy”—using pragmatic criteria, but there is no deep question about whether what we believe in really exists. 
Tables and chairs don’t exist, because nothing does, so there is nothing whose existence skepticism can threaten to undermine. There is no reality whose nature we might not know, there being no reality at all. You can’t fail to know what isn’t there. All in all, the nothingist presents an attractive picture from a problem-solving point of view; we just need to get our minds around it and relax (he says). Admittedly, it takes some getting used to, but isn’t that true of most intellectual breakthroughs? Paradigm shifts and all that. The philosopher with no name has shown up in town with guns cocked, ready to drive out the undesirable elements. He has no time for the Existent Being Boys, those self-important intellectual troublemakers.

            No doubt TE cuts a striking figure (a high plains drifter  [2]), but we may wonder whether, despite his self-advertisements, he has any real argument for his startling position. Can he prove that there is nothing? Maybe it would be nice if nothing existed—it would take away our intellectual headaches—but can it be demonstrated that nothing exists? I can imagine a line of argument that might qualify, which I propose to outline. It might seem suspiciously clever, but when has that ever been an objection to a philosophical argument? It goes as follows. We start with a basic principle about knowledge and reality, namely that nothing unknowable exists. Things must be of such a nature that they can be known. If anything exists, it knowably exists—for example, material objects must be knowable in order to exist. This leads by a familiar route to the idea that material objects must be somehow reducible to, or essentially involve, sense data (we leave open precisely what sense data are). When we say that a table exists we mean that certain sense data are obtainable—not that there is some noumenal entity whose existence we must blindly postulate. So let us accept that metaphysical position for the sake of argument: nothing yet follows about the non-existence of tables; on the contrary, they exist as robustly as sense data. But now we notice that sense data have an odd epistemology: while they are indubitably known from the first-person perspective, they are apparently unknown from the third-person perspective. And that perspective is as essential to them as the first-person perspective: sense data exist in the shared objective world, as well as being introspectively apparent to their subject. They have both first-person subjective reality and third-person objective reality (they have a basis in the brain and can cause things). But they are epistemologically problematic from the latter perspective, so we need to render them knowable from that perspective.
To achieve that objective we reduce them to observable behavior. So far, so good: we have reduced material objects to sense data and sense data to behavior—nothing eliminative yet. We have simply respected our basic principle linking existence to knowledge (if there is no such link, why postulate existence at all?). True, we are being reductionist, but that begs no questions in favor of eliminativism: sense data exist and so does behavior. It is the next step that puts the cat among the pigeons: for we can’t help observing that behavior is an affair of the body, which is a material object. That means that we need an account of it that respects our principle, and reduction to sense data seems the only way to go (or something similar). So we reduce behavior to sense data as of behavior. But now of course we need to explain how these sense data are accessible from a third-person point of view, which we do by reducing them to suitable behavior; and thus the cycle begins again. An infinite regress of reductions ensues. By insisting on our principle—by no means question-begging—we are led to adopt reductionism about the material and the mental; but that leads us into an infinite regress as behavior gives way to sense data of behavior and these sense data in turn need their behavioral expression. We are thus faced with a dilemma: either we reject our principle or we give up on existence. The former option is unattractive, because it severs the connection between existence and knowledge; so we are left with the latter, which abandons the idea that material objects and sense data exist. Since they don’t exist, there is no need to link them to knowledge, so no need to offer reductions of them, so no regress of reduction. Reduction (or anything similar such as “criteria”) is simply not required under the assumption of non-existence.
The choice, then, is between nothingness and mystery: for if the objects that allegedly exist are not knowable, they are mysterious—not objects of knowledge. The objects become noumenal in so far as they are declared unknowable. We can try to avoid this result by constitutively linking the objects with sense data (however construed), but that leads to regress once the existence of sense data is considered. In other words, a familiar predicament concerning reality and knowledge turns into an argument for the position that resolves the problem, viz. total eliminativism. TE thus has a colorable argument for the doctrine he recommends on broadly methodological grounds—he can prove what he says would be nice. The doctrine is not only advantageous from a problem-solving perspective; it is also capable of direct demonstration (given some reasonable assumptions). Only a type of mysterianism  [3] stands in the way, but nothingism will have no truck with that—it offers us a way of avoiding that epistemological disaster. If the choice is between total mystery and total non-existence, TE urges us to accept the latter. Only rigid adherence to the concept of existence stands in the way of intellectual liberation. We need to cut this concept loose.

            The nothingist applauds our standard anti-Meinongian incredulity, but wonders why we stop there. He thinks we throw the concept of existence around far too freely, and don’t take seriously the problems inherent in it. His recommendation is to dispense with Being altogether: there is no subsistence and no existence. Meinong is wrong, but so is Russell. As the Beatles sang in “Strawberry Fields Forever”: “Nothing is real, and nothing to get hung about”. We are apt to suppose that the king of France lacks existence and the queen of England has it—but why the distinction? Neither has the peculiar property of existence, though it is true that we have more to fear from the queen than from the king—and that is the only distinction worth drawing between the two. Talk of existence is just so much airy metaphysics, according to TE. Meinong thinks that everything mentionable has Being; we ordinary folks think (like Russell) that some mentionable things have Being and some don’t; TE thinks that nothing mentionable (or unmentionable) has Being—not really, not when you get right down to it. For TE we are closet Meinongians by another name.  [4]

 

  [1] We have the monist, the dualist, the pluralist–and the nothingist.

  [2] See the film of that name starring Clint Eastwood, himself a non-existent being.

  [3] Or as we might say “ignorancism”: in either case drastic epistemic limitation is posited. It’s either the unknowable thing-in-itself or nothing at all—two worlds or no world.

  [4] I hope it is clear that I am not myself intending to subscribe to nothingism here; I am just trying to give the view a run for its money. I favor the despised mysterian position, but I think the nothingist position is worth thinking about. It is not without argumentative resources. And it is fun to think about.


Economic Altruism


No less an authority than Pope Francis has this to say: “Feverish consumerism breaks the bonds of belonging. It causes us to focus on our self-preservation and makes us anxious”. Why consumerism should “break the bonds of belonging” (whatever exactly that means) is not made clear: why should buying stuff interfere with desirable social relations? We often buy things together or as gifts for each other, and buying things alone for oneself is hardly a source of social breakdown. Also, why does consumerism (itself undefined) make us anxious and focused on self-preservation? We are already anxiously focused on self-preservation for obvious reasons (death, disease, war, etc.), and our purchases often ease these concerns by affording us security and protection (clothes, homes, electricity, food, etc.). Would we be less anxiously focused on self-preservation if we didn’t buy ourselves anything? Hardly. Perhaps the author intends all the argumentative weight to be carried by “feverish”: it is consuming feverishly that brings about these woes. But doing anything feverishly is liable to have untoward consequences—even giving money to charity if done feverishly (imagine a person feverishly working all the hours of the day in order to give money to charity, neglecting his family, working himself into an early grave, a monomaniacal loner). The pope obviously realizes that a little bit of consuming is not necessarily a bad thing, so he qualifies his condemnation by prefixing “feverish”; but then the force of his criticism is blunted—what precisely is he criticizing? He doesn’t say so, but I assume his underlying complaint is that consuming is a selfish thing to do—and selfishness is a vice: all that spending money on yourself, treating yourself to this and that, buying yourself toys and fancy meals—instead of giving your money to charitable causes. Why not be more altruistic and give some money away instead of selfishly buying what makes you happy?
Consumerism is thus the enemy of morality: it is pure selfishness. This has been a common complaint throughout the ages and is certainly part of traditional Christian teaching: we should be more altruistic and not so selfishly consumerist. Stop spending so much money on yourself (feverishly or calmly) and give more to others!    [1]           

            I think this moral position ignores an important aspect of consumption, even when what is consumed is entirely self-directed (which it often isn’t): namely that, in buying things for ourselves, we give money to others. Buying is also giving. We do the vendor a favor by buying from him, even when our aim is entirely egoistic. If I buy a new tennis racket in order to play better tennis, I make a donation to the seller of the racket—I provide him with an income. If I didn’t, he wouldn’t have one. If everyone stopped consuming, everyone would be out of work—no spending, no receiving, and hence no income. Someone might in fact spend with entirely altruistic aims: he doesn’t want anything for himself, but he consumes in order to provide others with an income. Of course, he could just hand the vendor the money and get nothing in return (unalloyed charity), but that has obvious disadvantages: people like to work and earn their money; we need functioning industries and other forms of work to live well; we would be depleting our own resources for nothing in return, which may lead to destitution and death. It’s better to make your transfers of cash to people who give you something in return: it’s better for everyone that way. This is not to say that you should never give to charity—you clearly should in certain circumstances—but it is to say that not doing so by consuming instead is not a purely self-benefiting act. It is altruism by egoism. That is the nature of purchase: you take from others by giving to them. There is nothing immoral about this arrangement, nothing culpably selfish (every time you eat you are acting “selfishly”). Self-preservation is not ipso facto morally bad (pace the pope, apparently). You should pay a fair price, to be sure, but if you do you benefit the vendor—you make his life better. You are not unfairly depriving him of anything; you aren’t stealing. Consuming is perfectly moral; not consuming is what is immoral.
If you are a habitual miser, you decline to give your money to others for services rendered, thus reducing their income—an economy full of misers quickly tanks. Even strenuous (“feverish”) consuming is morally commendable, so long as you are handing over cash to other people; or at least it is not morally impermissible. Okay, don’t do it all the time, leaving no room for other worthwhile activities and interests, but there is nothing amiss with doing it regularly (compare other human activities that the church has seen fit to prohibit). It is a form of wealth distribution. It is not just selfishly hoarding up stuff for your own pleasure without regard for the welfare of others. There is no need for guilt as you make that big purchase—many people will benefit from it. Instead, think of all the good you are doing to complete strangers: thanks to you they have food on the table, happy children, a worthwhile life. Admittedly, we don’t want too much economic inequality in our society, or grotesque McMansions, or fleets of carbon-emitting sports cars: but that has nothing to do with consumerism as such. Spending is really just like charity, except that the recipient has to do something in return. If he can’t, then charity is appropriate; but if he can, there is nothing objectionable about getting something in return. Indeed, it is positively desirable from a moral point of view—you are actively helping people. This fosters social bonds; it doesn’t break them. People like being paid by you. Christianity has given consumerism a bad name by associating it with greed and anti-social behavior, but it is no such thing—not considered in itself.    [2] Charity can be a bad thing too if done thoughtlessly or from egoistic motives or without regard for consequences, but that doesn’t imply that charitable giving is somehow unethical (the vice of “giver-ism”). It is the same with the kind of giving that occurs in an economic transaction—capable of abuse but not inherently immoral. 
It is a bizarre form of puritanism to suppose that consuming is antithetical to morality—on the grounds that the consumer gets some pleasure out of it. It is not necessary to suffer in order to be a good person; self-deprivation is not the essence of the moral life, despite what the Catholic Church may have to say. True, the consumer is no rabid ascetic, but that is not a moral criticism. The wise consumer is a happy consumer, not least because of the altruism manifested in his acts. Remember that you are a person too and thus deserve moral consideration, from yourself as well as from others; it is not moral to treat yourself badly. So the consumer is not immoral simply because he treats himself well: he treats himself well by treating others well—by handing them money. He receives, but he also gives, necessarily so.

            And there is this not inconsiderable point: in charitable giving the recipient is in the donor’s debt, but not so in economic exchange. We always put people in an awkward position by giving to them—because then they owe us—but if we buy from someone we can give without incurring the recipient’s indebtedness: there is no burden of gratitude, no feeling that one must somehow reciprocate. Indebtedness can fray relationships and break bonds—indeed, some people give precisely in order to gain a moral edge over others. We can bypass all this by always receiving as we give. Everybody is happier that way. In an ideal society there would be no charitable giving (and so no moral indebtedness), but plenty of non-charitable giving—otherwise known as buying stuff.    [3] Perhaps we should re-label the consumer: she is actually a payer, a giver, and a producer (of other people’s wellbeing). Even a feverish one of those is not to be condemned (the sin of “producer-ism”).

 

Colin McGinn

    [1] This position ignores the fact that it is possible to consume for the sake of other people—you like to buy stuff and then give it away. So there is no necessary link between consumerism and selfishness. But let’s ignore this obvious point so that we can focus on a more interesting fault in the pope’s reasoning.

    [2] It is not to be confused with capitalism, for reasons too obvious to mention.

    [3] We should also reject the stereotype of the consumer as someone who accumulates manufactured goods beyond any real need (hundreds of shoes, dozens of cars, multiple homes). We can also consume music lessons, books, the works of local artists, the services of lawyers, gym memberships, and many other worthwhile goods and services. Many of the things we consume are indisputably good for the soul. Don’t Catholics consume things as part of their religion—such as the teachings of the pope (someone has to pay for his upkeep)? What about cathedrals? 


Disease and Belief


Are there any diseases of the belief system? Apparently there are: they have names like schizophrenia and bipolar disorder. These diseases (OED: “a disorder of structure or function in a human, animal, or plant”) cause the sufferer to form false and irrational beliefs, sometimes whole belief systems we label “delusional” or “crazy”. These defective beliefs can cause harm to the believer and to others (think of paranoia). But are there any contagious diseases of the belief system? We can certainly imagine such a disease: there could be a species that contracts defective beliefs by transmission from one believer to another. Giving voice to certain beliefs could cause them to be formed in another mind by a kind of automatic transmission, no convincing justification necessary. It would be possible for belief “viruses” to be concocted in a laboratory and then intentionally sent out to infect the local population. So long as the population was receptive to invasion by these agents of belief formation we can imagine them spreading according to the standard epidemiological model. The beliefs spread meme-like across the population. We could think of the contagious belief as a “mind virus” (cf. computer virus). It might be that the beliefs involve wacky ideas about the origins of the universe or other people’s motivations or the secret life of cats. We can imagine these false beliefs doing a good deal of harm (there is a movement to put down all cats).  Preventative measures would be possible: don’t listen to or read any material with potentially dangerous content, keep away from others already suffering from the disease, and stay at home. Maybe a “vaccine” can be produced that immunizes people from infection: scientists inject a mild form of the disease into people so that their critical faculties become sensitized to this sort of invasion, thus reducing the chance of catching the disease in its most florid form. 
The principle is that once you’ve been in a cult you are not likely to join another one: you recognize the dangers and smartly walk away. Or some talk therapy might be indicated: simply tell people not to take those weird rumors seriously—point out how harmful they can be (some horrifying videos might be effective). For this species of believers, susceptible as they are, the infectious disease model would be entirely appropriate: they are prone to a disease that spreads in the usual way, and which can be managed by the standard procedures. The disease might even have a name: “beliefitis” or “assentosis” or “Wilkinson’s syndrome” (named after its discoverer). Doctors would be used to treating it, applying properly tested protocols, holding scientific conferences on the subject. For them it is a recognized branch of medicine.
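The “standard epidemiological model” invoked for this imaginary species can be made concrete with a minimal SIR-style sketch (susceptible, infected, recovered believers). The code below is purely illustrative: the function name and every parameter value are my own assumptions, chosen only to show the flare-up-and-fade dynamics the thought experiment relies on, not drawn from any real epidemiology of belief.

```python
# A minimal discrete-time SIR-style sketch of belief contagion.
# All parameters are illustrative assumptions, not empirical values:
#   beta  - chance a susceptible mind adopts the belief per contact
#   gamma - rate at which current believers abandon the belief
def simulate_belief_spread(population=1000, initially_infected=1,
                           beta=0.3, gamma=0.1, steps=100):
    s = population - initially_infected  # susceptible minds
    i = initially_infected               # currently "infected" believers
    r = 0.0                              # recovered (immunized) skeptics
    history = []
    for _ in range(steps):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = simulate_belief_spread()
peak = max(i for _, i, _ in history)
print(f"peak believers: {peak:.0f}")
```

With these made-up numbers the belief spreads, peaks, and then declines as the pool of susceptible minds is exhausted, which is exactly the pattern that makes quarantine, vaccination, and early intervention intelligible strategies in the story above.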

            I set up this imaginary case in order (of course!) to throw light on the actual human situation. For it is evident that a situation very like this obtains in the human population: people are extraordinarily susceptible to disorders of the belief system. We need to form beliefs by transmission from others (“testimony”), this being an essential part of learning, but our defenses against bad beliefs are far from stellar. If we think of ourselves as possessing a cognitive immune system, then it is a notably porous one: all sorts of cognitive pathogens get through our defenses. Rationality (logic) acts as our immunological filter, but it is routinely bypassed and outmaneuvered. Those wily belief memes slip through its defenses with alarming ease. Some people have very weak cognitive immunity, lending their assent to almost anything they hear–the crazier the better, as far as they are concerned. Just consider all those ridiculous conspiracy theories that thrive on the Internet: they have no trouble infecting the brains of people with deficient cognitive immunity. Preventative measures are in principle possible—cover your eyes in the presence of such material, wear earplugs when necessary, don’t go near other people already infected—but it is virtually impossible to get people to follow these guidelines, no matter the prestige of the prescribing authority. Conspiracy theories about the motivations of the relevant experts can easily subvert their recommendations; and a national mandate is deemed politically unacceptable. So the belief virus keeps propagating, infecting, and transmitting. If the beliefs in question concern another disease, a purely physical one, then we have a pair of diseases running in parallel: a disease of the body and a disease of the mind. Both may be lethal if the penalty for erroneous belief is death. 
As the physical pathogen spreads and multiplies, so too does the mental pathogen: a psychological disease accompanies the physical disease (at least for people susceptible to the belief virus). In any case, these disorders of the belief system deserve to be thought of as disease-like, as much as physical disorders are. The offending beliefs are really a type of germ (from the Latin for “seed, sprout”): that is, they act as replicative agents of disorder—mental disorder in this case. They operate just like regular harmful germs from an epidemiological perspective. Not all beliefs are disease vectors, of course, just as not all germs are (some are perfectly harmless), but some are, well, virulent. And just as a schizophrenic strikes one as cognitively disordered, so someone in the grip of wacky conspiracy theories strikes one as mentally diseased (infected, invaded). It comes in degrees, but in extreme cases the beliefs achieve delusional stature: the sufferer is living in his own crazed world, cut off from reality. This is not at all uncommon—like the common cold. In fact, it is quite difficult to avoid getting infected—one’s defenses may not be able to ward off a concerted attack. Too much time spent with the wrong people can lead almost anyone to succumb to the disease. And there is no known vaccine with anything like the necessary efficacy (a logic course can only chip away at the problem). Once the virus has flooded the memosphere it can create havoc (the Internet is its prime vector). Hotspots will flare up, quarantining has little impact, and the disease rages on. People’s brains become breeding grounds for the virus, just itching to hop into the next brain. The belief virus goes viral.

            The only hope for a cure is early intervention: make sure children develop a robust cognitive immune system, capable of weeding out the diseased beliefs. This means an ability to criticize—rationally evaluate. Education is (partly) health education—strengthening the cognitive immune system. Show videos of people suffering from florid beliefitis (I name no names) and ask if the students want to end up like that. Warn people about the prevalence of the disease. Insist on protective measures. Above all, medicalize the problem—treat it as the disease that it is.  [1] Of course, it is necessary to have sound diagnostic methods, but that is not the insurmountable problem that some people imagine. You just need a qualified epistemologist to advise. Set up a panel of experts, get some funding, and take the problem seriously. We have to stamp out this scourge.

 

Colin McGinn


  [1] The history of medicine is one of progressive medicalization, particularly with respect to the mind. It is only recently that mental disease was recognized as such. This is not a matter of trying to fit psychological disorders into a pre-existing medical framework in a reductive manner; rather, it is a matter of expanding medicine to include maladies of the mind. It is past time that we accepted that the belief system can be as diseased as any system of the body. After all, belief is a biological phenomenon and should be treated as such.


Democracy and Autocracy


I will have a go at a question bequeathed to us by Plato—the question of whether democracy has a tendency to devolve into autocracy. In a democracy people have an equal say in political decisions—each person’s voice must be heard. This means that each person’s wishes are given equal weight. But there are inevitably conflicts between people’s wishes: some people want what others don’t want. Conflicts of interest arise. It follows that some people are sacrificing their own interests to the interests of others. For example, suppose a family is deciding where to have lunch: some of them want to have Italian, others Japanese, others Greek. Either a single member stipulates a given choice, or the matter is decided democratically; in the latter case (also in the former) some members of the family don’t get what they want. But they have no choice—they must follow the democratic decision. They would prefer it if they could rule autocratically, thus following their own wishes. As it is, several members are not happy with the outcome, especially if it happens on a regular basis (never having Japanese, say). Democracy entails a sacrifice of personal sovereignty—personal freedom. You don’t always get what you want.

            But suppose an autocrat comes along who promises to respect your wishes to the detriment of others, and suppose he has the power to bring this about. Perhaps he is able to impose the new order by force. Then you will always get what you want, though others will not get what they want. You have a reason to support this autocrat. You make a prudential calculation and put your weight behind this character. Thus autocracy replaces democracy: you no longer have to sacrifice yourself for the general interest by respecting the wishes of others. Democracy is inevitably a system in which many people feel discontented because other people get to decide their fate; but autocracy allows many people, perhaps a majority, to get exactly what they wish. This is why autocracies are always supported by one section of the population (the beneficiaries) but not by other sections. To put it bluntly, democracy conflicts with human greed.

            Does this mean that autocracy is stable? No, and for the obvious reason: many people are getting the short end of the stick. So autocracies are always rife with democratic rumblings: the disadvantaged want their voice heard, their wishes respected. Civil war is a likely outcome. So autocracy has a tendency to devolve into democracy. The result is the perpetual oscillation model of political history: from autocracy to democracy, from democracy to autocracy. For a very long time autocracy held sway in human groups, eventually to be replaced by democracy (in some cases at least); but democracy might in turn be replaced by a resurgent autocracy, only to give way again to democracy. Neither system is stable; both tend to give way to the other. The reason is the inevitability of conflicts of interest, especially as regards the distribution of resources. People’s self-interested wishes don’t harmonize. Both democracy and autocracy struggle to deal with this fact, but in the end it is an insoluble problem. Thus there will never be political peace.


Yes and No


The words “yes” and “no” are among the most familiar words of the English language, perpetually tripping off the tongue. But what do they mean—what kind of meaning do they have? They don’t have sense and reference: there is nothing they denote and there is no mode of presentation attached to them. They have no counterparts in established formal languages: no system of logic governs them. Theorists of language say nothing about them. They fall into no logical category: not singular terms, not predicates, not quantifiers, not connectives, not even brackets. No one talks about the logical form of yes-statements. Worse, they don’t appear to fall into any grammatical category: noun, verb, adjective, adverb, or preposition. Some linguists have classified them as sentences (“minor sentences”), because they get something linguistic done while standing alone; but even that must be wrong because they don’t compound as sentences do. You can’t negate them or conjoin them or insert them into a conditional.[1] You can’t say “Not no” in response to the question “Would you like to go bowling?” or affirm “Yes and snow is white”. Some languages do without them in replies to questions (Finnish, Welsh), preferring instead to reiterate the verb of the question (“Are you coming?” “We are coming”). They seem a bit like “true” and “false” in expressing affirmation and negation, but those words behave like normal words, combining happily with other words as parts of real sentences (you can say “That’s true” but not “That’s yes”). The OED offers this for “yes”: “Used to give an affirmative response”; for “no” we have “Used to give a negative response”. The dictionary doesn’t specify what these words mean in the usual definitional style but instead indicates their use. We are assumed to understand what an “affirmative response” is—some sort of assent or consent behavior (likewise for “no”). 
We don’t normally employ these words in our inner speech, because their function is to indicate something to others not to act as vehicles of thought; presumably they would not exist in a purely individual language not used for communication. One might hazard that they are “expressive”, but what emotion do they express? They are not like a whoop of joy or a groan of disappointment. They appear anomalous, sui generis, and mildly suspect—oddballs, rule-breakers. Yet they are with us always, among the most natural of utterances. What is going on with these two little words? 

            I would call “yes” an assentive and “no” a dissentive. They are not alone in this neglected category: in addition to “yes” we have “yeah”, “yup”, “yep”, and “yah” (and for “no” we have “nope” and “nah”); but we also have “sure”, “right”, “ok”, “no problem”, and “definitely”.[2] Moreover, we can dispense with the vocal organs altogether in registering our assent or dissent: we can nod or shake our head, smile or frown, or point our thumb up or down. There are lots of ways to show you feel favorably or unfavorably towards something. Couldn’t we just dispense with “yes” and “no” and get by with body language? These points all nudge us in the direction of the following conjecture: “yes” and “no” are not words at all (nor phrases or sentences). They simply don’t function like words: they have no grammar, no combinatorial power; they are not part of the computational system that other words participate in. They have a communicative use, to be sure, but that is not sufficient to make them part of language proper, defined as a certain formal structure—what Chomsky would call the human language faculty. Animal communication systems have their uses too, but they are not languages in this restricted sense—infinite recursive generative rule-governed grammatical systems. Strictly speaking, “yes” and “no” have no semantics and no syntax—they are not words in the proper sense. They obviously have their uses, but they are not semantic-syntactic particles (and hence neither nouns nor verbs nor adjectives nor adverbs nor prepositions). They signify but they don’t mean (except in the sense of speaker-meaning). Put differently, they have no conceptual interpretation and no representational function.

This suggestion may appear radical and counterintuitive, but actually there is considerable precedent for it: for speech is full of such “meaningless” elements. Consider “oh”, “ah”, “ooh”, “ha”, “hey”, “um”, “uh”, and “er”: these all occur frequently in speech but they are not words. Sometimes they occur in writing too, but only as a way to mimic speech: they look like words but they aren’t words. Indeed, they are not really elements of speech construed as the vocalization of words: they are speech helpers or auxiliaries or props. They are ersatz words. And they combine naturally with “yes” and “no” in informal speech: “Oh yes”, “Uh, no”. They can both also be repeated for emphasis: “yeah yeah yeah”, “Ha ha”.[3] This is like nodding vigorously or emphatically wagging one’s finger. We can modulate our response so as to indicate strong assent or firm dissent: the response can vary in magnitude (words proper don’t do that). Speaking loudly can also communicate state of mind, but nobody thinks that volume is a word. Linguists sometimes call these devices “paralinguistic”: “yes” and “no” evidently share several features with the paralinguistic. They are quasi words, borderline words, words by courtesy only.

            Here is a hypothesis: assent and dissent are important behaviors in a social species such as ourselves, predating the arrival of the human language faculty; the particles “yes” and “no” are just the latest way to get such attitudes across to conspecifics. We used to nod and wag, smile and grimace, but now we say “yes” and “no”: this is considered polite, civilized, well bred. We are communicating our attitudes of assent and dissent, consent or rejection, using the latest piece of human technology, viz. vocal speech. But we are harking back to more primitive times when we used other means to convey our attitudes. In animal mating behavior, assent and dissent clearly play an important role; the human “yes” and “no” are devices for getting these preferences across (among other devices). Presumably other species have their own methods for conveying assent and dissent, which are not verbalized; well, we are playing much the same game. Saying “yes” and “no” is just one way to indicate affirmative and negative response, but such responses are part of our pre-linguistic history; and the words (sic) carry this history within them. They represent the survival of an ancient signaling system within our newfangled capacity for articulate speech—along with assorted paralinguistic devices. What we loosely call “speech” is really an amalgam of evolutionary adaptations not a unified trait, and “yes” and “no” straddle these disparate systems. This is why we tolerate so much variation of pronunciation in these (putative) words: because we just need to convey assent or dissent not home in on a specific lexical item. If you mispronounce “house” you risk misunderstanding, but you can indicate assent in many verbal (and non-verbal) ways and not be criticized for it. This is also why the Beatles used “yeah” so often in their songs: it represents a more primordial state of mind than regular words. 
The “yeah” sound is joyful and optimistic, indicating harmony, consent, and agreement (no Beatles song has “Nah nah nah” in the chorus); it indicates a positive state of mind, extra-linguistically. Cavemen are often depicted as communicating by means of grunts: this has psychological truth to it in that non-linguistic communication goes to our more basic instincts. The grunt is universal and easily understood. “Yes” is the most beautiful word in the English language precisely because it isn’t really a word—it isn’t a component of that formal computational system that came into existence a mere 200,000 years ago.[4] Cooperation is the sine qua non of a social species, so expressions of affirmation are of the essence. Our word “yes” packs all of that into its short span (“no” is its unwelcome sidekick). It is a profoundly loaded word without really being a word at all (a combinatorial grammatical unit). We could do without it so long as we were adept at non-verbal communication (perhaps the Welsh and the Finns are). Say no to “yes”, but do so without saying “no”. “Yes” and “no” correspond to primitive acts, biologically based; the words are just recent tokens or tags.[5]

Colin McGinn         


[1] This shows that “yes” and “no” are not inter-definable using negation, unlike “true” and “false”: “yes” can’t mean “not no” and “no” can’t mean “not yes”—simply because these are not well formed. This is why we never use such locutions, though we can of course say, “I’m not saying yes” and “I’m not saying no”. These latter two sentences are curious in their own right, since they are using “yes” and “no” when they should be mentioning them. Any logically aware writer is uncomfortable with such sentences. Language is trying to squeeze “yes” and “no” into ordinary sentence frames. It’s like saying, “He said hello”, which is ambiguous at best.

[2] In the Geordie dialect we have “why aye” in which “why” does not have its usual meaning. Presumably it is the rhyme that makes this form attractive to speakers (“Are you going to see Sunderland play today?” “Why aye, man”).

[3] Shakespeare has King Lear utter the following “sentence” at the death of Cordelia: “O, O, O, O!” This is “language” reduced to the level of the grunt—but in context a sublime grunt. Compare “Yes!” uttered in jubilation.

[4] More accurately, that is when human speech entered human history, but the language faculty could have predated vocal speech by a long time, perhaps used for the purpose of enhancing thought. 

[5] Of course the same story could be told for “si” and “oui” and the rest: all these phonetic units are surrogates for the act of affirmation.


Metaphysical Necessity


We appear to have (at least) two concepts of necessity, usually known as epistemic necessity and metaphysical necessity. Epistemic necessity concerns what could turn out to be the case—what might be true “for all we know”; it correlates with certainty (the Cogito is an epistemic necessity). Metaphysical necessity concerns what could really be the case—how things could be in themselves; it has to do with objective essence. The word “metaphysical” isn’t doing much work here: we could as well speak of non-epistemic necessity, since metaphysical necessity is defined by contrast with epistemic necessity. We could add analytic and nomological necessity to the list: what is conceptually necessary and what is necessitated by natural law. Standard examples of metaphysical necessity belong to neither of these categories, being both synthetic and modally stronger than nomological necessity. What is striking is that we have no analysis of metaphysical necessity, as we have an analysis of epistemic necessity. We can say that epistemic necessity is certainty and epistemic possibility is uncertainty (or ignorance), or we can analyze the concept in terms of epistemic counterparts[1]; but we have nothing comparable to say about metaphysical necessity—here we have to take the concept as primitive. We have to take it as a brute fact that this table is necessarily made of wood or that a person necessarily has his or her actual parents. We have intuitions, but we have no account of these intuitions. This is quite puzzling: why should we have such intuitions, and where do they come from? Am I simply directly aware of the objective essence of things? Do I have a basic unanalyzable concept of non-epistemic metaphysical possibility? In the case of the other types of necessity we can see where they come from: from our state of knowledge, from concepts, or from the laws of nature. 
But metaphysical necessity appears ungrounded and unexplained: our concept of it appears primitive and inexplicable. This can fuel skepticism about the whole notion of metaphysical necessity (and possibility): is it perhaps just a trick of the imagination? What is its epistemology and what its conceptual underpinnings?

            There is one form of modality we have not mentioned: what I will call agent modality. This concerns what we (and other agents) can and cannot do. What we are free to do is what we can do and what it is possible for us to do. We are aware of this kind of necessity and possibility from our own case, and we recognize it in others. We are, in fact, painfully conscious of the limitations on our possible actions, yet also conscious of what lies within our power. We can make comparative judgments about this kind of thing. We have the idea of beings with superior agential powers—God, in the extreme case. Thus I am now aware of my possible courses of action today, and of my life decisions (I could have been a psychologist instead of a philosopher). But I have no power to change my height or my species or my parents, and I know it. There are agential necessities as well as agential possibilities. These are not epistemic: it isn’t that I might turn out to be a psychologist after all, or that I am certain of the identity of my parents. Rather, these are objective facts about my powers of action—about my abilities. So here is a category of objective non-epistemic necessity to set beside the usual category of metaphysical necessity. Of particular interest is the ability to change things: I can change my location, my clothes, my hairstyle, and even my occupation; but I can’t change my parents or my species or my identity. So there is a correspondence between agential and metaphysical modality, and an affinity of nature. Is this a coincidence?

            Consider Hesperus and Phosphorus: they (it) can change their location, but they can’t change their identity with each other. Planets have the ability to move, but they don’t have the ability to cease to be self-identical. Thus the concept of agential modality can be generalized to them: it isn’t a matter of free decision, to be sure, but it is a kind of power. Tables, too, can move, but they have no ability to change their material composition. Animals can walk around, choose a mate, and eat, but they can’t change their parental origin or species. Nor can other agents change the traits in question: it isn’t that we can change the identity of planets or the composition of tables or the origin of animals. No one can alter these things: they are agential necessities tout court. Not even God has the power to change these facts: he can’t make 3 even or water not H2O or Queen Elizabeth the daughter of Bertrand Russell and Gertrude Stein. God can do a lot—a lot is possible for God—but he can’t do just anything. Some actions are perfectly possible, within the agent’s powers, but some are impossible, even for the most powerful of agents. This is beginning to sound a lot like metaphysical modality, is it not? We might, then, venture a hypothesis: our concept of metaphysical necessity is an outgrowth of our concept of agential necessity (and similarly for possibility). We understand metaphysical modality on the model of agential modality—that’s where we get the idea from. We know what agential possibility is, originally from our own case, and then we generalize it to include metaphysical possibility. Accordingly, the examples of metaphysical necessity with which we are familiar are special cases of agential limitations, specifically limitations on God’s agency (or any conceivable agent). To be metaphysically necessary is to be such that no possible agent could change it. 
No possible agent could change this table from being made of wood to being made of ice—because that would make it a different table. You could replace each wood part with a similarly shaped chunk of ice, until the whole thing was changed to ice, but that would destroy the original wooden table, replacing it with a new ice table. Our intuition of necessity can thus be cashed out as an intuition of agential inalterability. That is what we are really thinking when we think that this table is necessarily made of wood: that no one could make it otherwise. This is not a conceptual reduction of the concept of metaphysical necessity (for one thing, it uses the concept of a possible agent); it is an attempt to link the unmoored concept of metaphysical necessity to something more familiar, more part of everyday life. It is a conceptual domestication—an elucidation or genealogy. It tells us from where the metaphysical concept derives. It tells us what family of concepts it belongs to, what its conceptual relatives are. It is true that the metaphysical concept transcends these practical origins, but it doesn’t entirely leave them behind: it builds on them, feeds off them, and exploits them. We might even offer that without them the concept of metaphysical necessity would not be available to us: we would draw a blank on questions of metaphysical modality if we had no prior notion of agential modality. The latter concept is a necessary precondition of possessing the former concept. It gives us the leg up we need. This is a case of conceptual leapfrogging or ladder climbing. Like many philosophical concepts, it takes its rise from something homelier.

            We can test the hypothesis by asking how changeability correlates with necessity: are the least changeable things the things with the most metaphysical necessity? Numbers are notoriously changeless, but they are also heavily endowed with necessity: everything about them (almost) is charged with necessity. If we ask what can be changed about the number 3, the answer is hardly anything. By contrast, the self admits of a great many changes—of place, activity, psychology, perhaps even physical composition—and it is also highly contingent. Almost everything about the self is changeable and contingent: you can even in principle put the self in another body by brain transfer, and selves are not necessarily tied to a given body. The more a thing can be changed by a suitable agent the more imbued with contingency it is. Organisms and physical objects are intermediate between numbers and selves: quite a bit can be changed, but quite a bit can’t be. You can easily change the location of a cat, but not its body type (if you put a cat’s brain into a dog’s body, you don’t get a cat—though you may get the cat’s self). Tables will accept changes of location and color, but they resist being converted into TVs or repaired by being recast in a different material.[2] This is all to say that our thoughts about what is metaphysically necessary or contingent are shot through with thoughts of what it is possible for agents to do. Two seemingly extraneous concepts thus intrude on these metaphysical matters: concepts of agents and actions. We are thinking of agents and we are thinking of their actions when we think about metaphysical modality. We aren’t just thinking of objects and their properties: we are thinking of what agents can and cannot do in relation to those objects and properties. 
When I think that I could have had a different career I am thinking that I could have acted differently; when I think that a table could have been in a different place I am thinking of its powers of movement and of possible external causes of its movement (say, someone picking it up). When I think that I could not have had different parents I am thinking that, while I could have left my parents’ house earlier, it was not within my power to sever myself from them biologically. I am thinking, that is, of agency and action. My thought is not just about my possible properties, barely considered. Similarly, my modal thoughts about the table are not confined to the table and its properties; I am taking in other objects and other properties, specifically agents acting on the table. I am placing the table in a wider and richer conceptual context. So the concept of metaphysical necessity is not as bare and ungrounded as it may appear; it has its roots in a rather practical and useful set of concepts having to do with action. Epistemic necessity has its roots in concepts of knowledge, justification, and certainty; metaphysical necessity has its roots in concepts of agency, power, and action. Neither is self-standing and primitive.[3]

Colin McGinn


[1] This is Kripke’s notion of epistemic modality in Naming and Necessity (1972): roughly, a situation is epistemically possible if we could be in an epistemic situation qualitatively identical to the actual situation and yet the facts are otherwise. It is notable that Kripke says virtually nothing to articulate the concept of metaphysical necessity, beyond noting (correctly) that it has a strong intuitive content. My aim here is to remedy that lacuna—so I am seeking to save metaphysical necessity not bury it. I want it to seem less strange. Less exotic.

[2] We can allow for grades of metaphysical necessity, according to how easy it is to change a given property. It is very easy to change one’s location, but not so easy to change one’s career or color or personality, so one is more possible than the other. And that is intuitively correct: it does seem more possible to move to a different place than to acquire a different personality—since one condition is easier to achieve than the other. The binary opposition of metaphysical necessity and metaphysical contingency is too simple, too black and white. Similarly, epistemic necessity also admits of grades: some things are less epistemically possible than others—we can be more certain that the sun will rise tomorrow than that the stock market will rise tomorrow. Both types of modality come in degrees.

[3] Here is another point: the logical analogy between modal concepts and deontic concepts is well known, and deontic concepts concern agents and actions. Obligation maps onto necessity and permissibility maps onto possibility. Locating the source of modal concepts in agential concepts therefore comports with the general tenor of the concepts in question; certainly deontic modalities are explicitly agential. 


Possibility and Actuality


How do possibility and actuality differ? Is there anything intrinsically different about them? Some metaphysicians have supposed that the difference is entirely extrinsic: actual states of affairs and possible states of affairs are intrinsically the same, but the former constitute what we call the actual world and the latter constitute possible worlds. Thus we have the indexical theory of actuality: the actuality operator is equivalent to the demonstrative “this world”.[1] People in different worlds can employ this demonstrative, thereby treating their own world as actual. The worlds differ not at all in their intrinsic nature, with actuality just one instance of possibility; actuality is conferred from the outside, so to speak. Considered intrinsically, actual states of affairs and possible states of affairs are ontologically exactly alike. We could call this the modal uniformity thesis.

            But consider the following fact: actual states of affairs always carry with them other actual states of affairs, while this is not true for possible states of affairs. For example, a wall is actually painted beige but could have been painted blue: its being beige is accompanied by being in a certain location, being a certain height, being painted by certain painters; but the possible state of affairs of being painted blue has no such accompaniments—it exists as an isolated fact. The possibility of being blue carries with it no commitments about what other possibilities combine with it: it could be blue and at any number of locations, of varying heights, painted by different painters, etc. Being possibly blue doesn’t cluster with other possibilities. By contrast, the actual state of affairs of being beige comes with a fixed totality of other actual states of affairs—there are no degrees of freedom here. Actualities come in packages not singly—in groups not individually. Once you actualize a possibility it loses its independence and becomes attached to other actualized possibilities. Actualities necessarily arrive in bundles, whereas possibilities exist in isolation from each other. We can call this the holism of the actual. If possibilities are atomic, actualities are molecular. Holism of the mental says that mental states necessarily come in bundles not in isolated singularities; holism of the actual says that actualities come in bundles not in isolated singularities. But possibilities are not subject to this kind of holism—they can exist in splendid isolation. We can envisage a possibility existing all by itself, but we can’t envisage an actuality existing all by itself—it must be embedded in a larger whole. If we think of actualization as a function, we can say that it takes separate possible states of affairs as its argument and gives as its value a complex of actual states of affairs. 
For example, the possible state of affairs of being painted beige yields as the value of the actualization function a package of multiple actual states of affairs. You can’t actualize being beige without actualizing a whole lot of other stuff, but you can create a possible state of affairs without creating other (logically unrelated) states of affairs. Actually being beige requires determinate other actual properties, but possibly being beige requires no such other properties—it exists as an isolated atom in modal space. It is a question how widely the holism of the actual extends—might it extend to the whole of the actual world?—but it certainly extends well beyond the actual state of affairs we are considering. It includes rather remote properties such as the material composition of the beige wall (actual walls always have a specific material composition). By contrast, the mere possibility of being beige is quite neutral with respect to such extrinsic properties: it is not embedded in a determinate matrix of other possibilities—possible beige walls don’t have a unique material composition. Possibilities are lone operators (or only travel with close family[2]) while actualities club together with other actualities (not necessarily logically related). Thus the modal uniformity thesis is false.

            This means that there is an element of stipulation that characterizes merely possible worlds in contrast to the actual world. We don’t (and can’t) stipulate what actual facts coexist—that is a matter of how things actually are—but we do (and can) stipulate what possibilities combine with others. We say, “Consider a world in which pigs fly and horses talk”—and no one can stop us doing that—but we can’t say, “In the actual world pigs fly and horses talk”, because that is not actually the case. A possible world is made up of independent possibilities that are stipulated to coexist, but the actual world is made up of actualities that come in packages and which cannot be stipulated away. Possible worlds are like mental constructions in this respect, but the actual world is a given objective reality. When we announce that a world is a totality of states of affairs we must observe that the totalities are differently constituted, according as the world is actual or merely possible: in the actual world the states of affairs come glued together, so to speak, while in the possible worlds the constitutive states of affairs coexist by something more like fiat. We specify a possible world, but we discover the actual world. Actualization entails bundling, but mere possibility allows separation. So the principles of agglomeration are different for the actual world and possible worlds. Reality is more densely packed in the actual world than it is in possible worlds—that is, actualities are tightly bundled, while possibilities are free-floating (not counting inter-possibility logical entailments). Actuality is molecular; possibility is atomic. So actuality is intrinsically different from possibility: it adds something new to possibility. The ontological structure of the actual doesn’t simply recapitulate the ontological structure of the merely possible. Actuality is a different type of reality from possibility. 
It is not simply a matter of the indexical “this world”, or some other extrinsic view of actuality: the actual and the possible have a genuinely different mode of being. If we were to explore the universe of the merely possible, we would find a very different structure to reality there: the possibilities would be laid out in neat rows, carefully separated; they wouldn’t come in bundles, as they do in the actual world (a beige wall conjoined with a bunch of other properties such as a certain height, weight, and material composition). The world of possibilities is more like the world of stars, planets, and galaxies, structurally speaking—both are laid out in space without any interpenetration. Possibilities are like trees in a forest, flowers in a garden, children in a school. Actualities, on the other hand, are like units of technology or universities or branches in a tree—inherently holistic and cooperative. An actual state of affairs is always a sub-unit of something larger. A possible state of affairs, however, is self-contained, free-floating, not beholden to other states of affairs. Actualization alters the mode of existence of the possible by linking possibilities together into larger wholes: it is a process of assembly. It thus ends the self-isolation of the possible.[3]


[1] This is the view associated with the work of David Lewis.

[2] The family members are just the logically related possibilities: the possibility of being both red and square always travels with the possibility of being red.

[3] From an abstract metaphysical perspective, the holism of the actual ought to strike us as more remarkable than we are apt to suppose. Actual reality is a kind of creative synthesis that contrasts sharply with the piecemeal nature of merely possible reality. The latter functions as a kind of disordered raw material for the former, mere ingredients for a would-be cake. Actualization is really a creative (almost miraculous) act, generating solid chunks of reality from languishing and idle elements. Possibility is like the formless gas that preceded the formation of stars and galaxies; actualization is like gravity in converting this unpromising stuff into something shapely and worth attending to. It is the holism inherent in actualization that makes reality as a whole interesting. If all we had were possible states of affairs, never actualized, reality would be a pretty sad and boring place; it is the actualization function that creates the world as we know it. Actualization is the root of everything worthwhile; mere possibility (pre-actualized reality) is a sorry business. Holistic actualization is the animating force of reality—the equivalent of divine creation. Without it possibilities are just aimless shut-ins going nowhere and communicating with no one. Being merely possible is a lonely and pointless kind of life; being actual is social and cooperative. When God was wondering whether to make possibilities into actualities he was wondering whether to inject reality with meaning (in one sense of “meaning”).        


Logic and Morality


Are there any affinities between logic and morality? The question may appear perverse: aren’t logic and morality at opposite ends of the spectrum? Isn’t logic dry and abstract while morality is human and practical? Isn’t one about proofs and the other about opinions? I think the affinities are real, however, and I propose to sketch them. Both concern guides to conduct: how we should behave, cognitively and practically. Logic gives rules to reason by; morality gives rules for action. These rules purport to be correct—to yield valid reasoning and right action. Thus logic and morality are both normative: they tell us what we ought to do. They are not descriptions of what we actually do but prescriptions about what should be done. These prescriptions can take a number of forms: on the one hand, logical laws, rules of inference, and avoidance of logical fallacies; on the other hand, moral laws, rules of conduct, and avoidance of immoral actions. Thus we have the three classical laws of logic (identity, non-contradiction, and excluded middle) and the utilitarian principle, or a list of basic duties (corresponding to consequentialism and deontology). We also have rules for making inferences: modus ponens and the Kantian principle of universalizability, say—as well as warnings against fallacious inference (don’t affirm the consequent, don’t try to infer an “ought” from an “is”). Neither subject is concerned to establish “matters of fact” about the natural world; both are concerned to improve reasoning, make us better people, keep us on the right track. It is good to reason validly and to do what is morally right. Thus logic and morality are procedural and prohibitive, rule-governed and critical. We apply them to facts in order to produce desirable results—true beliefs, right actions—but they are not a form of fact gathering analogous to physics or history.[1] They are practical not theoretical. They are active and engaged not laid-back and detached.

            It would be wrong to contrast the two in respect of formality. It is true that we have formal logic as taught in university logic courses, while morality can scarcely claim anything comparable (though there is deontic logic). But the logic and morality I am talking about are pre-formal—they are embedded in our natural competence at dealing with the world and are probably innately based. Logical reasoning existed before Aristotle tried to codify it, and morality pre-dates attempts at explicit refined statement. These are primitive forms of human competence, not dissimilar to language competence before grammarians came along. The distinction between logic and morality is relatively recent and may not have been salient to early humans. We know quite well what is meant by the “ethics of belief”, and we are not shy about pointing out fallacies in other people’s moral reasoning. Sound reasoning is sound reasoning—and it is what we should be aiming at. The distinction between logic and morality is not as sharp as we tend to think these days (I suspect it was less sharp in the ancient world than in the post-Christian world). Shoddy logical reasoning is deplored, as immoral action is. You should keep your promises and you should follow modus ponens; we can worry about fine points of logic versus morality later. If we suppose that animals possess rudimentary forms of logic and morality, are they really distinct modules in the animal mind? Logic and morality bleed into each other.

            Where does prudential reasoning fit? It is surely only logical (rational) to consider one’s own future wellbeing—so we might assign prudence to logic. But prudence is also behaving well to one’s future self, so that it falls within morality. Some moralists have even supposed (I think rightly) that prudence is a special case of morality—we have moral duties towards ourselves, as one sentient being among others. So prudential reasoning is both logical and moral—it has a foot in both camps. Or should we say that the idea of a dualism of camps is mistaken? Isn’t the line more blurred than contemporary culture recognizes? We think there is no morality in logic textbooks and that moral issues can’t be resolved by formal logic: but that is surely too narrow a view of both fields. Logic is up to its ears in normative notions, and morality is a domain of logical reasoning. If you are trying to resolve a complex moral issue, such as abortion or animal rights, you will find yourself invoking principles drawn from logic and from normative ethics—as we currently understand those fields. But from a ground level perspective these distinctions are blurred and irrelevant: you are just reasoning with whatever bears upon the topic. You are applying your logical and moral competence to a real world problem with a view to doing what is right. When you avoid deriving an “ought” from an “is” are you doing logic or morality? When you declare that all sentient beings have rights is that intended as a moral principle or a logical one? It functions as an abstract axiom used to draw conclusions—it is irrelevant whether it crops up in a standard logic text (they don’t even include modal logic). We shouldn’t have too narrow a view of logic, and we shouldn’t neglect the abstract character of much moral reasoning. I am inclined to say simply that moral reasoning just is logical reasoning—logical reasoning about questions of value.[2]   

            What about the point that logic is fixed, rigid and universal while morality is changeable, fluid and relative? Isn’t morality controversial and logic indisputable? But this is a naïve and tendentious way to think: logic has its controversies and morality is a lot more universal than many people suppose. I won’t rehearse the usual criticisms of moral relativism, subjectivism, emotivism, etc.; suffice it to say that morality is really a subject in objectively good standing. Also, logic is not free of internal strife: some find modal logic suspect, others favor intuitionistic logic, still others adopt a highly inclusive conception of logic (even accepting logical contradictions). Little in human thought is free of controversy in some way. Skeptics of the normative will object to “ought” in both logic and morality, but that simply underlines the affinity of the two areas. The essential point is that both logic and morality are normative systems designed to facilitate desirable outcomes; and both admit of a degree of formal articulation rooted in intuitive human faculties. We can all agree (if we are sane) that pain is bad and everything is identical to itself: why one should be assigned exclusively to something called “logic” and the other to something called “morality” is obscure. Both are self-evident propositions capable of functioning as axioms in a train of reasoning: the affinity is more obvious than the difference. You should keep your promises and not be cruel; you should existentially generalize and not affirm the consequent. Where exactly is the deep difference here? And this is before we get to non-standard logics like modal logic, epistemic logic, indexical logic, and deontic logic. They are all about reasoning and validity—but so is morality about reasoning and validity. If morality is about moral reasons, it is about moral reasoning: but then it is a logical enterprise. Logic is capacious enough to subsume morality, being the general theory of sound reasoning.

            Recognizing the affinity is helpful in resisting certain deforming conceptions of moral language and thought. Emotivism, prescriptivism, naturalism, psychologism, non-cognitivism, Platonism, contractualism: are any of these remotely plausible when applied to logic? People have toyed with such accounts of logic, but generally they have not found favor, so why should we seriously entertain them for morality? Moral discourse is like logical discourse—objective and normative—and should be treated as such. It is what it is and not some other thing. Thus its similarity to logic can work to legitimate it and avoid procrustean and reductive reinterpretations. Notably, the difficulty of finding justificatory foundations applies to both areas—some things just have to be taken for granted (pain just is bad, modus ponens just is correct). And in so far as logic should not be construed as a descriptive science of the platonic realm but as a normative system of rules of correct reasoning, so morality should not be thought of as describing the Good but as a normative system of rules of right action. If we cleave to the logical analogy, we can steer our way through dubious assimilations and deformations. Just to simplify matters, I recommend that we assert outright that morality is logic (part of it anyway): that way we have a neat antidote to various bad ideas about morality. This does not remove all philosophical questions about morality, but it raises the right kinds of questions. Logic, too, raises real philosophical questions, both metaphysical and epistemological, but these questions are the right ones to raise concerning morality. Morality, we might say, has a logical structure—and a logical role. It functions logically. Logicism is true of it (“moral logicism”). But I also think that logic needs to shed its antiseptic image and confess to its normative heart: it is really about how we should reason, what good reasoning is.
To be sure, we can treat logical systems formally, as mathematical objects; but the thrust of logic is prescriptive and critical—evaluative. It is concerned with a certain human value (viz. good reasoning), and therefore naturally belongs with morality. It is part of “value theory”. Accordingly, it belongs with such practices as praise and blame, conscience and shame, reward and punishment, respect and disrespect.[3] An illogical person is not a moral person; irrationality is a vice. Moral goodness and logical goodness are inseparable attributes, seamlessly connected; indeed, we shouldn’t even speak in such disparate terms. The distinction between logic and morality is an untenable dualism, an artificial separation.[4]   


[1] Thus the tradition has supposed both logic and morality to be known a priori, possibly to be synthetic a priori. This leads to reactionary attempts to demystify them: logic consists only of tautologies and morality is cognitively empty.

[2] In fact, I don’t believe there is a non-arbitrary definition of logic (or of logical constant) that separates off one kind of entailment from others, but I won’t go into this now.

[3] The awe and reverence Kant felt for the moral law has its counterpart in an idolatry of logic—as if logic has a godlike status (there should have been a Greek god dedicated to logic). This is logic as something sacred and sublime (to use Wittgenstein’s term).

[4] It may be useful for designing an academic curriculum but it doesn’t capture the real nature of our logical and moral being. Both are expressions of our underlying rationality.
