Observation and Scientific Realism


Positivism, following empiricism, maintained that the real is coterminous with the observable. A scientific theory that posits unobservable entities cannot be taken at face value, but must be regarded as merely instrumentally useful or as plain false. The observable entities are real enough, but the unobservable ones are a species of fiction (no one has observed Sherlock Holmes). There is some irony in this position for a reason strangely neglected: observations are not observable. They ought therefore to be unreal according to the criterion of reality advocated by positivism. An observation is a perceptual occurrence—an impression, as Hume would say—but such occurrences are not themselves perceptible by the senses. No one can see what I experience when I make an observation. You might say that I at least can observe my observation, but that doesn’t make a dent in the underlying point: first, I don’t observe my observation—I merely know about it by introspection; second, such knowledge is private to me and not available to you—you don’t have this kind of knowledge of what I experience. Observation lacks inter-subjectivity in the sense of being a publicly observable occurrence: it is a private occurrence occurring in an individual mind. So the positivists are making something a test of reality that lacks the marks of the real by their own standards.

            Observation is a human achievement: what we can observe is constrained by the acuity of our senses, our position in space and time, and our powers of discernment. Is the sun observable? Most of the time it is not, since we can’t stare at it without damaging our eyes. Is it thereby unreal at those times? Does it become real when we don dark glasses? Does a train become unreal as we watch it vanishing into the distance? Of course not: these are just facts about the limits of the human senses. Why would the human senses, confined as they are, impose any limits on what is real? What can you observe with your ears? Sounds, certainly, but can you observe the objects that make these sounds? Not with your ears, but does that mean such objects are not real? Would anyone suppose that the nose is the arbiter of reality? Observation always seems to mean visual observation, but that too is hardly suited to qualify as the measure of reality. The fact is that human vision, like the other senses, is limited in acuity, stimulus-bound, prone to illusion, subjectively shaped, modular, and species-specific. Why should reality be circumscribed by what is so constituted? Vision may provide our best evidence for what is real, but it can’t be what determines something as real; it can’t be the definition of the real.

            And what exactly is observation anyway? The OED defines “observe” as “watch attentively”. Note the restriction to the visual sense (we can’t watch with our ears or nose), but the addition of “attentively” is also important. Observing is not simply seeing or even looking; it is doing so while attending to what one is seeing. If you are not attending, or have no power of attention, you cannot be observing. So the positivists must maintain that the test of reality is whether something can be attended to (by humans). But attention is limited, sporadic, and labile—unlike reality. Also attention is more top-down and cognitive than mere seeing: we attend to what we deem important, what we are interested in, what enthralls us. Desire drives attention, not merely the perceptual stimulus. Is reality dependent on human desire? Does it care about what interests us? It is a psychological fact about an observer that he or she is watching something attentively; that has nothing to do with whether reality contains a certain type of entity. Is observation necessarily bound up with consciousness? The notion of unconscious observation does not seem oxymoronic, and psychologists have found evidence that it occurs (subliminal perception experiments). Couldn’t a person with blindsight make observations? Then the positivist would have to agree that things exist that cannot be consciously observed. How well does that sit with the empiricism animating their position? Doesn’t it show what a frail reed observation is as a foundation for existence? What if scientists all had blindsight and never consciously observed anything—would the positivist still say that reality is fixed by what they can unconsciously observe? Isn’t that just silly? Why should reality be beholden to the visual system of a certain species with limited powers of sight? Why confuse psychology with physics?

            This kind of scientific anti-realism looks completely hopeless.  [1] The point I want to make is that other types of anti-realism are not open to the kinds of objection I have just raised (though they may be implausible on other grounds). For it is glaringly obvious that the existence of unobservable entities is a discovery of science. The microscope, the telescope, and the diffraction chamber greatly expanded our knowledge of the full inventory of the world: microorganisms, remote celestial objects, and invisible atoms (and their parts) became part of accepted reality. Science has discovered that there is more to the world than the unaided human senses can reveal. Moreover, the human senses are themselves objects in the world, with built-in limitations, biases, and breakdowns. The picture of the world as extending beyond the reach of the senses is an achievement of science itself. According to science, then, scientific realism is true—at least in the sense that reality is not coterminous (or coeval) with what is (directly) observable by means of the senses. We exist as limited creatures in a larger world not of our own making and containing many things not evident to our senses—things we can’t perceive simply by opening our eyes and looking. But none of this can be said about other areas in which realism and anti-realism have been debated. It is not a theme of morality that moral realism is true. It is not a theorem of mathematics that Platonic realism is true. It is not a thesis of psychology that psychological realism is true. It is not a commitment of our ordinary conception of the physical world that idealism is false (Berkeley was right about this, Doctor Johnson was wrong). It isn’t that realism in these areas isn’t a fact about them; rather, realism isn’t a proposition asserted by these areas.
Morality may have discovered that slavery is wrong or that animals have rights, but it is not a discovery of morality that moral truths are objectively true independently of human desire or thought. That is a discovery (if it is one) of philosophy. Likewise, the common sense view of the so-called material world is compatible with various kinds of anti-realism about it, which is why we can’t refute idealism by pointing to what common sense has established (or science). Maybe realism is by far the best interpretation of these areas, but it isn’t that they themselves assert its truth. By contrast, we can say that science itself has established that (unaided) observation does not encompass all existing entities. In this sense, realism is internal to science—but external to the other areas mentioned.

            The interest of this point is less that science has established scientific realism (in the limited sense defined)—for that seems obvious enough—than that other kinds of realism cannot be demonstrated in the same way. It would be bizarre to suggest that morality itself has established that moral values are objective and hence “queer”—as if subjectivism could be ruled out as morally wrong, i.e. contrary to the first-order principles of morality. It is not as if the Ten Commandments contain an extra one stating, “The other commandments are all objectively true”. The moral anti-realist may be an error theorist, but he does not have to be, given that morality does not assert of itself that it is objectively true. Moral realism is a metaphysical position, not a moral position. So there is no analogue of the role of observation in deciding the question: we haven’t discovered that there are unobservable moral entities in the course of our moral deliberations. There is no moral microscope that has revealed to us a world of values previously unsuspected. And similarly for other areas in which realism has been debated: no discovery within these areas will enable us to settle the question of realism versus anti-realism. This is why we speak of meta-ethics, and could equally speak of meta-psychology, meta-physics, and meta-mathematics.

            Here is another way to put the point: it might have turned out that there are no unobservable entities (it is an epistemic contingency that there are), but it couldn’t turn out that moral values are subjective (given that they are actually objective). It is an empirical fact that there are unobservable entities, but it is not an empirical fact that values are objective (given that they are). We didn’t discover empirically that moral realism is true—assuming we did discover that—but rather did so on philosophical grounds, i.e. a priori. By contrast, we did not know a priori that the world contains unobservable entities (microorganisms, atoms, distant galaxies); this we discovered by empirical means. We used science to establish that the world extends beyond the observable. But we didn’t use morality to establish that moral realism is true (for one thing, moral realism is not a moral duty); for that we resorted to philosophy. Thus, given that moral realism is true, it could not have turned out otherwise, whereas scientific realism may have turned out to be false (it is only a contingent fact that some things are not observable). There are worlds “qualitatively identical” to our world that contain no unobservable entities, but there are no worlds “qualitatively identical” to ours in which moral anti-realism is true (similarly for the other kinds of realism).  [2] This is because realism, if true, is true a priori in these areas. It is not an epistemic necessity that microorganisms exist but it is an epistemic necessity that values are objective (assuming they are). Thus scientific realism has a different epistemic status from that of other types of realism.

            There is an explanation from within science of the fact that unobservable entities exist, but there is no explanation from within morality of why values are objective (similarly for the external world, psychology, and mathematics). Some entities are simply too small to be seen given the limited acuity of the human eye, and some are too distant: these entities cannot interact with the eye psychophysically. Physics and perceptual psychology explain why we can’t observe certain things. But morality has no explanation for why moral values have objective existence—why they are not reducible to human attitudes. Nor can psychology, as an empirical science, explain why mental states are not reducible to dispositions to behavior (or some such). Nor can common sense concerning tables and chairs explain why material objects are not reducible to sense data. The question of scientific realism, understood as a dispute about the existence of unobservable entities, is not a properly philosophical question, since it can be settled by appeal to the discoveries of science. That is, we know scientifically that there are things in the world that can’t be perceived by the unaided human senses. Of course, there is plenty of room for philosophical debate about the nature of these entities (they might be ideas in the mind of God, say), but it is not a philosophical thesis that some things cannot be perceived. That is simply a scientific truth. But it is not a moral truth that moral values are objective, or a mathematical truth that abstract entities exist, or a truth of psychology that mental states are inner states irreducible to behavior, or a truth of common sense that tables and chairs are distinct from states of mind (“ideas”). Hence there is no analogue of the demonstrable inadequacy of observation to provide a test of reality that can defeat anti-realism in these other areas.


  [1] I am not saying that all types of scientific anti-realism are completely hopeless (though I do believe that), only that the kind that equates the real with the observable is. Clearly the positivists advocated this position in order to save the principle of verifiability, not because it looks intrinsically plausible. And there is no limit to human anthropocentrism.

  [2] I am here using the apparatus developed by Saul Kripke in Naming and Necessity.


Noumenal Powers


Suppose you hold that the world consists of powers all the way down: all properties consist of causal powers.  [1] Now combine that with Hume’s position on our knowledge of powers: we have no impressions of powers, and hence no adequate conception of powers. Then you are committed to an extreme Kantian view of objective reality: the world is noumenal, because powers are. That is, we have no knowledge of reality as it exists outside the mind; at best we have a kind of structural knowledge of how things hang together, but no knowledge of the intrinsic nature of things. For that nature is essentially a matter of powers, and we cannot conceive of powers as they are in themselves; we are at most acquainted with mere signs of powers, ultimately what happens in our minds as they interact with reality. Thus the world of appearance is cut off from the world of reality: we seem to experience categorical observable properties of things, but objectively, reality consists of unperceivable powers. Hume himself didn’t identify all properties with powers, but he did hold (in effect) that causal connections are noumenal; extending his epistemology to a wider metaphysics of properties-as-powers yields extreme Kantianism about reality as a whole. If the world consists of powers, and we can’t form an adequate conception of powers, then we can’t form an adequate conception of the world. We can’t form an adequate conception of what the world fundamentally is.

            Two objections might be made to this sweeping metaphysics and accompanying epistemology. The first is that it is implausible to identify the world with a constellation of powers—there have to be grounding categorical properties somewhere in the picture, because powers cannot stand unsupported. Let us concede the point for the sake of argument: it doesn’t follow that we have adequate ideas of reality, since the grounding properties might be unknown to us. It suffices for the argument that all observable or known properties are powers: even if there are categorical properties at the bottom of reality (belonging, say, to some advanced physics), all the properties we know about consist in powers. So everything we know of the world turns out to be an elusive power—shape, size, color, etc. What we think of as ordinary properties are really powers whose inner nature we cannot discern.

            The second objection is that what happens in our mind cannot consist of powers, because we do know what happens in our mind—we know, say, what pain is, and that it is occurring now. But the powers view of properties can be extended to mental properties by distinguishing appearance from reality: we don’t have an adequate conception of what pain is, but we do know how it appears. Mental properties have causal powers as much as physical properties do—they are individuated by their causal powers—but these powers may occasion only signs of themselves in what passes before consciousness. Pain itself might be noumenal—as is the self on the Kantian conception. The mind might be a hidden reality remote from introspection, made up of powers of which we have no adequate conception (if we follow Hume on our knowledge of power).

            What is true is that appearances cannot be construed as presentations of properties if the (generalized) Hume-Kant view is right. For we do know the nature of appearances, so they cannot be constituted by powers that transcend our grasp; they cannot be collections of properties that present themselves to the mind. No property can be presented to the mind as it is, if all properties are really powers that are inherently inaccessible to the mind. Once we accept the elusiveness of powers, along with the doctrine that properties are powers, we are landed in the kind of epistemology adumbrated by Hume and Kant (Kant merely generalizing Hume). Thus the stakes are high, and idealism threatens. The ubiquity of powers leads to a Kantian view of our knowledge of reality once Hume’s critique is accepted. If powers are metaphysically basic, yet epistemologically elusive, we end up with an unknown reality. What is known is mere appearance from which the notion of property has been expunged (if it can be).

 

  [1] The work of Sydney Shoemaker on properties is an instructive reference, but the view has a wider currency. Phenomenalism in effect holds that all material object facts consist in powers to produce sense experience, and behaviorism holds that mental facts are identical to powers to produce behavior. Generally, it is the idea that reality consists of potentials—of what would happen if. Reality is best captured by conditionals of a certain sort. Everything is the power to do something.


Notes on the Concept of Law


  1. Consider the sentence forms “It is illegal to A” and “It is immoral to A” where A is a type of action (we could also consider “It is impolite to A” and “It is imprudent to A”). These are superficially similar, syntactically and semantically. Both have the logical form of universal quantification: “for any action x, if x is of the type A, then x is illegal/immoral”. Both contexts are intensional to some degree: certainly not truth-functional and arguably referentially opaque—it may be illegal/immoral to kill human beings, but is it illegal/immoral to kill the most dangerous species in the universe (assuming these to be co-extensive)? Both sentences have normative entailments (or corollaries): one ought not to do what is illegal or immoral. But there are also important differences. You can say “It is against the law to A” but it sounds funny to say “It is against morality to A”. You can necessitate a moral principle but not a legal one: “Necessarily stealing is immoral” is true but “Necessarily stealing is illegal” is not. We can paraphrase the legal sentence with “It has been declared illegal to A” but we can’t paraphrase the moral sentence with “It has been declared immoral to A”, since it might not have been so declared. You can pass a law but you can’t pass a moral principle. And clearly the two types of sentence are not synonymous, nor do they even express the same facts: “law” and “morality” do not co-denote. Despite these differences, however, the two domains are tightly connected, though the connection is controversial. Laws can be wicked and immoral but morality can’t be (as opposed to a person’s moral beliefs)—so laws can be criticized morally but morality (itself) can’t be. Nevertheless, laws at least purport to be moral and can be assessed morally—they are not beyond the reach of morality (as taste in food or clothes may be). So the distinction between them is not a complete severance.
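The shared quantificational form and the modal asymmetry noted above can be put in standard notation (a sketch only; the predicate letters A, Steals, Illegal, and Immoral are stipulated here for illustration):

```latex
% Shared logical form of the two sentence types:
\forall x\,\bigl(A(x) \rightarrow \mathrm{Illegal}(x)\bigr)
\qquad
\forall x\,\bigl(A(x) \rightarrow \mathrm{Immoral}(x)\bigr)

% The modal asymmetry: the moral principle can be necessitated,
% the legal one cannot:
\Box\,\forall x\,\bigl(\mathrm{Steals}(x) \rightarrow \mathrm{Immoral}(x)\bigr)
\quad \text{is true, whereas} \quad
\Box\,\forall x\,\bigl(\mathrm{Steals}(x) \rightarrow \mathrm{Illegal}(x)\bigr)
\quad \text{is not.}
```

The surface form is the same in both cases; the difference emerges only when the modal operator is prefixed.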

 

  2. We can ask what kind of speech act is performed by uttering “It is illegal to A” as we can for “It is immoral to A”, and a familiar list presents itself. Is it a statement of fact (a “descriptive” statement), a command, an expression of emotion, a threat, a prohibition, a promise, an exhortation—or some (or all) of these? Moral utterances invite the same kind of list. In both cases these questions are separate from the question of semantics, specifically truth conditions, and are mainly beside the point (a given sentence with a fixed meaning can be used to perform an endless number of speech acts, semantics being separate from pragmatics). The question of truth conditions is the central question: are the two types of sentence true in virtue of the same kind of thing (fact, state of affairs)? And here there is a marked difference: laws hold in virtue of declarations of a certain sort, but morality does not depend on declarations. This is why a divine command theory of law is not a category mistake, while it is one for morality (Euthyphro could have been right about what makes something a law). According to “legal positivism” laws arise from human stipulations or decisions or agreements—legislative acts—and therefore they can come to exist at a certain time and go out of existence at a certain time (when they are repealed). But the immorality of stealing is not something linked to time and legislation in this way. We could put this point by saying that a legal system is a “social fact”—one created by a group of people who are responsible for its existence. But merely calling laws social facts doesn’t distinguish law from morality, since a moral system in a society is also a “social fact”: what distinguishes the two is that law has its origin in legislative declarations while morality does not.

 

  3. Some have supposed that “good” denotes a simple unanalyzable property, but no such view has been held for “law”. That is as it should be because it is not difficult to analyze the concept of law into several components (or no more difficult than other complex concepts such as knowledge or game). Thus we can venture the following definition: a law is a legislated norm backed by sanctions. That is certainly not true of a moral precept. We need to bring in sanctions because they are so characteristic of a legal system and because without them law has no bite—people won’t obey laws without sanctions. A possible world in which there is a system of law governing a society but there are no sanctions associated with it is not a real possible world. The sanctions provide prudential motives for action (morality provides its own motivation). A prime constraint on legislation is that contradictory laws shall not be passed, and there is a distinct possibility that this could happen if the legislators are not careful (contradictions are not always obvious). But there is no such danger from morality, which is internally free of contradiction, not being the result of human belief or declaration (like reality in general). A legal system is a kind of propositional artifact and it can have defects and gaps in it. Hence laws can be in principle inconsistent as well as immoral. This is why there is no plausible “legal realism” like moral realism—because law is not mind-independent: it is immanent in human practice.

 

  4. We should not exaggerate law’s independence from morality, distinct though these systems are. As noted, laws purport to respect moral principle and can be criticized for failing to do so. Also they arise from motives of a broadly moral nature: they are intended to serve the common good (or at any rate the good of certain preferred types of person). They are not stipulations made in a vacuum but designed to further moral aims. A dilemma has been supposed to arise here: either laws are inherently ethical or they are not. If they are, then law and morality are identical or overlapping domains; but if they are not, then law has no moral force and there is no such thing as legal obligation. It seems to me that there is a third way here: this is the idea that law constitutes a secondary morality existing alongside the primary morality. Law acts like morality without being morality, at least as morality is ordinarily conceived by philosophers. There is a rough analogy with primary and secondary qualities: the primary qualities characterize basic reality while the secondary qualities exist beside them in closer proximity to human sensibility. But both are qualities of objects; it isn’t that secondary qualities are not qualities at all. Similarly laws are moral edicts but they are not identical to more basic moral edicts. Thus we can readily convert a moral precept into a law, as when we declare stealing illegal (or slavery). It doesn’t lose its moral standing by being so converted; indeed it inherits that standing. But it now belongs in a separate cognitive system subject to different constraints and standards. We can imagine beings whose whole moral outlook is constituted by laws (indeed some humans are like this) and it would be wrong to declare them morally void. Children occupy this cognitive territory when their notion of morality is fixed entirely by the commands of parents.
We really have two systems of morality in our heads, between which it is easy to get confused; it is not that law removes itself from the realm of morality and becomes completely value-free—as if it were nothing but so much social engineering. This explains why people are often so torn when they perceive certain laws to be fundamentally immoral: this is a conflict within their moral faculties not just a conflict between morality and the extra-moral. The analogy with etiquette may be helpful: are the rules of etiquette simply detached from moral rules, since they are certainly not identical to moral rules? No, because good manners are regarded as a secondary form of morality—parasitic perhaps but not devoid of moral clout. One really ought to have good manners (as socially determined) out of consideration for the feelings of others. We shouldn’t be “etiquette positivists” holding that good manners have nothing moral inherent in them, yet we shouldn’t simply identify etiquette with morality. We have a kind of secondary morality here, not an abrupt switch from the moral to the non-moral. We should picture our moral faculties as consisting of a central core of basic moral principles surrounded by a penumbra of outlying moral systems (habits, proclivities). Law, like etiquette, is an application of morality suited to certain ends, suitably supplemented and adapted. We need to be expansive and pluralist about the nature of moral obligation.

 

  5. Are laws rules? This is not a helpful way to think. They are clearly not like the rules of a game precisely because the practice of law is not a game. The rules of games prescribe (and proscribe) actions that aim to achieve ends by indirect and inefficient means (see Bernard Suits), but the “rules” of law don’t tell us how to play a game using such means—we must obey the law by the most efficient means possible. Nor is it clear what the purpose of legal rules might be—ditto for so-called moral rules. We can talk this way if we like but the theoretical or conceptual payoff is minimal at best, and is likely to promote forced analogies and misleading conceptions. Breaking the law is not like breaking the rules of chess: if you commit a murder it would be strange to be told that you are going to jail for life because you broke the rule against murder; rather, you are going because you murdered someone. The law is no more rule-like than morality.

 

  6. One’s reason for obeying the law can only be prudential (avoiding sanctions) or moral (the law codifies the good); there is no such thing as a specific form of legal obligation or reason for action. In the case of a law perceived to be wicked the only reason to obey it is prudential. However, since the law is a secondary form of morality it does allow for an extra layer of reasons governing our actions: for we now have two sorts of moral reason for acting. One hopes these harmonize (similarly for etiquette) but they might not and then one has a conflict within one’s overall morality. A part of you may judge that a particular law is too strict and inflexible in certain circumstances, going by your core morality, but you obey it anyway because you think that it is basically a good law not wide of the moral mark. The situation is not so different from what we find within core morality itself, because here too we have different systems that don’t always harmonize—as with deontological precepts and consequentialist principles. Arguably, we have two coexisting moralities within us, which don’t always see eye to eye; well, our attitudes to the law are similar in that our thoughts about the law are themselves morally suffused. There are many moral “oughts” not just one, and each occupies a place in our total moral outlook. So the dilemma “moral versus non-moral” as applied to the law is too simple. Legal moralism is thus to be preferred to legal positivism (construed as denying that laws carry any moral weight in themselves), though it is a mistake to try to reduce legal obligation to moral obligation (again, compare etiquette). In any case, there is no category of reasons for obeying the law beyond the moral and prudential, so nothing sui generis about legal obligation.

 

  7. There can be wicked moral beliefs and practices as there can be wicked laws. In the former case wickedness is relative to correct belief: moral reality can correct erroneous moral belief. But in the latter case we can’t say this, not with any plausibility anyway: if a law is wicked we can’t say that it is wicked relative to the correct law, as if this existed independently of human legal systems. There are no ideal laws that we are trying to capture and possibly failing to capture—a legal reality outside of legal belief and stipulation. True, one legal system can be superior to another, but there are no objective laws that set the standard—laws outside of human practice. Morality is the proper source of criticism of laws not supposed ideal Platonic laws. It is the same with etiquette—it is subject to moral criticism but not criticism from some supposed ideal set of rules of etiquette (as it might be, the etiquette of the gods).

 

  8. It may be useful to distinguish between laws as they exist objectively in various social institutions and laws as they are understood by people subject to them. After all, laws only get purchase on people’s conduct by way of their mental representation of them. And the law might have different functional properties in its two manifestations. People often have an imperfect understanding of the law as an objective institution while carrying around with them their subjective idea of what the law requires of them. A philosophy of law should address both topics. In particular, the authority of law really depends on how people understand it, i.e. their disposition to accede to its demands results from their own subjective representation of it. Maybe the external phenomenon doesn’t function as a moral system for people, but their internal representation may: this is where the secondary morality exists and operates. Internalized law functions as an ancillary moral system capable of providing moral reasons even if external law does not. How people think of the law does very often correspond to what they think the law ought to be, and that is a moral “ought”. It is the same with rules of etiquette: what the prevailing norms objectively are in a social group is not the same as how individual people conceive of these norms (this can give rise to much social comedy). It is the latter that functions as a secondary moral system.

 

  9. I started by comparing “It is illegal to A” with “It is immoral to A”, noting the linguistic similarities. But there is a semantic difference that makes all the difference: the former statement makes implicit reference to certain kinds of symbolic acts while the latter does not. An action is illegal only because it has been declared to be so by some authoritative body: being illegal requires being said to be illegal. So the original statement is tacitly metalinguistic, being equivalent to “Acts of type A have been declared illegal”. But the corresponding moral statement is not metalinguistic, because morality does not depend on human stipulation or decision (or divine). But this important difference does not preclude morality from entering into the law; it does not make the laws merely a set of facts about what people have declared. Laws must proceed from moral motives (possibly misguided) and can be criticized in the light of moral considerations; they are not value-free social facts. Law is better described as a secondary moral system linked to the primary one, though not reducible to it. Imagine a society that instituted a set of laws governing prudential behavior: you must eat this and not eat that, no going out in the cold without warm clothing, no watching too much TV, etc. The aim of these laws is purely to improve the individual’s wellbeing not to govern interpersonal relations. It would be strange to say that this system of laws has nothing intrinsically to do with prudence just because some of the laws might be misguided, and strange too to maintain that the laws are the same as the precepts of ordinary prudence. The laws are something additional to, but reflective of, the underlying precepts of prudence. In time they might become a secondary system of prudence, especially given that sanctions are applied for non-compliance.    [1]

 

Colin McGinn          

 

 

    [1] I wrote up these notes after reading Nicola Lacey’s biography A Life of H.L.A. Hart (2004). I have read nothing in the literature of the philosophy of law but found myself thinking about the questions raised by my reading of this book. These notes merely record my thoughts and reactions and make no claims beyond that. They are not intended for publication.


Multidimensional (Inclusive) Semantics

                                   

 

 


 

 

I address you today in a spirit of inclusiveness and diversity. For too long semantics (theory of meaning) has been confined to a single type of entity held to constitute all that meaning encompasses (or a couple of entities, closely related). We must broaden our horizons and recognize that many kinds of entity contribute to the overall significance of an expression, often emanating from different traditions and regions. Above all it is reference that has proved hegemonic, squeezing out other contenders for semantic acceptance. Whether that notion is phallocentric to boot I shall not venture to say [1]; what I shall say is that we need a far more inclusive and diversity-driven approach to semantics. Semantics correctly conceived is a rainbow. [2]

            It used to be that only reference (denotation) was admitted into the semantic club: the meaning of an expression was its denotation. This was the view of Lord Bertrand Russell, English aristocrat and logic whiz (Western logic). Definite descriptions had to be distorted beyond recognition in order to fit them into this narrow picture (a form of linguistic colonialism perhaps). In any case, this approach, hailing from John Stuart Mill, another privileged upper-class Englishman (and we duly note the gender), held sway until a rebellious German, a certain Gottlob Frege, added an extra element to the story—what he provocatively labeled sense. This was an improvement, breaking the stranglehold of the English referential aristocrats, but sense was conceived as the mode of presentation of the reference; so reference was still occupying center stage, with sense acting merely as its reflection or image, i.e. how we view reference. (Can we say that while reference is the phallus sense is its codpiece?) Still, the basic monism is firmly in place: semantics remains one-dimensional, or at least one-and-a-half-dimensional. Not till Ludwig Wittgenstein arrived (also a white male aristocrat) was this monism seriously questioned and a certain kind of pluralism put in its place—with all the variety of language emphasized and celebrated. This was a welcome development in the openness of semantic studies, even allowing for the existence of actual workingmen (those builders of the early Philosophical Investigations—though again we must note the gender bias). But instead of embracing diversity the Austrian aristocrat insisted on imposing a new one-dimensional hegemony—all meaning is use. Reference drops out of the picture entirely, as if use has ousted it altogether. We don’t have use and reference but use and not reference. The old exclusiveness survives in a new form, less rigid perhaps, but with the same drive towards uniformity.
One half expects the use to be restricted to only the most privileged of users! This entire trajectory then reaches its climax, i.e. nadir, in the person of Sir Michael Dummett, a white male Oxford philosopher, whose main mantra is that everything about meaning should be explained by one central concept—such as truth or verification. There could not be a more blatant hegemony! Nothing is to be included in meaning except what can be subsumed under a single conceptual category: you are welcome to join the semantic club, but only if you are properly related to the concept of truth (or verification). No diversity allowed!

            At this point I shall drop the political backstory and proceed immediately to theoretical matters, though I trust my enlightened readers to keep that political context always in mind. And let me lay my cards on the table right away: I am all in for maximum semantic inclusiveness with as much diversity as possible (within reason of course). Not just two-dimensional semantics, or even three or four, but many dimensions, indefinitely many—as many as we can come up with. Fortunately, we have this diversity already lying around—it requires no strenuous inventing on our part. I have prepared a long list: reference, sense (mode of presentation), tone, character and content, intension and extension, grammatical mood, inferential role, rules, stereotype, mental image, individual and social understanding, ideas, brain states, use, conceptual analysis, truth conditions, criteria, causal chains, and whatever else comes to mind. For my contention is that all of these may be reckoned to the meaning of a word or sentence: not one of them to the exclusion of the others, but the whole lot. They don’t exclude each other but coexist peacefully. For example, a proper name, say “Aristotle”, has reference, sense, an intension and extension, a character (constant in all contexts), a role in inference, an associated stereotype (“bearded cogitating Greek man”), individual grasp and socially agreed grasp, a use, a contribution to truth conditions, criteria of application (see stereotype), a causal-historical chain, even a tone (vaguely distinguished and admirable). From among this variegated list we may pick out sense and intension for instructive contrast: the former is defined in epistemic terms (mode of presentation and interchangeability in belief contexts) while the latter is defined in modal terms (functions from worlds to extensions). These are by no means the same notion, but they equally belong to a single name, existing side by side in perfect harmony.
There is no point in arguing that one is the real meaning and the other a mere impostor: both belong to the overall semantic significance of the name. Both are attributes the name has, and they clearly flow from what it means (not what it sounds like). Meaning is multi-dimensional, diverse, and inclusive. No doubt there are interesting relations of dependency between these various elements, which may be studied, but the plurality is irreducible—part of meaning’s rich pageant. We can even throw in some Meinong-style ontology if that is to our taste, assigning to so-called empty names a subsistent entity as reference, or what is called an “intentional object”. A committed Kantian might insist that reference be divided into phenomenal reference and noumenal reference. A follower of Sir Arthur Eddington might propose a double reference for “chair”: the commonsense chair and the chair of physics. The possibilities are endless, to be considered on their merits; but they should not be rejected simply because of some presumed one-dimensionality in meaning. In the theory of meaning our adage should be, “The more the merrier”. Plurality is a sign that we have not omitted anything, not a symptom of conceptual chaos or indecision.

            It may be remarked that the situation in other departments of linguistic theory is already happily pluralist. Consider the theory of syntax, taken to include the study of the sound system of a language. There is no one central concept here to which others must bow down; instead there are layers and dimensions. We can study speech as an acoustic phenomenon (as with a speech spectrograph), or as an articulatory system, or as embodied in the brain, or computationally. None of these competes with the others; all are legitimate and important. Syntax more narrowly conceived is typically understood as consisting of layers of rules, which may be viewed computationally or in terms of brain mechanisms. These are all aspects of the “formal” properties of language, and they all coexist—people don’t go around complaining that someone else’s pet theory isn’t really about syntax. Syntax isn’t one-dimensional. Similarly, in pragmatics there is room for a diversity of perspectives—not a single overarching concept. Thus there is no inconsistency between Gricean, Austinian, and Wittgensteinian approaches to (philosophical) pragmatics: all can be true and illuminating in their different ways. After all, there are many aspects to the employment of language by people, and we should not expect to be able to subsume them all neatly under a single heading. For example, an utterance of “Shut the door!” may be made with Gricean intentions, while having an Austinian perlocutionary effect, and occurring within a Wittgensteinian language game. Then too, we may approach pragmatics from an individual’s perspective, studying the way language is used as a tool of thought (say), or we can approach it socially, studying how language is used in interpersonal communication. There are indefinitely many possible ways to do pragmatics, as there are multiple ways to do syntax; and there is no reason semantics should be an exception. There are multiple components across the board.
The fact is that the list of concepts I gave represents a variety of insights into meaning on the part of different thinkers, each valuable in its own way, and there is no necessity to reject some in favor of others. I don’t mean to say that no semantic theory can conceivably be false, just that the fault is usually incompleteness, not outright error. Apparent inconsistencies often melt under more tolerant investigation (as with Fregean versus Kaplanian approaches to indexicals). I used to be all in favor of “dual component” semantics, but really we should expand the dimensions dramatically to accommodate everything that characterizes meaning. The concept of meaning is a multi-dimensional concept incorporating a large variety of factors. It is not a simple thing like being square or red; it is more like the concepts of democracy or marriage or success. It contains multitudes.

            Let me return to my political platform, because I was not being entirely frivolous (though mainly so). In ethics there has historically been a tendency towards monolithic theories, as with utilitarianism and Kantian ethics. It was left to more ecumenical ethicists like W.D. Ross to advocate a pluralist reconciliation between these apparently competing systems, thus producing a multi-dimensional ethical system. It is easy to see this development as an integration of different political perspectives—the pure will of the privileged autonomous agent versus the maximization of happiness in a suffering population. In the case of semantics we also have a politically contested domain, because language is spoken by diverse groups of people each with their purposes, positions, and ways of life. It would not be amazing if a certain kind of linguistic hegemony were in effect according to which only certain aspects of meaning are deemed “proper”, the rest consigned to illegitimacy and disdain. Hence we get the idea of the logically perfect language. The messy reality of meaning might not receive its due recognition because of an ingrained habit of favoring some things over others. There is always something evaluative in theories of meaning, as if only a certain dimension is deserving of respect. Why has tone not received the attention it deserves? Could it be that its prime examples are racial slurs and sexist language? Why would people want to explore the expression of their own prejudices and hostilities? Speaking very broadly, there is something democratic about meaning: everyone speaks no matter his or her social class or place in society, and meaning itself combines disparate elements jostling together. Oversimplifying culture from political motives is not so far removed from oversimplifying language from similar motives. The habit of exclusivity is deeply rooted and ubiquitous. 
At the least it can operate as a factor in determining what theoretical options people tend to take seriously. Semantics is political too.  [3]

 

Colin McGinn              

           

  [1] I have no wish to wax psychoanalytic, but isn’t the notion of reference suspiciously phallic (at least as phallic as some of Freud’s phallic symbols)? It seems to involve a kind of mental protrusion, as the act of reference extends outward to make contact with objects in the environment. People sometimes talk of reference as like tentacles reaching out to grasp, but other organs of the body can reach out and make contact too. And what about pointing? The pointing finger has a rigidity and angle not unlike… And then there is “rigid designation”, a phrase that trips suspiciously easily off the tongue. Just saying.

  [2] Light can appear homogeneous, but the rainbow resolves it into an array of separable hues. Meaning can seem homogeneous too until we resolve it into its components.

  [3] For all I know intellectual traditions from beyond the West have suggested aspects of meaning Western thinkers have missed. If so, I cordially invite them in.  


Muddy Waters

                                               

 

 

 


 

 

Causation is one of those philosophical topics that drive you up the wall. As soon as you start to think about it you draw a complete blank. As Hume observes: “There are no ideas, which occur in metaphysics, more obscure or uncertain, than those of power, force, energy, or necessary connexion” (Enquiry, p.45).  [1] The cement of the universe is so much muddy water. Of course philosophers have done everything in their power to hide this fact from themselves, even going so far as to try to reduce causality to mere regularity. Hume’s own view was that causation is real but incomprehensible (by us). It is neither an affair of the senses nor of reason: we have no sensory impression of necessary connection (which is definitive of causation) but neither is causation grasped by reason (like logic, arithmetic, and geometry). It is a real relation between things but it is not revealed by perceiving them or by merely thinking about them. It fits neither empiricism nor rationalism. It sits uncomfortably between the two, awkwardly and inscrutably. No matter how much you gaze at an object or reflect on it you will never discover causation (as it exists in that object). But perception and reason are our only faculties of knowledge, so the mind draws a blank on the nature of causation. Yet we constantly refer to it, rely on it, and assume its reality. Evidently we can know that it obtains, relying on the observation of regularities of nature, but we can’t fathom its inner nature—or even fathom our lack of fathoming.

            The problem concerns not just particular causal relations but also the notions of law and power (disposition, capacity, potential). Objects fall under causal laws and have causal powers: this is why they have the effects they have. But laws and powers are at least as inscrutable as causal relations between particulars; this is an interconnected knot of problems. Perhaps the notion of power concentrates the problem most acutely (as Hume intimates): how are the potential effects of a cause contained in it? Are the effects somehow already present in the cause? Does the cause “refer” to the effects? Are there shadows or signs of the effects lurking silently in the cause? But you can’t discern anything like this if you examine the cause, even going down to its elementary constituents. When a moving object imparts motion to another object by collision, is the other object’s motion somehow prefigured in the moving object? It had the potential of creating that effect, so doesn’t it already contain it in some way? What is potential? It’s a bit like the way the meaning of a word “contains” its uses: they are implicit in it, packed into it—but what does this way of speaking amount to?  [2] The problem of causation is how an object can contain what it does not contain. If we think of a cause as a conscious being for a moment, it is as if it knows what effects it can bring about, but only unconsciously: it doesn’t have these effects before its consciousness, but it is subliminally aware of them—they are implicitly known, not explicitly known. But that looks like a dodge: they aren’t anticipated in any way that we can discern—the mind of a cause is blank about its future effects. Yet it has the power to bring precisely these effects about, and this power is internal to it, so… Thus the waters fill with mud.

            What can we say positively about causation? The logic, semantics, and conceptual analysis of “cause” are not so baffling. Thus “x caused y” expresses a relation that is irreflexive, asymmetric, and transitive: nothing can cause itself, effects can’t cause causes, and the effects of effects are caused by the initial cause. Semantically, it is plausible to suggest that “cause” generates a transparent context and expresses a relation between events (though this view is not without its critics). The concept may also be analyzable in terms of counterfactuals or other necessary and sufficient conditions. So it is not that we can say nothing about the word and what it means—and much philosophical energy has been expended on these worthy tasks. But they don’t touch the underlying metaphysical and epistemological questions, the ones so memorably raised by Hume. What exactly is causation, and how do we know about it? Specifically, what is it to have a causal power, and how can we know causal powers? There are suggestions—there are always suggestions. One suggestion is that a causal power is identical to a structural property of the object, as it might be, molecular structure. But this just pushes the question back: how is the power present in the structure? Isn’t it as invisible as ever? Nor can it be excogitated by pure reason. It can’t be seen and it can’t be deduced—it flouts both empiricist and rationalist epistemology. It isn’t a posteriori and it isn’t a priori. It is a peculiar kind of fact, being neither perceptible (even by extended perception: microscopes etc.) nor rationally apprehended. As Hume would say, it is neither a “matter of fact” nor a “relation of ideas”; it hovers ambiguously between the two. No wonder there has been a marked tendency towards elimination: causation must either be reduced to facts less problematic (regularities, dispositions to project) or eliminated outright.
To accept it as it is runs into insurmountable metaphysical and epistemological difficulties. Indeed, it threatens to bring down the most fundamental structures of philosophical thought.
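The logical properties of the causal relation mentioned in the preceding paragraph can be set out compactly. Here is a first-order sketch, writing C(x, y) for "x caused y" (the predicate symbol is my own convenience, not standard notation from any particular text):

```latex
% Properties of the causal relation C(x,y), read "x caused y"
\forall x \, \neg C(x,x)
  \quad \text{(irreflexive: nothing can cause itself)} \\
\forall x \, \forall y \, \bigl( C(x,y) \rightarrow \neg C(y,x) \bigr)
  \quad \text{(asymmetric: effects cannot cause their causes)} \\
\forall x \, \forall y \, \forall z \, \bigl( C(x,y) \wedge C(y,z) \rightarrow C(x,z) \bigr)
  \quad \text{(transitive: effects of effects are caused by the initial cause)}
```

Note that asymmetry already entails irreflexivity (put y = x in the second axiom), so the first axiom is strictly redundant; it is listed separately only because all three properties are named in the text.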

            So why not just bring them down? Because we have nothing to put in their place, that’s why. It is not as if we have some other way to think about causation that we can substitute for the old inadequate dichotomies: the waters are thick with mud and our vision fails us. It is really a horrible problem. Best not even to go near it; just leave it alone to fester. But maybe we can articulate the problem better, gain a better sense of its dimensions and density. Can we at least pinpoint why it is so difficult? It is in some ways worse than the problem of consciousness, because in that case at least we know what we are talking about—we don’t just refer to consciousness, we experience it. But we don’t experience causation (as opposed to its symptoms), despite our readiness to refer to it. We refer to a we-know-not-what. Appealing to mental causation won’t help, despite our immediate acquaintance with the mental phenomena between which causation holds: for mental causation is as opaque as physical causation (as Hume noted). We can say that physical causation is no less active than mental causation; the will is not somehow a livelier form of causation. Nor is causation by physical contact more transparent than causation-at-a-distance, since its operation is as obscure there as it is in the remote case. In this respect old-style mechanism offers an illusory paradigm of transparency (this was Hume’s central insight, in effect): it isn’t that causation by contact is quite clearly grasped while causation-at-a-distance must be deemed “occult”. Neither is really intelligible to us, not when you get right down to it. Hume’s billiard balls hit each other, unlike orbiting planets, but their causal powers are no more evident to sense or reason than gravity. For some people this was taken as a reason to eliminate causation altogether from physics, and one can appreciate the motivation.

            Can we be more constructive? I think we can say two positive things, though nervously. The first is that nature must be more tightly interlinked than we tend to suppose, going by the appearances: causation connects things because any effect of a cause must be somehow written into the cause (though not in a way we clearly conceive). The colliding billiard balls don’t appear to sense perception as having any intelligible connection; nor can human reason discern any such connection: but they must somehow be intelligibly linked. The laws of nature essentially relate separate things, because causal powers are essentially powers to bring about certain specific effects: an object x has the power to make an object y have a property P. There is thus more “holism” at work in nature than is apparent to our epistemic faculties. We could introduce the idea of the “causal boundary” of an object to signify the class of objects that fall within its causal reach—for instance, the class of solids a given liquid can dissolve. This class falls within its causal boundary but not its spatial boundary. Then nature will be said to consist of the totality of such causal boundaries—these are the true units of nature.

The second thing we can say is that whatever causal powers are, they must be very different from their manifestations in observable phenomena. This is because the manifestations never add up to a causal power, as it exists in objects. It can’t be mere regularity and it can’t be a “categorical base” (e.g. molecular structure)—these are not what the power is, or else we would know what it is. Powers must be as different from their manifestations as mental states are from behavior—perhaps more so. Potentiality must be different from actuality; yet the two must be intimately related. I can’t tell you how potentiality differs from actuality because of its obscurity, but it evidently does differ, dramatically so. (Or else actuality is merely the way potentiality looks to our senses and doesn’t go deep ontologically speaking.  [3])

            Would other things become clearer if we had a better grip on causation? Anything in which causation directly figures would be—laws of nature, the origin of the universe, the operation of fundamental particles. But it might also help with the mind-body problem and the free will problem: How is the mind caused by the brain? How are free actions caused? Certainly this is an enormous gap in our understanding of nature, what with the ubiquity of causation, but a gap there seems little prospect of filling. The water may remain forever muddy.  [4]

 

  [1] I will be accepting the “skeptical realist” view of Hume in what follows, according to which causal powers are real existences that defy our limited understanding, not the positivist interpretation of Hume according to which the concept of causal power is incoherent and should be rejected.

  [2] I am alluding to Wittgenstein’s discussion of meaning and use in Philosophical Investigations. He explicitly connects this issue to that of causal powers in the sections on machines (193, 194).

  [3] Such a view is suggested by the thesis that all properties consist of causal powers. Then there is nothing to nature but powers.

  [4] This essay is intended to reflect the inadequacy of our understanding of causation, containing little in the way of genuine illumination. But perhaps it serves to scratch the depths.


Moral Subjectivism Defeated


 

 

 

Moral subjectivism claims that what we think of as moral values reduce to moral beliefs: things are wrong because we believe they are wrong. It is not that we have moral beliefs because of moral facts, which may be cited to justify the belief; rather, the so-called moral facts are just our moral beliefs. To say that murder is wrong is to say that we believe it is wrong. In the case of a solitary individual the values he accepts are simply what his value beliefs happen to be. There cannot be any divergence between moral facts and moral beliefs, since moral facts are moral beliefs. But this position faces the following question: how does the moral agent set about justifying his moral beliefs? If you ask a moral objectivist what justifies his moral beliefs, he will answer by citing a moral fact—say, that murder is wrong. But if you ask a subjectivist the same question, he has no resource other than to say that he believes a moral proposition. He believes it because…he believes it. But that is no justification: a belief cannot be justified by itself. It must appeal to something other than itself as justification—it can’t be its own reason. So a consistent subjectivist has to abandon moral beliefs, perhaps leaving only “gut feelings” that don’t call for justification. This doesn’t imply that objectivism is true, only that it must be taken as true even by an avowed subjectivist—unless moral beliefs are abandoned. Moral belief logically requires commitment to moral objectivism, i.e. the denial that values reduce to beliefs. There must be more to morality than moral beliefs on pain of excluding justification for those beliefs, and hence abandoning the beliefs. Don’t say the beliefs are basic and require no justification, because subjectivism implies that they do have a justification—themselves. But beliefs can never justify themselves: it is never a justification for a belief to report that one has the belief.
The justification must be something logically separate and not identical to the belief itself. Thus moral subjectivism is self-defeating.

 


Moral Minimalism

                                               

 

 


 

 

I shall explore the prospects for a minimalist theory of normative ethics. By “minimalist” I mean a theory (analogous to minimalism in linguistics) that seeks to base normative ethics on the most exiguous of foundations, viz. a single moral principle, with other aspects of the ethical life consigned to something extraneous to morality strictly conceived. The moral principle in question is exceedingly familiar: DO NO HARM. That is all that morality contains, according to the minimalism I envisage, neither more nor less. The only moral principle is the injunction not to do harm. Usually this principle is included in a total utilitarian package: Do no harm and maximize wellbeing (welfare, the good, happiness, pleasure). I propose to drop the second conjunct so that morality only prescribes the avoidance of harm. Clearly the two conjuncts are logically independent, though the second is generally taken to include the first: if our aim is to maximize wellbeing, it should surely include minimizing harm. But we may live in a possible world in which there is no harm to be undone or produced, yet still we are subject to an injunction to maximize wellbeing—we must increase the level of wellbeing even if there is no suffering to be eliminated and none that can be produced (this is a world of harm-proof people). More obviously, one could accept the injunction not to harm while rejecting the injunction to promote wellbeing: I mustn’t harm anyone, but I have no duty positively to improve anyone’s lot. For example, I must not strike an innocent man for no reason, but I am under no obligation to make him happier than he already is. So I propose dropping the second injunction while insisting on the first. I call this position “disutilitarianism” because it emphasizes the avoidance of disutility, not the production of utility. It is a negative prohibition: it says what we must not do, not what we must do.
We must not cause harm, though we have no duty to cause its opposite (if it has a real opposite)—we have no duty to maximize the general good, or even to produce it in a particular case. There is a duty against maleficence, but no duty of beneficence.

            Let me immediately address a natural objection, namely that it is clearly morally praiseworthy to promote the good. I don’t disagree, though there are notorious cases in which promoting the good is not the morally right thing to do (the bane of utilitarianism); but I would distinguish between what morality requires and what it is admirable to do. It certainly shows the virtue of generosity to help the poor and needy, but that is not the same as saying that this is a moral duty. It may just be supererogatory. We have a duty not to harm, but we have no comparable duty to make people happier—though it might be virtuous so to do. I will come back to this point, noting now only that moral minimalism does not preclude acting virtuously in promoting wellbeing; it claims only that this is not part of morality in the strict sense. We might even say that not causing harm isn’t a virtue at all, being merely our most basic moral obligation—there is nothing virtuous in declining to strike an innocent man for no reason. Duty and virtue are separate domains.

            A main reason for advocating moral minimalism as against full-blown utilitarianism is that the stronger doctrine runs into well-known problems. I won’t rehearse these problems, but they concern considerations of justice and the problem of moral inflation, whereby we turn out to be the moral equivalent of murderers by not helping starving people in distant lands to the point of self-impoverishment. What is crucial, I think, is that there is a deep asymmetry between harming and benefitting: we have an absolute duty not to do the former, but the latter is optional. Partly this is because of the difference between pain and suffering, on the one hand, and happiness and wellbeing, on the other: the former are clearly defined and obviously bad, while the latter are amorphous and not invariably good (e.g. the pleasure-loving happy sadist). The dentist must do his best to avoid hurting you, but he is under no obligation to make you feel happier when you leave his office than you were when you came in—and what exactly would that be? He knows how to avoid harming you, but he may have no idea what would make you happier (a joke, a donation, a pat on the back?). So the harm principle has a different deontic status from the benefit principle. This is of course exactly how we operate in daily life: you avoid stepping on people’s toes as you walk down the street, but you don’t try to cheer everyone up as you pass them by. They will blame you for hurting them, but not for failing to improve their mood. They may think that that is none of your concern, while avoiding crushing their toes indubitably is. So we can say that the harm principle has a greater hold on us than the benefit principle; I propose accordingly that we restrict morality to the harm principle.  [1]

            It is a significant fact that all the standard rules favored by the deontologist can be seen to stem from the rule against causing harm. Breaking promises, lying, stealing, assaulting, murdering, acting unjustly—all involve causing harm to others. These rules are prohibitions designed to minimize suffering, ranging from disappointment to physical agony. None of them reflects the utilitarian’s insistence that we should maximize wellbeing—as if by sitting at home doing nothing we have committed grave evils. Of course, it is possible to harm by omission—and that is equally proscribed by the harm principle. You can fail to save someone from being hit by a car, so that your omission harms him or her. But doing nothing to make people happier is not ipso facto a form of indirect harm. We can’t somehow squeeze beneficence in under non-maleficence. The usual rules of morality concern things we are not to do (“Thou shalt not…”) and they all concern the harms that result from doing these things. Bringing each of these specific rules under the harm principle effects a major simplification, making moral thinking easier to manage and sharper in focus. All we really need to remember—all we need to know—is that it is wrong to cause harm. Whenever you are faced by a difficult moral choice you need only ask yourself what action will cause the least harm and then do that. For instance, you should not break a promise to meet A because meeting B instead will increase the total level of happiness in the world; you should avoid harming A by leaving him hanging (maybe suggesting to B that she find something else to do). It is no small advantage to morality that it should be codified in a single easily remembered slogan. Children need to be instructed in it, and many adults have no aptitude for moral complexity, so keep it simple.

            Can you harm someone in order to benefit him later? If so, there is no absolute ban on causing harm. Here we need to distinguish two cases: causing harm now to prevent greater harm later, and causing harm now in order to increase happiness later. The dentist drills the tooth now in order to prevent the pain of later toothache, so she is minimizing pain in the long run: that is morally acceptable and in accordance with the harm principle. But it is another thing entirely to try to justify causing harm now by citing future benefits that don’t involve harm minimization—as it might be, applying the rod to the child in the expectation that she will grow to be happier than she would be otherwise. This is far from obviously acceptable and it gains no support from the harm principle, which speaks only of minimizing harm, not maximizing happiness. Omitting to do something harmful today can cause greater harm tomorrow, and is therefore morally proscribed; but omitting to do something harmful today that will result in less overall happiness in the future is not to be morally condemned (except by the rigid utilitarian). Even if beating children is known to make them happier in later life, that is no ground for beating them—though if it will spare them excruciating suffering later, then it should be done (however reluctantly). We must always seek to minimize harm, even if harm is necessary to bring that about; but harm can’t be justified by considerations of overall utility, as if pain now is made up for by elation later (as opposed to mere contentment).

            It is important to minimalism to distinguish what it is good for a person to do from what it is morally obligatory for a person to do. Minimalism is only a theory of the latter; it is neutral on the broader question of virtuous or admirable conduct. Living a good life includes acting generously and kindly, even if no harm is reduced thereby. That may seem to leave a lot of moral life outside the scope of the minimalist theory, but in fact it covers more than might be supposed. For much generosity and kindness involve the avoidance of suffering, not merely the production of utility. You can harm someone by not being concerned about his or her welfare, as when you callously decline to give food to a starving person. But not all generosity is like that, as with the generous host: she is not avoiding harming her guests by laying on a great feast, but rather adding to her guests’ enjoyment. That is what is not morally required—increasing other people’s happiness. By not voting for tax increases to help the poor you may be harming them indirectly and by omission, so this falls under moral criticism; what does not invite moral criticism is declining to share your resources with people already amply resourced. So quite a lot falls under the prohibition against causing harm, not merely refraining from attacking people directly (animals too). Someone might be exceptionally generous with his friends, always treating them to fancy dinners and the like; that may be commendable, but it is not morally obligatory. This is a distinction well worth preserving, and it is a virtue of minimalism that it makes the distinction firmly (unlike classical utilitarianism). Much virtuous behavior is discretionary, but moral behavior never is—it is strictly obligatory. Being a miser may not be admirable, but it is in a different category from being a sadist. The paradigm of the immoral act is maiming someone, not providing a thrifty meal instead of a lavish one.

            Is the anti-harm theory deontological or consequentialist? You can take it either way, either as a moral rule or as a statement about consequences. That is, you can say that an action is right if and only if it actually minimizes harm, or you can say that the agent must always intend to minimize harm and that this is what makes the action right, not its actual consequences. I prefer to think of it as an absolute general rule with a number of sub-rules as special cases (such as “Don’t break promises”), but clearly the consequences are crucial in justifying the rule—pain and suffering being bad things in themselves.

            I would emphasize the formal merits of the minimalist theory. It is simple, clear, manageable, and practicable. It is intuitively compelling and scarcely controversial in its recommendations (unlike utilitarianism). Its only questionable claim is that there is nothing more to morality than what it includes; but this is mitigated by the distinction between morality proper and what counts as virtuous conduct. It combines the best of deontology and consequentialism. It is what you would expect of a moral system that is designed to help people live together in close proximity. It is non-paternalist. It doesn’t seek to meddle in other people’s lives, as the prescription to make everyone as happy as possible does.  It has a pleasing homogeneity. It is readily universalized. It does not attempt to combine disparate ideas (as in W.D. Ross’s mixed theory). It is easily teachable. It does not call for extremes of altruism and intolerable guilt over never doing enough. It takes what is good in utilitarianism and discards what is bad. The disutilitarian is a realistic, clear-eyed, compassionate, commonsense type of fellow, mainly concerned to prevent pain and suffering. Everything else is icing on the cake. If he can prevent us from harming each other (animals included), he thinks he has done his moral duty. What we choose positively to do, as a matter of personal virtue, is our own affair and of no concern to morality as such.  [2]

 

Colin McGinn

  [1] A further asymmetry is this: the harm principle applies impartially to intimates and strangers, but the benefit principle applies differentially according to personal distance (at least according to common morality). The prohibition on harming covers everyone equally, but it is morally permissible to benefit members of your own family over others. This suggests that the harm principle is part of non-negotiable moral law, while the benefit principle operates according to personal discretion.

  [2] The disutilitarian might well contend (echoing Nietzsche) that morality since the advent of Christianity has indulged in a kind of duty-creep whereby virtuous behavior has been converted into a species of strict moral duty. Thus Jesus urges us to give to the poor and needy (defined relatively) and his followers have interpreted this as an extension of our moral duties. But that is not necessarily the right way to interpret the words of Jesus: he is not assimilating charity to the deontic level of non-violence, merely suggesting that we cultivate the virtue of generosity and not content ourselves with the mere observance of our strict moral duty. Perhaps under the influence of Christian ethics, as it came to develop, utilitarian ethics made a virtue of blurring the line between moral duty and personal virtue, thus assimilating the demerit of not being charitable to the demerit of violently assaulting people. That was a conceptual error and one the minimalist is anxious to remedy.


Moral Excess


 

 

God is said to be morally perfect. According to one interpretation, this means that God seeks to maximize the good—he is committed to making this the best of all possible worlds. Of course, that does not appear to be the case (pace Leibniz), thus producing the problem of evil. I want to put that problem aside and focus on the definition of moral perfection as maximizing the good. What does it mean exactly?

            Suppose that the basic goods are of three kinds: happiness, knowledge, and aesthetic appreciation. Then God’s obligation is held to be maximizing these goods, making sure they could not be improved upon. People must be happy, knowledgeable, and aesthetically appreciative. That sounds reasonable, but how far must God go to ensure that these goods are maximized? Suppose Anne is a happy person by any normal standards; however, she occasionally has a distressing thought or a feeling of mild remorse. Since she is not maximally happy, God regards it as his duty to step in and improve her state of mind, blocking such thoughts and feelings. Put aside issues of interference and paternalism: do we think that God is under any obligation to improve Anne’s mood in these ways? Are you under any such obligation with respect to people you know? Must every discomfort be removed, every desire sated, every thought made a happy thought? Surely not: that would be morally excessive. Should you feel guilty about not doing everything you can to remove every hint or smidgen of unhappiness from the world? No, and neither should God feel obliged to maximize happiness to such a degree.

            Or consider knowledge: should that be maximized? Suppose Jean is a very knowledgeable person, well versed in history, science, philosophy, and so on. We would normally think that we have no educational duties with respect to Jean. But Jean doesn’t know everything about everything; there is a lot that she doesn’t know. Is God obliged to step in to rectify these lacks, thus maximizing the good of knowledge? Is he letting Jean down if he doesn’t immediately install a full knowledge of botany? Maybe she would value such additional knowledge, but is God failing in his moral duty by not ensuring that Jean knows these extra things? Again, that seems excessive: there is no general duty to maximize knowledge—to make it as extensive as possible.

            Similarly for aesthetic appreciation: must it be maximized? Linda is a keen follower of the arts, cultivated and open-minded, as appreciative as anyone you know. But she doesn’t appreciate everything; she fails, say, to see the point of certain painting styles. That may be a limitation on her part, but the question is whether God has a duty to remedy this lack. If he does nothing, is he under suspicion of non-existence, granted that he is morally perfect by definition? Can God be criticized for not ensuring that Linda appreciates every work of art to the maximum? Surely that would be excessive, even if it would not be excessive for God to ensure that she has some aesthetic appreciation. That is, God has no duty to make Linda into the most aesthetically appreciative person conceivable—just as he has no duty to make Anne and Jean into the happiest and most knowledgeable people conceivable. He could achieve these things, but it would be excessive. It looks more like a form of moral obsession than a sensible moral outlook, like making sure not one speck of dirt remains on the kitchen floor.

            What should we conclude from this? The first thing is that there is such a thing as moral excess. This is not the same thing as acting in a supererogatory manner: that is not a form of moral excess, just a commendable wish to go beyond the call of strict duty. Moral excess is a kind of mistake, not a desirable trait. It is a miscalculation about what one should do. This means that any moral theory that recommends such excess is wrong about the nature of obligation and right action. It is just not true that we have an obligation to maximize the good—though we may well have a duty to bring about a certain amount of good. Ideal consequentialism is therefore false. Specifically, we don’t have a duty to rectify trivial suffering, especially of a normal human kind—such as the odd melancholy thought or a minor twinge or a little throb of lust. Nor does anyone have a duty to educate everybody in everything, or work assiduously at improving everyone’s appreciation of art no matter how much of an aesthete they already are. Such prescriptions are just far too general and onerous; what is needed is a more qualified principle—such as that people should be made moderately happy, fairly well educated, and not devoid of aesthetic appreciation. The other thing we should conclude is that if God is defined using the very strong principle, then God does not exist. I say this not because of the problem of (mild) evil but because moral excess is not an admirable quality, and God must be admirable in every way. If God thought that he was obliged to maximize the good in the very strong way, then I would think he was in error and didn’t understand the nature of moral obligation. But then he would not be a perfect being and hence not be God. Someone who can’t see that extreme pain imposes a duty to help is morally deficient, but someone who thinks that he is obliged to attend to every little discomfort is morally excessive. What if he felt this obligation intensely, berating himself for failures to carry out his moral duty? This is not sainthood but a form of madness. Moral excess is not a way of being moral.

 
