Multidimensional (Inclusive) Semantics

I address you today in a spirit of inclusiveness and diversity. For too long semantics (the theory of meaning) has been the preserve of a single type of entity held to constitute all that meaning encompasses (or a couple of closely related entities). We must broaden our horizons and recognize that many kinds of entity contribute to the overall significance of an expression, often emanating from different traditions and regions. Above all it is reference that has proved hegemonic, squeezing out other contenders for semantic acceptance. Whether that notion is phallocentric to boot I shall not venture to say [1]; what I shall say is that we need a far more inclusive and diversity-driven approach to semantics. Semantics correctly conceived is a rainbow. [2]

            It used to be that only reference (denotation) was admitted into the semantic club: the meaning of an expression was its denotation. This was the view of Lord Bertrand Russell, English aristocrat and logic whiz (Western logic). Definite descriptions had to be distorted beyond recognition in order to fit them into this narrow picture (a form of linguistic colonialism perhaps). In any case, this approach, hailing from John Stuart Mill, another privileged upper-class Englishman (and we duly note the gender), held sway until a rebellious German, a certain Gottlob Frege, added an extra element to the story—what he provocatively labeled sense. This was an improvement, breaking the stranglehold of the English referential aristocrats, but sense was conceived as the mode of presentation of the reference; so reference was still occupying center stage, with sense acting merely as its reflection or image, i.e. how we view reference. (Can we say that while reference is the phallus sense is its codpiece?) Still, the basic monism is firmly in place: semantics remains one-dimensional, or at least one-and-a-half-dimensional. Not till Ludwig Wittgenstein arrived (also a white male aristocrat) was this monism seriously questioned and a certain kind of pluralism put in its place—with all the variety of language emphasized and celebrated. This was a welcome development in the openness of semantic studies, even allowing for the existence of actual workingmen (those builders of the early Philosophical Investigations—though again we must note the gender bias). But instead of embracing diversity the Austrian aristocrat insisted on imposing a new one-dimensional hegemony—all meaning is use. Reference drops out of the picture entirely, as if use has ousted it altogether. We don’t have use and reference but use and not reference. The old exclusiveness survives in a new form, less rigid perhaps, but with the same drive towards uniformity.
One half expects the use to be restricted to only the most privileged of users! This entire trajectory then reaches its climax, i.e. nadir, in the person of Sir Michael Dummett, a white male Oxford philosopher, whose main mantra is that everything about meaning should be explained by one central concept—such as truth or verification. There could not be a more blatant hegemony! Nothing is to be included in meaning except what can be subsumed under a single conceptual category: you are welcome to join the semantic club, but only if you are properly related to the concept of truth (or verification). No diversity allowed!

            At this point I shall drop the political backstory and proceed immediately to theoretical matters, though I trust my enlightened readers to keep that political context always in mind. And let me lay my cards on the table right away: I am all in for maximum semantic inclusiveness with as much diversity as possible (within reason of course). Not just two-dimensional semantics, or even three or four, but many dimensions, indefinitely many—as many as we can come up with. Fortunately, we have this diversity already lying around—it requires no strenuous inventing on our part. I have prepared a long list: reference, sense (mode of presentation), tone, character and content, intension and extension, grammatical mood, inferential role, rules, stereotype, mental image, individual and social understanding, ideas, brain states, use, conceptual analysis, truth conditions, criteria, causal chains, and whatever else comes to mind. For my contention is that all of these may be reckoned to the meaning of a word or sentence: not one of them and not the others, but the whole lot. They don’t exclude each other but coexist peacefully. For example, a proper name, say “Aristotle”, has reference, sense, an intension and extension, a character (constant in all contexts), a role in inference, an associated stereotype (“bearded cogitating Greek man”), individual grasp and socially agreed grasp, a use, a contribution to truth conditions, criteria of application (see stereotype), a causal-historical chain, even a tone (vaguely distinguished and admirable). From among this variegated list we may pick out sense and intension for instructive contrast: the former is defined in epistemic terms (mode of presentation and interchangeability in belief contexts) while the latter is defined in modal terms (functions from worlds to extensions). These are by no means the same notion, but they equally belong to a single name, existing side by side in perfect harmony. 
There is no point in arguing that one is the real meaning and the other a mere impostor: both belong to the overall semantic significance of the name. Both are attributes the name has, and they clearly flow from what it means (not what it sounds like). Meaning is multi-dimensional, diverse, and inclusive. No doubt there are interesting relations of dependency between these various elements, which may be studied, but the plurality is irreducible—part of meaning’s rich pageant. We can even throw in some Meinong-style ontology if that is to our taste, assigning to so-called empty names a subsistent entity as reference, or what is called an “intentional object”. A committed Kantian might insist that reference be divided into phenomenal reference and noumenal reference. A follower of Sir Arthur Eddington might propose a double reference for “chair”: the commonsense chair and the chair of physics. The possibilities are endless, to be considered on their merits; but they should not be rejected simply because of some presumed one-dimensionality in meaning. In the theory of meaning our adage should be, “The more the merrier”. Plurality is a sign that we have not omitted anything, not a symptom of conceptual chaos or indecision.
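The epistemic/modal contrast drawn above has a standard formalization in the possible-worlds tradition (Carnap's intensions; Kaplan's character and content). As a sketch, not part of my own apparatus, with W a set of possible worlds and K a set of contexts of utterance:

```latex
% Standard possible-worlds definitions (Carnap; Kaplan): a sketch for contrast only.
\[
\mathrm{intension}(e) \;=\; \lambda w \in W.\ \mathrm{extension}(e, w)
\]
\[
\mathrm{character}(e) \;=\; \lambda k \in K.\ \mathrm{content}(e, k)
\]
% A proper name such as "Aristotle" has a constant character: the same content in every
% context k. Its Fregean sense, by contrast, is individuated epistemically, by mode of
% presentation and interchangeability in belief contexts; hence the two notions differ in kind.
```

Nothing in the formalization adjudicates between the notions; it merely displays how the modal definition differs in kind from the epistemic one, while both attach to a single name.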

            It may be remarked that the situation in other departments of linguistic theory is already happily pluralist. Consider the theory of syntax, taken to include the study of the sound system of a language. There is no one central concept here to which others must bow down; instead there are layers and dimensions. We can study speech as an acoustic phenomenon (as with a speech spectrograph), or as an articulatory system, or as embodied in the brain, or computationally. None of these competes with the others; all are legitimate and important. Syntax more narrowly conceived is typically understood as consisting of layers of rules, which may be viewed computationally or in terms of brain mechanisms. These are all aspects of the “formal” properties of language, and they all coexist—people don’t go around complaining that someone else’s pet theory isn’t really about syntax. Syntax isn’t one-dimensional. Similarly, in pragmatics there is room for a diversity of perspectives—not a single overarching concept. Thus there is no inconsistency between Gricean, Austinian, and Wittgensteinian approaches to (philosophical) pragmatics: all can be true and illuminating in their different ways. After all, there are many aspects to the employment of language by people, and we should not expect to be able to subsume all of them neatly under a single heading. For example, an utterance of “Shut the door!” may be made with Gricean intentions, while having an Austinian perlocutionary effect, and occurring within a Wittgensteinian language game. Then too, we may approach pragmatics from an individual’s perspective, studying the way language is used as a tool of thought (say), or we can approach it socially, studying how language is used in interpersonal communication. There are indefinitely many possible ways to do pragmatics, as there are multiple ways to do syntax; and there is no reason semantics should be an exception. There are multiple components across the board.
The fact is that the list of concepts I gave represents a variety of insights into meaning on the part of different thinkers, each valuable in its own way, and there is no necessity to reject some in favor of others. I don’t mean to say that no semantic theory can conceivably be false, just that the fault is usually incompleteness, not outright error. Apparent inconsistencies often melt under more tolerant investigation (as with Fregean versus Kaplanian approaches to indexicals). I used to be all in favor of “dual component” semantics, but really we should expand the dimensions dramatically to accommodate everything that characterizes meaning. The concept of meaning is a multi-dimensional concept incorporating a large variety of factors. It is not a simple thing like being square or red; it is more like the concepts of democracy or marriage or success. It contains multitudes.

            Let me return to my political platform, because I was not being entirely frivolous (though mainly so). In ethics there has historically been a tendency towards monolithic theories, as with utilitarianism and Kantian ethics. It was left to more ecumenical ethicists like W.D. Ross to advocate a pluralist reconciliation between these apparently competing systems, thus producing a multi-dimensional ethical system. It is easy to see this development as an integration of different political perspectives—the pure will of the privileged autonomous agent versus the maximization of happiness in a suffering population. In the case of semantics we also have a politically contested domain, because language is spoken by diverse groups of people each with their purposes, positions, and ways of life. It would not be amazing if a certain kind of linguistic hegemony were in effect according to which only certain aspects of meaning are deemed “proper”, the rest consigned to illegitimacy and disdain. Hence we get the idea of the logically perfect language. The messy reality of meaning might not receive its due recognition because of an ingrained habit of favoring some things over others. There is always something evaluative in theories of meaning, as if only a certain dimension is deserving of respect. Why has tone not received the attention it deserves? Could it be that its prime examples are racial slurs and sexist language? Why would people want to explore the expression of their own prejudices and hostilities? Speaking very broadly, there is something democratic about meaning: everyone speaks no matter his or her social class or place in society, and meaning itself combines disparate elements jostling together. Oversimplifying culture from political motives is not so far removed from oversimplifying language from similar motives. The habit of exclusivity is deeply rooted and ubiquitous. 
At the least it can operate as a factor in determining what theoretical options people tend to take seriously. Semantics is political too.  [3]

 

Colin McGinn

  [1] I have no wish to wax psychoanalytic, but isn’t the notion of reference suspiciously phallic (at least as phallic as some of Freud’s phallic symbols)? It seems to involve a kind of mental protrusion, as the act of reference extends outward to make contact with objects in the environment. People sometimes talk of reference as like tentacles reaching out to grasp, but other organs of the body can reach out and make contact too. And what about pointing? The pointing finger has a rigidity and angle not unlike… And then there is “rigid designation”, a phrase that trips suspiciously easily off the tongue. Just saying.

  [2] Light can appear homogeneous, but the rainbow resolves it into an array of separable hues. Meaning can seem homogeneous too until we resolve it into its components.

  [3] For all I know intellectual traditions from beyond the West have suggested aspects of meaning Western thinkers have missed. If so, I cordially invite them in.  


Muddy Waters


Causation is one of those philosophical topics that drive you up the wall. As soon as you start to think about it you draw a complete blank. As Hume observes: “There are no ideas, which occur in metaphysics, more obscure or uncertain, than those of power, force, energy, or necessary connexion” (Enquiry, p.45).  [1] The cement of the universe is so much muddy water. Of course philosophers have done everything in their power to hide this fact from themselves, even going so far as to try to reduce causality to mere regularity. Hume’s own view was that causation is real but incomprehensible (by us). It is neither an affair of the senses nor of reason: we have no sensory impression of necessary connection (which is definitive of causation) but neither is causation grasped by reason (like logic, arithmetic, and geometry). It is a real relation between things but it is not revealed by perceiving them or by merely thinking about them. It fits neither empiricism nor rationalism. It sits uncomfortably between the two, awkwardly and inscrutably. No matter how much you gaze at an object or reflect on it you will never discover causation (as it exists in that object). But perception and reason are our only faculties of knowledge, so the mind draws a blank on the nature of causation. Yet we constantly refer to it, rely on it, and assume its reality. Evidently we can know that it obtains, relying on the observation of regularities of nature, but we can’t fathom its inner nature—or even fathom our lack of fathoming.

            The problem concerns not just particular causal relations but also the notions of law and power (disposition, capacity, potential). Objects fall under causal laws and have causal powers: this is why they have the effects they have. But laws and powers are at least as inscrutable as causal relations between particulars; this is an interconnected knot of problems. Perhaps the notion of power concentrates the problem most acutely (as Hume intimates): how are the potential effects of a cause contained in it? Are the effects somehow already present in the cause? Does the cause “refer” to the effects? Are there shadows or signs of the effects lurking silently in the cause? But you can’t discern anything like this if you examine the cause, even going down to its elementary constituents. When a moving object imparts motion to another object by collision is the other object’s motion somehow prefigured in the moving object? It had the potential of creating that effect, so doesn’t it already contain it in some way? What is potential? It’s a bit like the way the meaning of a word “contains” its uses: they are implicit in it, packed into it—but what does this way of speaking amount to?  [2] The problem of causation is how an object can contain what it does not contain. If we think of a cause as a conscious being for a moment, it is as if it knows what effects it can bring about, but only unconsciously: it doesn’t have these effects before its consciousness, but it is subliminally aware of them—they are implicitly known, not explicitly known. But that looks like a dodge: they aren’t anticipated in any way that we can discern—the mind of a cause is blank about its future effects. Yet it has the power to bring precisely these effects about, and this power is internal to it, so… Thus the waters fill with mud.

            What can we say positively about causation? The logic, semantics, and conceptual analysis of “cause” are not so baffling. Thus “x caused y” expresses a relation that is irreflexive, asymmetric, and transitive: nothing can cause itself, effects can’t cause causes, and the effects of effects are caused by the initial cause. Semantically, it is plausible to suggest that “cause” generates a transparent context and expresses a relation between events (though this view is not without its critics). The concept may also be analyzable in terms of counterfactuals or other necessary and sufficient conditions. So it is not that we can say nothing about the word and what it means—and much philosophical energy has been expended on these worthy tasks. But they don’t touch the underlying metaphysical and epistemological questions, the ones so memorably raised by Hume. What exactly is causation, and how do we know about it? Specifically, what is it to have a causal power, and how can we know causal powers? There are suggestions—there are always suggestions. One suggestion is that a causal power is identical to a structural property of the object, as it might be molecular structure. But this just pushes the question back: how is the power present in the structure? Isn’t it as invisible as ever? Nor can it be excogitated by pure reason. It can’t be seen and it can’t be deduced—it flouts both empiricist and rationalist epistemology. It isn’t a posteriori and it isn’t a priori. It is a peculiar kind of fact, being neither perceptible (even by extended perception: microscopes etc.) nor rationally apprehended. As Hume would say, it is neither a “matter of fact” nor a “relation of ideas”; it hovers ambiguously between the two. No wonder there has been a marked tendency towards elimination: causation must either be reduced to facts less problematic (regularities, dispositions to project) or eliminated outright. 
To accept it as it is runs into insurmountable metaphysical and epistemological difficulties. Indeed, it threatens to bring down the most fundamental structures of philosophical thought.
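The structural properties of the causal relation listed above can be set out compactly. As a sketch, writing C(x, y) for “x caused y”:

```latex
% Formal properties of the causal relation C, as stated in the text.
\[
\forall x\, \neg C(x, x) \quad \text{(irreflexivity: nothing can cause itself)}
\]
\[
\forall x \forall y\, \bigl( C(x, y) \rightarrow \neg C(y, x) \bigr) \quad \text{(asymmetry: effects cannot cause their causes)}
\]
\[
\forall x \forall y \forall z\, \bigl( C(x, y) \wedge C(y, z) \rightarrow C(x, z) \bigr) \quad \text{(transitivity: effects of effects are caused by the initial cause)}
\]
% Note: asymmetry already entails irreflexivity (put y = x), so the first clause is strictly redundant.
```

That so much of the logic of “cause” can be stated this crisply only sharpens the contrast with the metaphysics, which remains as muddy as ever.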

            So why not just bring them down? Because we have nothing to put in their place, that’s why. It is not as if we have some other way to think about causation that we can substitute for the old inadequate dichotomies: the waters are thick with mud and our vision fails us. It is really a horrible problem. Best not even to go near it; just leave it alone to fester. But maybe we can articulate the problem better, gain a better sense of its dimensions and density. Can we at least pinpoint why it is so difficult? It is in some ways worse than the problem of consciousness, because in that case at least we know what we are talking about—we don’t just refer to consciousness, we experience it. But we don’t experience causation (as opposed to its symptoms), despite our readiness to refer to it. We refer to a we-know-not-what. Appealing to mental causation won’t help, despite our immediate acquaintance with the mental phenomena between which causation holds: for mental causation is as opaque as physical causation (as Hume noted). We can say that physical causation is no less active than mental causation; the will is not somehow a livelier form of causation. Nor is causation by physical contact more transparent than causation-at-a-distance, since its operation is as obscure there as it is in the remote case. In this respect old-style mechanism offers an illusory paradigm of transparency (this was Hume’s central insight, in effect): it isn’t that causation by contact is quite clearly grasped while causation-at-a-distance must be deemed “occult”. Neither is really intelligible to us, not when you get right down to it. Hume’s billiard balls hit each other, unlike orbiting planets, but their causal powers are no more evident to sense or reason than gravity. For some people this was taken as a reason to eliminate causation altogether from physics, and one can appreciate the motivation.

            Can we be more constructive? I think we can say two positive things, though nervously. The first is that nature must be more tightly interlinked than we tend to suppose going by the appearances: causation connects things because any effect of a cause must be somehow written into the cause (though not in a way we clearly conceive). The colliding billiard balls don’t appear to sense perception as having any intelligible connection; nor can human reason discern any such connection: but they must somehow be intelligibly linked. The laws of nature essentially relate separate things, because causal powers are essentially powers to bring about certain specific effects: an object x has the power to make an object y have a property P. There is thus more “holism” at work in nature than is apparent to our epistemic faculties. We could introduce the idea of the “causal boundary” of an object to signify the class of objects that fall within its causal reach—for instance, the class of solids a given liquid can dissolve. This class falls within its causal boundary but not its spatial boundary. Then nature will be said to consist of the totality of such causal boundaries—these are the true units of nature.

The second thing we can say is that whatever causal powers are they must be very different from their manifestations in observable phenomena. This is because the manifestations never add up to a causal power, as it exists in objects. It can’t be mere regularity and it can’t be a “categorical base” (e.g. molecular structure)—these are not what the power is or else we would know what it is. Powers must be as different from their manifestations as mental states are from behavior—perhaps more so. Potentiality must be different from actuality; yet the two must be intimately related. I can’t tell you how potentiality differs from actuality because of its obscurity, but it evidently does differ, dramatically so. (Or else actuality is merely the way potentiality looks to our senses and doesn’t go deep ontologically speaking.  [3])

            Would other things become clearer if we had a better grip on causation? Anything in which causation directly figures would be—laws of nature, the origin of the universe, the operation of fundamental particles. But it might also help with the mind-body problem and the free will problem: How is the mind caused by the brain? How are free actions caused? Certainly this is an enormous gap in our understanding of nature, what with the ubiquity of causation, but a gap there seems little prospect of filling. The water may remain forever muddy.  [4]

 

  [1] I will be accepting the “skeptical realist” view of Hume in what follows, according to which causal powers are real existences that defy our limited understanding, not the positivist interpretation of Hume according to which the concept of causal power is incoherent and should be rejected.

  [2] I am alluding to Wittgenstein’s discussion of meaning and use in Philosophical Investigations. He explicitly connects this issue to that of causal powers in the sections on machines (193, 194).

  [3] Such a view is suggested by the thesis that all properties consist of causal powers. Then there is nothing to nature but powers.

  [4] This essay is intended to reflect the inadequacy of our understanding of causation, containing little in the way of genuine illumination. But perhaps it serves to scratch the depths.


Moral Subjectivism Defeated


Moral subjectivism claims that what we think of as moral values reduce to moral beliefs: things are wrong because we believe they are wrong. It is not that we have moral beliefs because of moral facts, which may be cited to justify the belief; rather, the so-called moral facts are just our moral beliefs. To say that murder is wrong is to say that we believe it is wrong. In the case of a solitary individual the values he accepts are simply what his value beliefs happen to be. There cannot be any divergence between moral facts and moral beliefs, since moral facts are moral beliefs. But this position faces the following question: how does the moral agent set about justifying his moral beliefs? If you ask a moral objectivist what justifies his moral beliefs, he will answer by citing a moral fact—say, that murder is wrong. But if you ask a subjectivist the same question, he has no resource other than to say that he believes a moral proposition. He believes it because…he believes it. But that is no justification: a belief cannot be justified by itself. It must appeal to something other than itself as justification—it can’t be its own reason. So a consistent subjectivist has to abandon moral beliefs, perhaps leaving only “gut feelings” that don’t call for justification. This doesn’t imply that objectivism is true, only that it must be taken as true even by an avowed subjectivist—unless moral beliefs are abandoned. Moral belief logically requires commitment to moral objectivism, i.e. the denial that values reduce to beliefs. There must be more to morality than moral beliefs on pain of excluding justification for those beliefs, and hence abandoning the beliefs. Don’t say the beliefs are basic and require no justification, because subjectivism implies that they do have a justification—themselves. But beliefs can never justify themselves: it is never a justification for a belief to report that one has the belief.
The justification must be something logically separate and not identical to the belief itself. Thus moral subjectivism is self-defeating.

 


Moral Minimalism


I shall explore the prospects for a minimalist theory of normative ethics. By “minimalist” I mean a theory (analogous to minimalism in linguistics) that seeks to base normative ethics on the most exiguous of foundations, viz. a single moral principle, with other aspects of the ethical life consigned to something extraneous to morality strictly conceived. The moral principle in question is exceedingly familiar: DO NO HARM. That is all that morality contains, according to the minimalism I envisage, neither more nor less. The only moral principle is the injunction not to do harm. Usually this principle is included in a total utilitarian package: Do no harm and maximize wellbeing (welfare, the good, happiness, pleasure). I propose to drop the second conjunct so that morality only prescribes the avoidance of harm. Clearly the two conjuncts are logically independent, though the second is generally taken to include the first: if our aim is to maximize wellbeing, it should surely include minimizing harm. But we may live in a possible world in which there is no harm to be undone or produced, yet still we are subject to an injunction to maximize wellbeing—we must increase the level of wellbeing even if there is no suffering to be eliminated and none that can be produced (this is a world of harm-proof people). More obviously, one could accept the injunction not to harm while rejecting the injunction to promote wellbeing: I mustn’t harm anyone, but I have no duty positively to improve anyone’s lot. For example, I must not strike an innocent man for no reason, but I am under no obligation to make him happier than he already is. So I propose dropping the second injunction while insisting on the first. I call this position “disutilitarianism” because it emphasizes the avoidance of disutility not the production of utility. It is a negative prohibition: it says what we must not do not what we must do. 
We must not cause harm, though we have no duty to cause its opposite (if it has a real opposite)—we have no duty to maximize the general good, or even to produce it in a particular case. There is a duty against maleficence, but no duty of beneficence.

            Let me immediately address a natural objection, namely that it is clearly morally praiseworthy to promote the good. I don’t disagree, though there are notorious cases in which promoting the good is not the morally right thing to do (the bane of utilitarianism); but I would distinguish between what morality requires and what it is admirable to do. It certainly shows the virtue of generosity to help the poor and needy, but that is not the same as saying that this is a moral duty. It may just be supererogatory. We have a duty not to harm, but we have no comparable duty to make people happier—though it might be virtuous so to do. I will come back to this point, noting now only that moral minimalism does not preclude acting virtuously in promoting wellbeing; it claims only that this is not part of morality in the strict sense. We might even say that not causing harm isn’t a virtue at all, being merely our most basic moral obligation—there is nothing virtuous in declining to strike an innocent man for no reason. Duty and virtue are separate domains.

            A main reason for advocating moral minimalism as against full-blown utilitarianism is that the stronger doctrine runs into well-known problems. I won’t rehearse these problems, but they concern considerations of justice and the problem of moral inflation, whereby we turn out to be the moral equivalent of murderers by not helping starving people in distant lands to the point of self-impoverishment. What is crucial, I think, is that there is a deep asymmetry between harming and benefitting: we have an absolute duty not to do the former, but the latter is optional. Partly this is because of the difference between pain and suffering, on the one hand, and happiness and wellbeing, on the other: the former are clearly defined and obviously bad, while the latter are amorphous and not invariably good (e.g. the pleasure-loving happy sadist). The dentist must do his best to avoid hurting you, but he is under no obligation to make you feel happier when you leave his office than you were when you came in—and what exactly would that be? He knows how to avoid harming you, but he may have no idea what would make you happier (a joke, a donation, a pat on the back?). So the harm principle has a different deontic status from the benefit principle. This is of course exactly how we operate in daily life: you avoid stepping on people’s toes as you walk down the street, but you don’t try to cheer everyone up as you pass them by. They will blame you for hurting them, but not for failing to improve their mood. They may think that that is none of your concern, while avoiding crushing their toes indubitably is. So we can say that the harm principle has a greater hold on us than the benefit principle; I propose accordingly that we restrict morality to the harm principle.  [1]

            It is a significant fact that all the standard rules favored by the deontologist can be seen to stem from the rule against causing harm. Breaking promises, lying, stealing, assaulting, murdering, acting unjustly—all involve causing harm to others. These rules are prohibitions designed to minimize suffering, ranging from disappointment to physical agony. None of them reflects the utilitarian’s insistence that we should maximize wellbeing—as if by sitting at home doing nothing we have committed grave evils. Of course, it is possible to harm by omission—and that is equally proscribed by the harm principle. You can fail to save someone from being hit by a car, so that your omission harms him or her. But doing nothing to make people happier is not ipso facto a form of indirect harm. We can’t somehow squeeze beneficence in under non-maleficence. The usual rules of morality concern things we are not to do (“Thou shalt not…”) and they all concern the harms that result from doing these things. Bringing each of these specific rules under the harm principle effects a major simplification, making moral thinking easier to manage and sharper in focus. All we really need to remember—all we need to know—is that it is wrong to cause harm. Whenever you are faced by a difficult moral choice you need only ask yourself what action will cause the least harm and then do that. For instance, you should not break a promise to meet A because meeting B instead will increase the total level of happiness in the world; you should avoid harming A by leaving him hanging (maybe suggesting to B that she finds something else to do). It is no small advantage to morality that it should be codified in a single easily remembered slogan. Children need to be instructed in it, and many adults have no aptitude for moral complexity, so keep it simple.

            Can you harm someone in order to benefit him later? If so, there is no absolute ban on causing harm. Here we need to distinguish two cases: causing harm now to prevent greater harm later, and causing harm now in order to increase happiness later. The dentist drills the tooth now in order to prevent the pain of later toothache, so she is minimizing pain in the long run: that is morally acceptable and in accordance with the harm principle. But it is another thing entirely to try to justify causing harm now by citing future benefits that don’t involve harm minimization—as it might be, applying the rod to the child in the expectation that she will grow to be happier than she would be otherwise. This is far from obviously acceptable and it gains no support from the harm principle, which speaks only of minimizing harm not maximizing happiness. Omitting to do something harmful today can cause greater harm tomorrow, and is therefore morally proscribed; but omitting to do something harmful today that will result in less overall happiness in the future is not to be morally condemned (except by the rigid utilitarian). Even if beating children is known to make them happier in later life, that is no ground for beating them—though if it will save them from excruciating suffering later, then it should be done (however reluctantly). We must always seek to minimize harm, even if harm is necessary to bring that about; but harm can’t be justified by considerations of overall utility, as if pain now is made up for by elation later (as opposed to mere contentment).

            It is important to minimalism to distinguish what it is good for a person to do from what it is morally obligatory for a person to do. Minimalism is only a theory of the latter; it is neutral on the broader question of virtuous or admirable conduct. Living a good life includes acting generously and kindly, even if no harm is reduced thereby. That may seem to leave a lot of moral life outside the scope of the minimalist theory, but in fact it covers more than might be supposed. For much generosity and kindness involve the avoidance of suffering not merely the production of utility. You can harm someone by not being concerned about his or her welfare, as when you callously decline to give food to a starving person. But not all generosity is like that, as with the generous host: she is not avoiding harming her guests by laying on a great feast, but rather adding to her guests’ enjoyment. That is what is not morally required—increasing other people’s happiness. By not voting for tax increases to help the poor you may be harming them indirectly and by omission, so this falls under moral criticism; what does not invite moral criticism is declining to share your resources with people already amply resourced. So quite a lot falls under the prohibition against causing harm, not merely refraining from attacking people directly (animals too). Someone might be exceptionally generous with his friends, by always treating them to fancy dinners and the like; that may be commendable, but it is not morally obligatory. This is a distinction well worth preserving, and it is a virtue of minimalism that it makes the distinction firmly (unlike classical utilitarianism). Much virtuous behavior is discretionary, but moral behavior never is—it is strictly obligatory. Being a miser may not be admirable, but it is in a different category from being a sadist. The paradigm of the immoral act is maiming someone, not providing a thrifty meal instead of a lavish one.

            Is the anti-harm theory deontological or consequentialist? You can take it either way, either as a moral rule or as a statement about consequences. That is, you can say that an action is right if and only if it actually minimizes harm, or you can say that the agent must always intend to minimize harm and that this is what makes it right not the actual consequences. I prefer to think of it as an absolute general rule with a number of sub-rules as special cases (such as “Don’t break promises”), but clearly the consequences are crucial in justifying the rule—pain and suffering being bad things in themselves.

            I would emphasize the formal merits of the minimalist theory. It is simple, clear, manageable, and practicable. It is intuitively compelling and scarcely controversial in its recommendations (unlike utilitarianism). Its only questionable claim is that there is nothing more to morality than what it includes; but this is mitigated by the distinction between morality proper and what counts as virtuous conduct. It combines the best of deontology and consequentialism. It is what you would expect of a moral system that is designed to help people live together in close proximity. It is non-paternalist. It doesn’t seek to meddle in other people’s lives, as the prescription to make everyone as happy as possible does.  It has a pleasing homogeneity. It is readily universalized. It does not attempt to combine disparate ideas (as in W.D. Ross’s mixed theory). It is easily teachable. It does not call for extremes of altruism and intolerable guilt over never doing enough. It takes what is good in utilitarianism and discards what is bad. The disutilitarian is a realistic, clear-eyed, compassionate, commonsense type of fellow, mainly concerned to prevent pain and suffering. Everything else is icing on the cake. If he can prevent us from harming each other (animals included), he thinks he has done his moral duty. What we choose positively to do, as a matter of personal virtue, is our own affair and of no concern to morality as such.  [2]

 

Colin McGinn

  [1] A further asymmetry is this: the harm principle applies impartially to intimates and strangers, but the benefit principle applies differentially according to personal distance (at least according to common morality). The duty not to harm applies to everyone equally, but it is morally permissible to benefit members of your own family over others. This suggests that the harm principle is part of non-negotiable moral law, while the benefit principle operates according to personal discretion.

  [2] The disutilitarian might well contend (echoing Nietzsche) that morality since the advent of Christianity has indulged in a kind of duty-creep whereby virtuous behavior has been converted into a species of strict moral duty. Thus Jesus urges us to give to the poor and needy (defined relatively) and his followers have interpreted this as an extension of our moral duties. But that is not necessarily the right way to interpret the words of Jesus: he is not assimilating charity to the deontic level of non-violence, merely suggesting that we cultivate the virtue of generosity and not content ourselves with the mere observance of our strict moral duty. Perhaps under the influence of Christian ethics, as it came to develop, utilitarian ethics made a virtue of blurring the line between moral duty and personal virtue, thus assimilating the demerit of not being charitable to the demerit of violently assaulting people. That was a conceptual error and one the minimalist is anxious to remedy.


Moral Excess


God is said to be morally perfect. According to one interpretation, this means that God seeks to maximize the good—he is committed to making this the best of all possible worlds. Of course, that does not appear to be the case (pace Leibniz), thus producing the problem of evil. I want to put that problem aside and focus on the definition of moral perfection as maximizing the good. What does it mean exactly?

            Suppose that the basic goods are of three kinds: happiness, knowledge, and aesthetic appreciation. Then God’s obligation is held to be maximizing these goods, making sure they could not be improved upon. People must be happy, knowledgeable, and aesthetically appreciative. That sounds reasonable, but how far must God go to ensure that these goods are maximized? Suppose Anne is a happy person by any normal standards; however, she occasionally has a distressing thought or a feeling of mild remorse. Since she is not maximally happy, God regards it as his duty to step in and improve her state of mind, blocking such thoughts and feelings. Put aside issues of interference and paternalism: do we think that God is under any obligation to improve Anne’s mood in these ways? Are you under any such obligation with respect to people you know? Must every discomfort be removed, every desire sated, every thought be made a happy thought? Surely not: that would be morally excessive. Should you feel guilty about not doing everything you can to remove every hint or smidgen of unhappiness from the world? No—and neither should God feel obliged to maximize happiness to such a degree.

            Or consider knowledge: should that be maximized? Suppose Jean is a very knowledgeable person, well versed in history, science, philosophy, and so on. We would normally think that we have no educational duties with respect to Jean. But Jean doesn’t know everything about everything; there is a lot that she doesn’t know. Is God obliged to step in to rectify these lacks, thus maximizing the good of knowledge? Is he letting Jean down if he doesn’t immediately install a full knowledge of botany? Maybe she would value such additional knowledge, but is God failing in his moral duty by not ensuring that Jean knows these extra things? Again, that seems excessive: there is no general duty to maximize knowledge—to make it as extensive as possible.

            Similarly for aesthetic appreciation: must it be maximized? Linda is a keen follower of the arts, cultivated and open-minded, as appreciative as anyone you know. But she doesn’t appreciate everything; she fails, say, to see the point of certain painting styles. That may be a limitation on her part, but the question is whether God has a duty to remedy this lack. If he does nothing, is he under suspicion of non-existence, granted that he is morally perfect by definition? Can God be criticized for not ensuring that Linda appreciates every work of art to the maximum? Surely that would be excessive, even if it would not be excessive for God to ensure that she has some aesthetic appreciation. That is, God has no duty to make Linda into the most aesthetically appreciative person conceivable—just as he has no duty to make Anne and Jean into the happiest and most knowledgeable people conceivable. He could achieve these things, but it would be excessive. It looks more like a form of moral obsession than a sensible moral outlook, like making sure not one speck of dirt remains on the kitchen floor.

            What should we conclude from this? The first thing is that there is such a thing as moral excess. This is not the same thing as acting in a supererogatory manner: that is not a form of moral excess, just a commendable wish to go beyond the call of strict duty. Moral excess is a kind of mistake, not a desirable trait. It is a miscalculation about what one should do. This means that any moral theory that recommends such excess is wrong about the nature of obligation and right action. It is just not true that we have an obligation to maximize the good—though we may well have a duty to bring about a certain amount of good. Ideal consequentialism is therefore false. Specifically, we don’t have a duty to rectify trivial suffering, especially of a normal human kind—such as the odd melancholy thought or a minor twinge or a little throb of lust. Nor does anyone have a duty to educate everybody in everything, or work assiduously at improving everyone’s appreciation of art no matter how much of an aesthete they already are. Such general prescriptions are just far too general and onerous; what is needed is a more qualified principle—such as that people should be made moderately happy, fairly well educated, and not devoid of aesthetic appreciation. The other thing we should conclude is that if God is defined using the very strong principle, then God does not exist. I say this not because of the problem of (mild) evil but because moral excess is not an admirable quality, and God must be admirable in every way. If God thought that he was obliged to maximize the good in the very strong way, then I would think he was in error and didn’t understand the nature of moral obligation. But then he would not be a perfect being and hence not be God. Someone who can’t see that extreme pain imposes a duty to help is morally deficient, but someone who thinks that he is obliged to attend to every little discomfort is morally excessive. What if he felt this obligation intensely, berating himself for failures to carry out his moral duty? This is not sainthood but a form of madness. Moral excess is not a way of being moral.

 


Moral Distance


We tend to think that our moral obligations fall off with distance: the closer someone is to us the greater is his moral claim on us, and the further away the less. Morality operates like gravity—it weakens with distance. True, morality is an expanding circle, but it is also a diminishing sphere. At the outer limits it hardly gets a grip at all. It would be a mistake to interpret this notion of distance as mere spatial separation, though that seems to be one component of it. In addition to distance in space there is also distance in time: the more temporally remote some future person is the less hold she seems to have on us morally. What obligations do I have to people in the 30th century? If I have any, they are not so strong as the obligations I have to people now. Whether this is rational or morally justifiable is a debatable question, but as a descriptive truth about our moral attitudes it is surely correct. Further, there is the dimension of personal contact or emotional proximity: the more intimate my relationship with someone the greater the obligation I feel. This applies to family, spouse, friends, colleagues, and so on. It may be that spatiotemporal distance is really just correlated with this dimension, which is the underlying factor; relationship-distance is the main consideration. We might also add psychological similarity: we tend to regard beings similar to us psychologically as deserving more of our moral concern—humanlike, mammalian, warm-blooded, non-alien. Thus our contemporaneous close relatives have a stronger claim on us than a jellyfish-type creature living in a remote galaxy two million years from now. Moral distance is multi-factorial and complex not just a matter of physical miles. It introduces degrees of obligation into moral duty instead of just the all-or-nothing binary opposition of duty and non-duty. It also introduces uncertainty and messiness into our moral calculations.

                It is helpful to picture the diminishing moral sphere as follows. At the center lies the ever-present self: this is the being minimally distant from the moral agent (they are identical) and it has a uniquely strong influence over our decisions. Given that prudence is also moral concern for one sentient being among others—I am a valuable being just like everyone else—we can think of prudence as the basic case of moral obligation. I am obliged to be concerned about my own interests and I am extremely close to myself. I am the center of the sphere of my concerns and others radiate outwards from me. The next closest being is then a matter of individual variation: it could be my spouse or my parents or my children or my friends, depending on circumstances. Then we get to the much more extensive circle of my general acquaintance. After that we have members of my local community perhaps; then other countries; then other species; then the next generation; then more remote generations; then beings in other galaxies; finally completely alien life-forms in remote regions of space millennia hence. My own interests come first (other things being equal and given a degree of selfishness) and then the interests of others according to their place in the sphere. This is the whole sphere of my moral obligations, and it varies in degree of demandingness. Most obviously, there is variation in the strength with which I am obliged to reduce or prevent suffering.

            Imagine if someone inverted this ordering of moral priorities: she treats the more remote beings as having a stronger hold on her moral concern. Creatures in the distant future on remote planets that are psychologically dissimilar to her occupy the center of her moral universe, while family and friends have merely marginal moral interest for her (she might even regard herself as morally negligible). That would certainly strike us as bizarre, insane even, but it is not easy to see how we could persuade her that it is irrational or immoral (she might point out that they are suffering sentient beings too, equally deserving of respect and care). But by the same token it is hard to see how we could be deemed irrational for our ordering. Nor does it seem justifiable to insist that only equal consideration is rational or moral, so that we must treat spatiotemporally remote beings as morally interchangeable with our nearest and dearest. In fact, the distribution that seems the most natural is precisely the one that we adopt—despite the fact that no obvious foundation for it can be produced. Perhaps it is just psychologically necessary for human beings or other evolved creatures, or even for all beings with emotions directed at others: no other moral psychology is feasible given the basic nature of sentient beings. Ought implies can, so there is no point in reprimanding us for favoring the more proximate beings. Not that we can or should have no concern for the remote and alien, but it must of psychological necessity be diluted and relatively undemanding. If this means that we cannot occupy an entirely impartial and objective moral perspective, then so be it; at least the perspective we have is workable and not too destructive or callous. Brain surgery that changed our moral psychology so as not to discriminate against the distant and different might totally wreck everything that makes human life worthwhile, or even possible. How could you marry someone who systematically favored the remote over the proximate? What would happen to loyalty, trust, solidarity, etc.? What would happen to family life if parents treated every child in the world as deserving the same care and attention as their own?

            It is not an easy matter deciding how robustly moral obligation extends to the distant objects of possible concern. Morality has not evolved with these quandaries in mind. For instance, we have not had to think about how our actions will affect the wellbeing of people in future generations, as in climate change; nor did our ancestors put much thought into our obligations towards animals. I don’t want to argue that our current distribution of moral concern is correct and beyond reproach, only that it is not irrational to treat distance (in the multi-factorial sense) as morally relevant (certainly we should be careful about trying to reconfigure our psychology to adopt a more impartial point of view). I would not, for example, be happy to see support for foreign aid curtailed in favor of a supposed more pressing need on the part of future generations, or the political plight of the Venusians. This is also not a point about favoring humans over non-humans: I am all for treating our own animals as having moral priority over more distant animals, because this exhibits the kind of relational closeness that confers moral priority (though I also think remote animals do deserve some moral consideration). Some sort of moral ordering seems inescapable, but whether we have it right now is another question. What we should not do is try to motivate concern for our fellow man (and other animals) by appeal to some perfectly general principle banning all forms of moral distancing, as if every sentient being in the universe had an exactly equal claim over us.  [1] Things are more nuanced than that, and more difficult to resolve.

 

  [1] People sometimes say that we should try to occupy a God’s-eye view of creation, morally speaking, treating all sentient beings equally. But God does not exist in space and time, and he has no selective emotional relations with human beings and other animals. We do, and it is folly to try to make us take up a Godlike moral perspective. We can take this perspective into account, but we shouldn’t be governed by it, on pain of possible psychological collapse. In any case, there is no demonstration that the diminishing sphere that we habitually operate with is irrational or immoral.


Memory Illusions

 

In Jane Austen’s Mansfield Park, Fanny Price, a thoughtful and unassuming young woman, makes the following observations to a certain Miss Crawford: “If any one faculty of our nature may be called more wonderful than the rest, I do believe it is memory. There seems something more speakingly incomprehensible in the powers, the failures, the inequalities of memory, than in any other of our intelligences. The memory is sometimes so retentive, so serviceable, so obedient—at others, so bewildered and so weak—and at others again, so tyrannic, so beyond control!—We are to be sure a miracle every way—but our powers of recollecting and forgetting, do seem peculiarly past finding out.” (538) The author then reports Miss Crawford’s reaction: “Miss Crawford untouched and inattentive, had nothing to say; and Fanny perceiving it, brought back her own mind to what she thought must interest.” (539) Here we see Jane Austen at her most philosophical and, gratifyingly, mysterian. She clearly finds memory fascinating, but also “peculiarly past finding out”. She also realizes that such questions are not for the shallow of mind. The philosophical reader duly warms to Miss Price, who is already very much in our good books on account of her virtue and modesty (in her inventor we see yet another sign of genius).

            Austen is comparing memory to our other “intelligences”, presumably including perception and thought. Her observation is that memory is more arbitrary in its powers than they are. To be sure, perception and thought have their failures and breakdowns—they are subject to error—but these weaknesses are fairly regular and predictable, while the powers of memory vary for no discernible reason. Sometimes we cannot for the life of us remember what we want to, but at other times memory pursues us relentlessly, refusing to relinquish its contents no matter how we may feel about them. Perception is not subject to the will, while thought is, but memory is partially subject to the will—sometimes under our control and sometimes very much not. The reasons for this are obscure: why do we so vividly remember some things, often seemingly insignificant, while others slip too easily from memory? I cannot ever choose what to see, while I can always choose what to think about, but I have partial choice when it comes to remembering things. Thus memory seems poised somewhere between perception and thought. It is quasi-perceptual and quasi-cognitive.

            Then there is the question of its scope and limits. Perception is limited to the present moment and the impinging environment, though it handles an immense amount of information simultaneously. Memory has a broader scope, taking in large tracts of the past and being relatively independent of time of occurrence (you can often remember your childhood more vividly than last year). Perceptions rapidly come and go, while memories can linger indefinitely. Thought ranges more widely still, taking in the future as well as the past and present, and including things not perceptible at all (you can think about atoms but you can’t remember what they did, not directly anyway). So memory falls between these two poles. My question is whether it is subject to illusions. There are clearly perceptual illusions, while thought is not vulnerable to illusions (though error is commonplace), but what about memory? Does memory sometimes give rise to illusions of the past? Errors, certainly, but are there also actual illusions? I don’t mean memories of past perceptual illusions, like remembering that Müller-Lyer illusion you saw yesterday; I mean specifically memory illusions, where a memory impression misrepresents a past event. If so, what are these illusions, what types do they fall into, what laws govern them? Are they analogous to perceptual illusions or are they sui generis? The question is not an easy one, despite its simplicity. On the one hand, memory has a sensory, particularly visual, dimension: we have sensory image-like impressions of the past. These can be inaccurate, as when one remembers a face with the wrong color eyes or a page with a word misplaced. It is tempting to describe this phenomenon as a visual illusion of the past not merely as a false belief. One can also have a complete hallucination of the past, as when one appears to recall a past event in full vividness that simply didn’t happen. These seem rather like mirage cases or seeing two equal lines as different in length: a sensory misrepresentation of the world, but relating to the past state of the world. The memory system has delivered up a mental representation that fails to fit the “stimulus”. False beliefs may be formed, as in the perceptual case.

Yet, on the other hand, what we don’t find are reliable, predictable illusions generated by certain types of configurations of objects—as in the moon illusion or the Müller-Lyer illusion. It is not that whenever one remembers the moon it always seems larger than it really is, or that memories of adjacent parallel lines always present them as unequal in length. Memory “illusions” are haphazard, unsystematic, and unrelated to the properties of the stimulus; they are more like errors of belief in this respect. There are apparently no laws of memory illusion analogous to the laws of perceptual illusion (faces with brown eyes don’t always produce memories of blue eyes).  [1] Moreover, memory errors are correctable in the light of contradictory knowledge, unlike perceptual errors, which are incorrigible in the light of true belief.  [2] Thus the moon illusion and the Müller-Lyer illusion persist even when one knows quite well that the facts are otherwise than they appear, but memory impressions are permeable by extraneous knowledge—you will not keep on remembering things a certain way once you have been enlightened (at least there is the possibility of a change of impression, unlike in the perceptual case). This makes sense, given that memory lies somewhere between the perceptual systems and the central cognitive system. Memory thus seems vulnerable to something like perceptual illusion but also unlike it. It is neither fully one thing nor the other (as Jane Austen intimates).

            I think, then, that there is no clear answer to my question, because memory provides a counterexample to the dichotomy between perceptual and cognitive error. Perhaps we can say that it gives rise to quasi-illusions, where the qualifier simply indicates being betwixt and between. In other terminology, memory is neither an encapsulated module nor a general-purpose ratiocinator. On balance, I would say that it is not susceptible to illusion in the strict sense, but that it does give rise to sensory misrepresentation as a matter of course. In fact, it is more prone to sensory error than the senses themselves, being less governed by psychophysical laws; but these errors are not as rooted in the architecture of the system as perceptual illusions. The senses systematically act in ways that can defy our reasoned view of things, as autonomous informational agents; but memory is not so cut off from reason, not so independent of cognition in its operations. It is puzzling and counterintuitive, quite exceptional in our psychological economy; whether it is “peculiarly past finding out” is another question, but not one to dismiss. It is noteworthy how little philosophical reflection has been devoted to it in comparison with perception and thought.  [3] As Miss Price remarks, no doubt we are all “miracles”, but memory seems especially impenetrable even in its most quotidian operations.

 

  [1] The closest thing I can think of concerns memory impressions of elapsed time. We remember a busy time as passing quickly while a boring time is remembered as passing slowly, and this seems somewhat lawlike. But even here the illusion has its source in the contemporaneous perception of time in the two cases not in an inherent tendency to illusion in memory as such.

  [2] See Jerry Fodor, The Modularity of Mind.

  [3] In contrast, memory has been a staple of psychology since its inception, with much experimentation and theory devoted to it. I don’t recall ever hearing the question of memory illusions discussed, though failures of memory are routinely studied. Psychologists recognize how perplexing memory is, though I have not heard of one who takes Austen’s mysterian line.


Limits of Reality


We speak of the limits of human knowledge or reason or our conceptual scheme, but we don’t tend to talk about the limits of reality. Knowledge is limited relative to reality—it is not as extensive as reality—but reality cannot fail to be as extensive as itself. But it may be less extensive than some conceivable alternative, so that there is at least a sense to the question. Is there some type of reality that is less limited than our reality? Or is our reality—actual reality—unlimited, perhaps essentially so? This seems like an interesting metaphysical question, though not one I have seen asked. People have certainly discussed the question of the infinity of the universe (Spinoza), but that is not the question I have in mind. We can agree that numbers, space and time are infinite, and therefore limitless, but a universe containing only one of these things—or all of them—would still be limited in the sense I intend, because there would be many perfectly conceivable things that would not exist in such a universe. The series of numbers has no limit, but a world of only numbers would be vastly more limited than our world. An infinite world can be a sharply limited world—an ontologically impoverished world. Intuitively, we are speaking here of how many kinds of thing there are; and the question is whether our world is maximal in this respect.

            Suppose we think that reality consists of three kinds of thing: physical, mental, and abstract. Is this a limited number of kinds of thing? Could there have been many more kinds of thing (ontological categories)? Apparently not: we can’t imagine any further basic type of entity that just contingently happens not to exist in our world. A hard-nosed materialist might contend that nothing could exist that is not material, so that a purely material world is not a limited world (relative to some conceivable world). Of course, there could be many more particular material things, but there may be no limitation in the kinds of things that exist. Similarly, there is limitation in the animal species that exist (or chemical substances), but that entails no limitation of basic categories. A world without life is more limited than a world with life, but not a world without cockroaches. If our world lacks deities, then it is limited relative to a world that has them; so it might be a fundamentally limited world (or it might not be if there are deities in it).

            Consider laws of nature and logic. They constrain what can happen, so they function as principles of limitation. In a world without such laws there would be no limitation on what can happen. That is a very different world from ours (though it may not be a genuinely possible world). Moreover, there are only a few basic laws of nature in our world; maybe there are worlds containing many more basic laws (could there be more logical laws in some possible world?). We can think of these laws as laws of limitation: they force our world to be limited in certain ways. A law tells you what can and will happen, but also what can’t and won’t happen. Or consider the speed of light: this imposes a limit on the speed of any projectile. Objects are not at liberty to move at any arbitrary speed; there is a sharp limitation. Likewise the universe is not free to reverse entropy; it is limited to increasing disorder.

            Limitation is the flipside of possibility. If something has a certain nature (and everything does), then it has a range of possibilities and a range of impossibilities. This is as true for whole universes as it is for individual objects. Limitation is thus normal, predictable, and inescapable. Part of the nature of our universe is to be infinite in various ways, but this does not negate the fact that it is a limited reality. We marvel at the richness of nature, its extent, its possibilities, but in fact it is quite sharply limited. It is more limited than our imagination (which itself is limited by our nature as cognitive beings). We might even say that reality is subject to ontological closure: it is not unlimited in what it contains. Space, say, though infinite in extension, is quite confined with respect to its dimensions—having only three (or nine if you follow certain versions of string theory). The brain is quite limited in its physical nature, consisting of only certain chemicals in certain structures and subject to certain forces. Cognitive closure is one kind of ontological closure—one kind of natural limitation. Gravity prevents bodies from moving in certain ways, and the brain’s physical construction prevents it from functioning in certain ways, including cognitively. Everything has limitation built into it, despite its impressive range of potentialities. The whole universe is accordingly an entity subject to inherent limitations. We don’t appreciate these limitations because we experience only this universe—while we experience many different kinds of individual objects—but our universe may be quite sparsely populated and inept compared to other possible universes. The universe might give rise to an illusion of richness: we are awestruck by its variety but only because we know nothing better. Maybe God kept it nice and simple so as not to overburden our limited minds; other universes might make ours look simple and dull. 
After all, our universe began in something quite lumpen and undifferentiated—the Big Bang—and it can only reflect that initial event. Surely the material condition of the universe at that point was sharply limited by the prevailing physics, and the subsequent history of the universe consists of iterations of what was then present. Reality is limited by the initial conditions that made it possible.

            When we contemplate the galaxies and the variety of species and our own minds we are struck by the richness of the universe, even conceiving it as limitless, but this is an anthropocentric perspective. From a more objective point of view the world must appear confined—as just one type of reality amid others. It may not be the least interesting of all possible worlds, but it may be far less interesting than we tend to suppose. It may be a garage sale compared to the Harrods of some other world. In pre-Socratic style we might picture the universe as the product of forces of expansion and constraint. Many new things come into existence, expanding the inventory of things the universe contains; but this happens against a background of curtailment as the universe prevents things from happening and possible things from being formed. It limits as it creates. It is as preventive as it is generative. It operates with an ontological Occam’s razor, ruthlessly paring away at reality. When God created the universe he created limitation on being as well as plenitude, denial as well as affirmation.

 
