Good, Evil, and War

It is easy to see how two evil states may go to war. They may have conflicting interests that they seek to settle by means of organized violence: for instance, they may have designs on each other’s territory or wealth. It is also easy to see how a good state and an evil state might go to war—as when the evil state attacks the good state and the good state defends itself. But it is not easy to see how two good states could be at war, since neither will engage in unjustified aggressive acts towards the other. They may have conflicting interests, but they will not seek to resolve these conflicts by armed combat. In war at least one party has to be an evil actor. So we can always infer from the existence of a war that one side at least is evil.

            It might be objected that this generalization (“the first law of war”) cannot be quite right as stated, since it is logically possible for two good states not to perceive each other as good. What if each state views the other as having evil intentions even when it does not? Will they not then be capable of war? It is quite true that a state can misperceive the moral standing of another state (“the Great Satan”, “the Evil Empire”), but in such a case war will not be the outcome, because the perception of evil will be addressed by peaceful means, i.e. diplomacy. The good state will endeavor to discover whether the perception of evil that it has of the other state is well grounded: that is, it will apply the principle that states are innocent until proven guilty—even if some prima facie evidence of guilt exists. Misunderstandings in human relations can occur, but a good state will seek to remedy such potential misunderstandings. Only if a state is not good will it allow mistaken impressions of evil to persist, leading to the possibility of war.

            There is another logically possible case to consider: two evil states that perceive each other as good. Will they go to war? They may, because the perception of goodness may not outweigh the evil ambitions of the state in question. Perceiving the other as virtuous is not usually sufficient to deter violent action against the other. In addition, it is unlikely that an evil actor, whether person or state, will openly admit the virtue of its target when pursuing its self-interest. That is why war is always accompanied by propaganda alleging the evil of the enemy.

            It follows that without evil there would be no war; to abolish war we simply need to abolish evil. War is certainly not an inevitable consequence of autonomous states with conflicting interests, even vital conflicts. History need not be the history of war. Abolishing evil isn’t easy, to put it mildly, but at least we now know how to prevent war, as a matter of principle. There is nothing inevitable about war.

            It might be wondered whether there can be such a thing as a virtuous state, given how large and complex states are. Are any states today virtuous states? If any are, they are certainly very few. But states can become more virtuous, and as they do so they become less likely to engage in war. On the other hand, vicious states will always be at war as part of their natural condition. The more virtuous a state is, the less prone it is to war. If we are interested in abolishing war, we should work to promote national virtue at the political level. Decreasing the hatred of foreigners will be part of that effort, but many other things will be involved.

            These banal points apply to a wider range of human interactions than wars between modern heavily armed nation states. The OED defines war as “a state of armed conflict between countries or groups within a country”. But the concept of war has a wider application, as in the “war of the sexes” or the “war on terror” or “the war on drugs”. There are wars between families, neighbors, religions, races, and relatives. Not all of these wars involve bombs and guns: some are waged with words or discrimination or snobbery or laws or sanctions. In all these cases, however, the same basic principle applies: there must be at least one evil actor. It may not always be clear who the evil actor is, since both may be engaged in violent acts—though some of these may be just acts of self-defense. But a pair of virtuous actors can never be at war; someone has to be culpable.

            We don’t want to count boxers, brawlers, and duelists as engaged in war, even though they are engaged in violent conflict. To be at war implies something in the way of long-term strategy and purpose: it takes more than one battle to make a war. The rules of combat are also much less fixed in war, where the actors typically deem it acceptable to use whatever methods they have available to ensure victory. This is why rules of war have been introduced, though they are continually flouted. The natural condition of war is lawless ruthlessness. Paradoxically, the more ethical codes of war become, the more likely it is that actors will engage in war, since war will not be prosecuted in the complete absence of ethical restraint. A nation at war will not accept defeat if it can win by breaking a few rules, though a typical boxer will concede defeat even if he could have won by fouling. In the latter kind of case both parties may be virtuous actors, consenting to the violence that occurs. Mass boxing matches are not wars. For a genuine war there has to be evil on at least one side.

            It is notable that the rhetoric of war always involves imputations of evil against the enemy. No one ever says they are going to war with X even though X is a thoroughly decent person, tribe, or country. War can only be justified by allegations of evil: military evil, religious evil, economic evil, etc. And a country at war always tries to elevate its moral self-image. This is because of the conceptual link between war and evil: war is necessarily what is prompted by evil. If two countries are equal in virtue and yet have conflicting interests, they will justify waging war by imputations of evil—never by acknowledgement of moral parity. Two countries that are morally comparable may justify a state of war by insisting that the other country embraces an evil ideology, even when there is no significant difference in human conditions in the two countries. War can never be conducted in full open awareness of the other side’s virtue. Even a war of plunder will be represented as a moral crusade.

            We will therefore not understand the nature of war if we try to represent it in morally neutral terms. Wars are not simply armed conflicts stemming from conflicting interests or divergent ideologies—or else good actors (on both sides) could be involved in wars. The concept of evil is essential to the concept of war: actual evil and perceived or putative evil. A war is not like a chess game or a game of tug of war or an arm-wrestling contest. Wars are not a subspecies of games. I would define a war as an organized purposeful armed conflict in which one or both parties are evil (bad, wicked) and are judged to be so. The evil may take different forms, as may the arms that are employed, but it will always involve morally unjustifiable actions. It is perfectly possible for one party to a war to be entirely blameless, in spite of its violent actions, but then the other party must be blameworthy. There cannot be entirely blameless wars. Virtuous agents can never be at war. Whenever a war is in progress one side or the other is guilty. This is why human wars are fundamentally different from group violence among animals (unless we suppose that some animals are capable of evil). War is the natural expression of human evil; it is not some kind of fact of nature or inherent political tendency. Great powers do not inevitably clash; they do so only because of specific identifiable acts of wrongdoing. War is not a historical or political inevitability but a choice to act on evil impulses. There are no “good wars” in the sense that virtuous nations can find themselves at war with each other. If two states find themselves heading toward war with each other, they should always ask where the moral fault lies, and never assume that war is unavoidable.

 

Formal Languages and Natural Languages

Philosophers of language have argued over the relationship between so-called formal languages and natural languages (the kind people regularly speak). Some say formal languages supply the underlying logical form of the sentences of natural languages; some say formal languages improve on natural languages; some say formal languages are technical inventions that distort natural languages; some say they reflect the innate language of thought, unlike natural languages; some say they are pointless abstractions bearing no meaningful relationship to natural languages. The correct answer, however, is this: they are part of natural language. Every formula of a formal language can be read as a sentence of English (or any other natural language), though the resulting utterance may sound stilted (“there is something x such that for everything y…”). A so-called formal language is really a formal notation, where “formal” just means “looks like mathematics”; the language is good old ordinary language, suitably configured. All we are doing is translating some strings of English words into other strings of English words, as when we replace (say) “Everyone loves someone” with “For every person there exists another person such that the first person loves the second person”. The symbols of the formal language are just invented signs for words we already have—for example, using a backwards upper case “E” to stand in for “there exists”. I take it this is completely obvious.  [1]
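
            To fix ideas, here is the example just given in the standard notation, with the quantifiers understood as ranging over persons (the rendering is the textbook one, supplied purely for illustration):

$$\forall x\, \exists y\; \textit{Loves}(x, y)$$

Read aloud, this is just more stilted English: “for every person x there is some person y such that x loves y”. The inverted “A” abbreviates “for all” exactly as the backwards “E” abbreviates “there exists”.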

            But it makes a difference to how we view certain kinds of proposal. The theory of descriptions, say, is just the proposal that one kind of sentence of natural language containing “the” can be translated into another kind of sentence of natural language containing only words like “there is”, “for all”, and “uniquely” (itself translated by the word “identical” suitably positioned). We are using one part of natural language to paraphrase another part, invidiously preferring some sentences to others. This cannot be an improvement on natural language, since it is natural language; we might just prefer one part of it to another for philosophical or other reasons. Nor can the preferred part be the underlying form of the other part: for how could one part of natural language be the underlying form of another part? Both are overt sentences of the language, neither “deep” nor “superficial”. We may describe one sentence as the analysis of another, but how could one sentence literally contain another? By the same token, the “formal” part could not be inferior to natural language, though it might not exhaust the full resources of natural language. There is no such thing as an opposition between “logical language” and “ordinary language”: both are just versions of natural language. All we can really talk about is whether one or other part of natural language is preferable for certain reasons or purposes. We can argue about the utility or perspicuity of certain notations, which are just abbreviations for natural language expressions; but we cannot argue about the relationship between natural languages and some other type of non-natural language.  [2] Everything we can say (or write) is part of natural language, which is why we can always convert a logical formula into familiar words of the vernacular. Thus a theory of truth for a formal language is in reality a theory of truth for one section of a natural language. A so-called formal language is not some sort of transcendent symbolic system standing outside of natural language. What logicians have done is simply invent a code for a certain part of the language they already speak.
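
            Displayed in the usual notation, the Russellian paraphrase just described runs as follows (the standard textbook formulation, given here for concreteness): “The F is G” goes over into

$$\exists x\,\bigl(Fx \;\wedge\; \forall y\,(Fy \rightarrow y = x) \;\wedge\; Gx\bigr)$$

that is: something is F; anything that is F is identical to it (this is the “uniquely” clause, handled by “identical” suitably positioned); and it is G. Each clause reads as plain, if stilted, English, which is just the point at issue.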

            The innate and universal language of thought can be externalized in different kinds of notational system—including English and Japanese, predicate calculus and modal logic—but these are all systems that express the same underlying cognitive structure. If we call this structure LANGUAGE, then what are called formal and natural languages are just different ways of externalizing LANGUAGE. And the formal (mathematical-looking) mode of externalization is really just a part of the natural-language mode of externalization. There is no opposition here, no rivalry, no better or worse. Principia Mathematica is actually a piece of ordinary English. We might say that logicians speak a certain dialect of their native language. A formal language is just so much (stilted) informal language; it is not something standing magnificently apart from the common language we all speak. We use different parts of our native language for different purposes, altering our vocabulary and style; a so-called logical language is just one such variation.

 

  [1] There exists the theoretical possibility of a formal language not expressible in the sentences of natural language: then the standard range of options regarding its relationship to natural language would be available. But that is not our situation: logic texts simply consist of ordinary language written in novel orthography (variables, brackets, etc.).

  [2] In the same way we can meaningfully talk about the relative merits of different human languages, say English and French, but this is clearly an intra-natural-language issue, not an issue about a natural language versus an unnatural language. In what way is a logical language unnatural? It is so only in the sense that it is visually unfamiliar and awkward to speak. We use the same linguistic competence in the logic classroom that we use in daily life.

Forced Knowledge

We are schooled in various dichotomies dividing up the field of human knowledge: a priori versus a posteriori knowledge, infallible versus fallible knowledge, implicit versus explicit knowledge, innate versus acquired knowledge, basic versus derived knowledge, and so on. These are all worthy of the attention of the epistemologist, and have duly received it. I propose to introduce a new dichotomy that has not been investigated, nor even recognized: the distinction between what I shall call forced knowledge and optional knowledge.

            By optional knowledge I mean the kind that is acquired intentionally, or which can be so acquired. Scientific knowledge is a good example: procedures are undertaken that result in knowledge, and these procedures are followed voluntarily—experiments, observations, calculations, etc. Also commonsense knowledge: if I acquire knowledge about what is in the next room, by going in there to have a look, then I am acquiring optional knowledge. Optional knowledge is the kind you are free to acquire or not to acquire: you can choose knowledge or ignorance. You can open or close your eyes, block your ears or not, smell or refuse to, taste or decline to. Nothing compels you to know the things known by these methods. You can learn mathematics or not bother, study history or give it a miss, fill your head with geographical facts or remain a geographical ignoramus. Education consists of exercising the ability to gain optional knowledge; the will is involved, along with hard work and dedication. A great deal of knowledge is knowledge that you could have failed to have—knowledge that does not come with the territory, but enters by decision and action. You have the option of not knowing, though you may also choose to know.

            By forced knowledge I mean knowledge that you can’t help having, that you can’t avoid, that is not a matter of will. You have it whether you like it or not; it is built into you, not brought to you. It is involuntary, inescapable, and automatic. The knowledge is forced upon you; you have no say in the matter. The most obvious example of forced knowledge is innate knowledge—knowledge you are born with, so not acquired intentionally. This is knowledge you cannot decline to possess. But that is not the most interesting example of the type: there is also knowledge of one’s own mind. You cannot decline to learn about, and acquire knowledge of, your current conscious inner states—you are condemned to know about these things. There is no escape, no avoidance, and no decision. Like it or not, you have to know about your own inner life; this is not something to which you can turn a blind eye or a deaf ear. It cannot be turned off or shut down or otherwise disrupted. It is self-intimating in the sense that it automatically, necessarily, registers on you: it imposes itself on you. You are, as it were, its victim.  [1]

            It is not the same with any unconscious mental states that you might have: these do not produce forced knowledge, since they are hidden from awareness. You can choose to know about your unconscious mind or choose not to. But you cannot choose to know about your conscious mind. This is not because the conscious mind is identical with knowledge of itself, so that knowledge comes with the conscious mind trivially. States of consciousness and knowledge of such states are distinct existences: pain, for example, is not identical with knowledge of pain. Still, there cannot be pain without knowledge of it; you cannot decide to know nothing more of your pain, as you can decide to know nothing more of someone else’s pain. There is, so to speak, no gap between consciousness and knowledge of it that could be turned into ignorance—as there is a gap between my knowledge and your consciousness. The knowledge in my own case is immediate and therefore unavoidable.

            Among the objects of this type of forced knowledge we can distinguish four broad categories: sensations, thoughts, emotions, and meanings. We cannot avoid knowing about our sensory states and bodily sensations; we cannot avoid knowing what we are thinking; we cannot avoid knowledge of our emotions; and we cannot avoid knowing what we mean by our words. Each of these kinds of epistemic forcing has consequences for the nature of our psychological life: we must know what it is like for us perceptually at any given moment; we cannot shield ourselves from our own thoughts; we always know our state of emotional wellbeing; and we cannot fail to know what meaning we are trying to communicate. So we have to contend both with the conscious state itself and with the distinct state of knowing about it. For example, we have both the emotion of depression and the knowledge that we are depressed: both contribute to our overall psychological state (similarly for joy and so on). In the case of meaning we must have a complex of communicative intentions plus self-knowledge with regard to those intentions—and these are inseparable. Thus we cannot help knowing what we mean by what we say: that is part of what meaning is. The theory of meaning must therefore acknowledge that meaning involves forced knowledge: the speaker must know what she means, even though the hearer may or may not know it. I can decide to find out what you mean, but I cannot decide to find out what I mean—I am condemned to semantic self-knowledge. Meaning is something such that the agent of it must know what she is agent of. No one can mean something and be in the dark about what she means.  [2]

            Forced knowledge is not the same as infallible knowledge. A person has infallible knowledge when her beliefs cannot fail to be true; a person has forced knowledge when she must have certain beliefs—that also happen to be true. In the case of forced knowledge, there isn’t the option of suspending belief, but in the case of infallible knowledge there is that option. Descartes teaches us that whenever we believe that we think, we do think, but this is not the same as to say that we cannot help believing that we think. Similarly for existence: infallible knowledge of our existence is not the same as forced knowledge of our existence. It is true that the two tend to go together, since both characterize knowledge of the inner; but they are different concepts. I cannot avoid the knowledge that I exist—this is part of what it is for me to exist—and my belief that I exist cannot fail to be true. But I might be infallible about my existence without having it always before me. Descartes could have added to the Cogito: “I exist, therefore I know my existence”. I can avoid knowledge of the existence of others, simply by hiding away somewhere; but I cannot hide from my own existence—it is always evident to me. I am forced to know that I exist for as long as I (consciously) exist. I am also forced to know what I feel, think, and mean, whenever I do or undergo any of these things. I cannot shield myself from such knowledge, or lazily fail to pick it up, or simply turn my mind to other things. It is impossible for me to be ignorant about these things.

            We usually don’t like being forced into things; we value our freedom. We accordingly might resent epistemic bondage—why should I be forced to know things I would rather not know? I don’t want to know that I am depressed or angry or have compulsive thoughts or say mean things to people—but I am forced to know these things against my will. If someone offered me the chance to avoid such self-knowledge, I might well take it: my life might be happier that way. We avoid knowledge of the mental states of others where it is convenient, so why not avoid knowledge of our own mental states when it suits us? It would be nice to be able to turn it on and off at will. That way we would increase our freedom. But this is a fantasy of freedom: it is a deep fact about the human condition that we are condemned to self-knowledge—as we are condemned to other-speculation. It is difficult to acquire knowledge of the minds of others, maybe impossible, but it is all too easy to acquire knowledge of one’s own mind: the former is distinctly optional, the latter utterly forced. We can glide over the minds of others or ignore them entirely; but we cannot avoid the reality of our own mind—we are compelled to know ourselves (in our conscious part). The oracle commanded, “Know thyself!” but in one sense the response must be, “How can I not?” The oracle might be interpreted to mean: “In addition to forced self-knowledge, there is also optional self-knowledge you really should try to obtain, concerning things that lie outside of your immediate awareness”. Perhaps the oracle was being partly ironic, since it must have been well aware of the inescapability of self-knowledge of a very ordinary kind.

            It would be nice to have a theory of why self-knowledge is forced: what is it about the conscious mind that compels knowledge of it? And do other animals have the same kind of non-optional knowledge? What kind of knowledge is it—how conceptual is it? Is it anything like perception? Is it causal? Is it reason-based? These are general questions about introspective knowledge, well recognized; we need to add the question of force—what makes introspective knowledge forced knowledge? My aim has been to identify the category and illustrate it with the example of self-knowledge.  [3]

 

  [1] Two other candidates for forced knowledge are logical knowledge and proprioceptive knowledge. A good case could be made that knowledge of basic logic is unavoidable: we are forced to recognize the validity of certain kinds of inference, since this is constitutive of being a thinker at all. I cannot choose to be ignorant of modus ponens, say. In the case of the proprioceptive sense, we cannot turn it off as we can the other senses: I can’t, at will, block my sensory access to the position of my body, as I can close my eyes or stop up my ears. I might be able to block proprioceptive knowledge by undertaking brain surgery or going to sleep, but I cannot in the normal course of events avoid this kind of knowledge. The concept of forced knowledge thus gathers a quite diverse group of knowledge types, not gathered by other epistemic notions.

  [2] There is a question about whether we can choose to attend to our inner states, or not attend to them. I might choose not to attend to a mild pain in my arm—so I don’t suffer from forced attention in such a case. This doesn’t mean I can choose not to know about the pain, since knowledge can be acquired without the use of attention; and there are limits to how inattentive I can be to my inner states—I can’t choose not to attend to an intense pain or a compulsive thought. In the special case of meaning, it is difficult to see how we could mean something and fail to attend to what we mean: communicative speech acts require attention (probably because of the nature of the intentions involved). In general, however, attention is more optional than knowledge—attending is an act.

  [3] I hope it is clear that I am not saying that it is possible to “decide to know” in the sense in which it has been denied that we can “decide to believe”. We cannot, in this sense, decide to know or believe—and the concept of optional knowledge is not intended to conflict with that. The point is rather that we can decide to find things out, or decide not to—that is, undertake procedures that will produce knowledge and belief. In the case of forced knowledge, however, we cannot decide to find things out, or decide not to; we will be supplied with the knowledge anyway. The acquisition of knowledge is unavoidable in the one case but not in the other.

Footnotes to Plato

“The safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato.” This remark from Whitehead’s Process and Reality (Pt. II, ch. 1, sec. 1) is frequently cited, either as a tribute to Plato’s greatness or as an indictment of the stasis, and hence poverty, of European philosophy. The suggestion is that philosophy (of the Western variety) has not substantially progressed beyond the seminal work of Plato. It is not often contested. But even a casual examination of the history of the subject shows that it is quite mistaken. The main reason it is mistaken is that a footnote must be consistent with what it is a footnote to, but philosophy subsequent to Plato has been anything but consistent with Plato. So it is not plausible to suggest that this philosophical tradition is merely a series of footnotes to the work of Plato. The bulk of Western philosophy has, in fact, been in contradiction to Plato—a rejection of his central tenets.

            Take Aristotle, Plato’s most successful pupil: his philosophy is built around a rejection of Plato’s philosophy. The theory of forms is Plato’s most distinctive contribution, and Aristotle denies it. That’s not a fawning footnote; it’s a critical response, an outright repudiation. And most later philosophers sided with Aristotle—the theory of forms found few adherents in subsequent centuries. In fact, it was Aristotle, not Plato, whose corpus was heavily footnoted by posterity: medieval philosophy in Europe was a series of footnotes to Aristotle. He was loyally, not to say slavishly, followed. Nor did Socrates, Plato’s hero, have much of a following during this period. His skepticism was not admired and imitated, while Aristotle formed the basis of medieval scholasticism. Plato is really quite a subversive philosopher, not one for pious repetition. The forms don’t invite footnotes, but frowns; they can be seen as rivals to the divine. And people don’t want to be told that their ordinary view of the world is a complete illusion: that they live in an epistemic cave.

            Nor was modern philosophy much influenced by Plato, beginning with Descartes. Apart from the doctrine of innate ideas, there is nothing distinctively Platonic about Descartes’ philosophy. The same is true of the other modern philosophers. They broke out in new directions; they hardly seem to have read Plato. How is British empiricism a footnote to Plato? And more recent philosophy is likewise quite anti-Platonic: how on earth can we view the movements of twentieth century philosophy as footnotes to Plato? Where is the theory of forms in positivism, ordinary language philosophy, logic, semantics, materialism, and so on? I am more inclined to say that recent philosophy has been disdainful of Plato (wrongly so, in my opinion). Quine—a writer of laudatory footnotes to Plato? Wittgenstein—a devoted Platonist?

            So Whitehead’s oft-quoted remark is egregiously false. Why then do people keep repeating it? Well, it sounds clever, and it saves you from having to read anything since Plato. But perhaps there is a more charitable reading of it: not that post-Platonic philosophy is slavishly Platonic—a mere accretion of obsequious footnotes to the Great Man—but that European philosophy consists of a series of comments on Plato. That is, it consists of critical responses to Plato. If we take it that way, then talk of footnotes is quite misleading, since (as I said) footnotes have to be consistent with what they are footnotes to. It is quite another thing to say that European philosophy consists of a series of rejections of Plato. That has the look of something with a decent claim to truth, though very broad-brushed (not exactly “the safest general characterization of European philosophy”). On this interpretation, philosophy consists of a series of reactions against Plato.

The trouble here is twofold. First, it ignores the steady stream of Platonists and neo-Platonists that flowed from the original font: it may not have been wide, but it existed. Second, it is really not true that philosophy has consisted of commentary on Plato, either pro or con. Granted, some of it has taken this form, though largely via the influence of Aristotle—not surprisingly, since Plato’s works constitute the founding texts of European philosophy (anyone who gets there first will inevitably form the foundation of the tradition). But it is an exaggeration to suggest that philosophy since Plato has not gone beyond his concerns, intellectual framework, and conceptual apparatus. Aristotle certainly did, adding quite new topics and avenues of inquiry; he is not stuck in a Platonic universe, making small emendations to the master’s system. Nor is Descartes limited by Plato’s outlook, most obviously because of his interest in the new science. And twentieth century philosophy quite clearly expands well beyond the Platonic conception of the subject: it ignores Plato rather than dissents from him (compare the influence of Kant). He is regarded as distinctly passé. Even to take Plato seriously as an adversary, as Aristotle did, is alien to the spirit of recent philosophy.

            It is hard, then, to find any merit in Whitehead’s pronouncement. It is like saying that modern philosophy consists of a series of footnotes to Descartes—at best an exaggeration of the influence of a philosophical giant. So far from philosophy being a series of footnotes to Plato, I would say that subsequent philosophy should have footnoted Plato more. He should have figured more prominently in the discussions of the philosophers who followed him. It was probably the influence of Aristotle that kept Plato from his proper place in the footnotes of European philosophers. Gazing down from Platonic heaven, he is not so much gratified by all the footnotes extolling him as irritated at his lack of citation. After all, he was a very singular philosopher, by no means a popularist.

 

Extended Anatomy

It is customary to distinguish between an organism’s body and its environment. The environment is what exists outside the body. There is a definite boundary between the two. But how solid is this distinction? And does it matter to theoretical biology?  Might there be a better way to carve things up?

            Consider mollusks and their shells. Is the shell part of the body or part of the environment? We could say that the shell exists in the environment of its soft interior organism or we could reckon it to be part of the organism’s body. What we say seems arbitrary and of little consequence. We might choose to say that the shell is not part of the organism’s soft tissue body, but is part of its overall body.  [1] Should we say that the shell protects the body without being strictly a part of it? What about hermit crabs that scavenge the shells of sea snails? The shells perform the same kind of function as the shell of the oyster, but the physical connection is less rigid. This function is like the function of thick hide or the armor plating of some reptiles. Whether the protective outer layer is detachable is beside the point: the function is the same. It seems clear to me that there is no point in insisting that one sort of layer is part of the body and another is not. Thus we can introduce the concept of the extended body. We are familiar with the idea of the extended phenotype—the idea that things outside the body of the organism can be part of the phenotype of the organism, e.g. beaver dams and bird nests. I am suggesting that the body can be extended too, so the body is not as local as may be thought. We might distinguish between the restricted body and the extended body or introduce other distinctions based on other boundaries; the important point is that the notion of the extended body is a well-motivated notion.

            Suppose we have an animal with a thick furry coat that it sheds in the summer and also carries around a large wooden container to sleep in and protect against predators. What belongs to its body? The coat has no feeling in it and is shed every year, so it has less claim to be part of the animal’s body than its fleshy innervated parts; yet it is not false to say this coat is part of the animal’s body. It isn’t part of what might be called the fleshy body, but it is part of a more extended body. Likewise the container is not spatially within either the fleshy body or the furry body, but it is part of a more extended body—what we might call the functional body. The container, like the furry coat, helps the animal survive—detachability is not to the point. There is no mileage in insisting that these things are not really parts of the body, but belong to the environment; there is no principled distinction to be drawn here. We can distinguish different types of body nested within each other, but we can also speak of the combination of all of them; and the latter is what corresponds to the biological notion of a functional unit. A good way to put the point is that the entities in question are (because they function as) organs of the body: the oyster’s fixed shell, the scavenged snail shell, the removable coat, and the portable container. Thus some bodily organs exist on the far side of the skin (not very surprising in view of claws and fingernails). We can call this the “extended body” in order to register the fact that other choices might be made about where the body ends and the environment begins; in reality, these “external” organs are just as much part of the body as kidneys, hearts or brains. There is no sharp theoretical line here.

            How far outwards can we extend? Here things become trickier, partly because we don’t have any good existing examples to work from. So let us invent some hypothetical cases to focus our intuitions. Suppose an organism uses suction pads to pick up bits of the world that it uses in various ways—as weapons, as sun protection, as temperature control, as mate attractors. Suppose one of its tricks, genetically determined, is to scavenge fur covering from dead animals, which it dons in cold weather. I say that all this cargo counts as part of the animal’s extended body not part of its environment (inasmuch as that distinction has much content once the extended body is accepted). Such an organism might even pick up large chunks of the world for its use: trees, boulders, lakes, mountains. This super-organism would have an extended body that includes large tracts of the physical world, normally supposed part of the environment not the body. By the same logic as before, its extended body would extend massively into the environment. What if there were an amazing bird that made its nest in a tree but carried the tree around with it when it moved? This bird would be just like the hermit crab that carries its home around with it. Accordingly, we could reckon the tree to the bird’s extended body: fleshy body, feathery body, and arboreal body. But why does the bird need to carry the tree around in order for it to be part of its body? Isn’t a stationary tree also part of a bird’s extended body? It uses the tree for its biological purposes, as it uses its beak, feathers, and nest. For a bird, a tree is an organ of survival—a device of gene propagation.

            What about caves? If a bat lives in a cave, serving the same function as a shell, isn’t the cave also part of the bat’s extended body? The cave has the function of protecting the organism as far as the organism is concerned (though not as far as the cave is concerned): isn’t it arbitrary to introduce a sharp line between the cave and other features of a bat’s existence? The cave is in effect a tool the bat uses in order to survive, as the shell is a tool that the mollusk uses in order to survive. The same is true of burrows, dens, crevices, tunnels, and so on. These are all functional survival instruments, like organs of the body. By the same reasoning beehives and ant nests are part of the extended body of these creatures: they can detach themselves from these body parts, but that proves nothing—the same is true of hermit crabs. The bee and its hive function together to enable the bee to survive, just like its other organs; thus the hive is an organ of the bee’s extended body. Suppose the bee always took a mini hive with it whenever it left the big hive as protection and was never parted from its mini hive: wouldn’t we then naturally reckon it to the bee’s extended body? But the big hive is really no different, just more stationary. So the extended body can extend outward to spatially more remote locations. There is no conceptual problem with this: a deer might remove its antlers and leave them at home while going on a peace mission without thereby rendering them no longer part of its body (it reinserts them when it gets home).

            Can we extend the body even further? What about an amphibian that carries its own water supply around with it? It produces a membrane that traps water around its body. This could be useful in the event of excessive evaporation. Wouldn’t this tank of water be functioning as an organ of the animal’s extended body? What about a true super-organism that could drag stars between galaxies as it hops around the universe, using them as sources of heat? Wouldn’t these stars be part of its extended body? It might actually swallow stars and keep them burning to bring a little warmth to the intergalactic trips. Could we even maintain that the extended body of an organism includes everything about its niche? Air for birds, water for fish, land for terrestrial animals: the extended body merges with nature as a whole. Take humans: we have conquered so much of nature, using it for our own purposes, converting it into tools (clothes, homes, motorways)—isn’t this all the extended human body? Where does the human body stop and the human environment begin? Once we accept that clothes are extensions of our body, where does it end? What belongs to the body is what the organism uses to achieve its biological purposes, and that includes a great deal. There is no clear theoretical distinction between internal organs like the heart and kidneys and external organs like shells, clothes, caves, and so on. From the point of view of biology the old distinction between body and environment is misguided and unnecessary. This means that anatomy can also be extended: the carapace is part of anatomy, but so is the shell, so is the spider web, and so is the cave or burrow. To our visual sense there seems to be a clear distinction between body and environment, because we naturally segment the world; but from a conceptual point of view the distinction is insignificant. The theoretical body is whatever web of things serves to enable the survival of the organism. The notion of the organism thus undergoes extension: the organism is the totality of things that are selected for or against (this is the extended phenotype). The web and the spider are co-selected, forming a whole—spider-combined-with-web. This extended body has various parts or organs, including legs, eyes, and web; the restricted body is just another part of the extended body. Spider anatomy must include web anatomy. The proper unit of biology is the extended body. If we spoke of the “corporeal make-up” of an organism, instead of its body, we would say that the corporeal make-up of a spider includes the web. To put it differently, the survival of the genes depends on the whole complex not just on the properties of the restricted body. The “survival machine” inside which the genes sit extends out to every functionally relevant thing. The genes don’t care about the distinction between two types of body, the extended body being all that matters to them.

            Perhaps we don’t tend to think this way intuitively because we think of organisms as resembling ordinary physical objects, which don’t have functions and are not selected for. A rock is a discrete bounded object with a well-defined environment. There are no extended physical objects in the sense intended here; “bodies” in this sense are localized. They don’t have functional shells that aid in the struggle for survival. Physics doesn’t have to deal with the extended body. But biology is the science of functional units subject to natural selection, so its ontology is constituted differently. If we think of biological bodies as on a par with physical bodies, then we will be inclined not to see that the extended body is the right way to carve things up. There is a discrete physical thing that corresponds to the spider’s restricted body, and it is different from the physical thing that is the web; but that doesn’t mean that the biological body is so divided—there is a biological unit that unites these two physical objects. I have been calling this the extended body, but from a theoretical perspective we could just call it “the body” and then define more restricted units such as the fleshy body. The embodiment of species consists of spiders and webs, oysters and shells, bees and hives, bats and caves, people and technology, and so on. Each pair is an operative biological unit. We need to be more holistic about biological ontology, in contrast to the atomism of physics.

 

  [1] Much the same could be said of cocoons: are they part of the body or not? The best thing to say is that the question has no clear answer because we have no definite notion of what counts as the body. We do better to distinguish different types of body corresponding to the same organism: thus the cocoon is not part of the butterfly’s flying body but is part of the butterfly’s metamorphosis body. The most theoretically useful concept of body would include the cocoon as part of the extended body. 

Explanations of Life

Suppose we encounter life forms on another planet unrelated to ours and possibly quite unlike ours. Still, there is evident adaptive complexity, so that the laws of physics and chance cannot explain what we observe. What possible explanation might be given for this complexity? How might it have come to be?

            One possibility is intelligent design—not by God, to be sure, but by scientifically advanced aliens. These organisms might have been synthesized on a Life Production Machine. They are in effect artifacts of another civilization, so the explanation of their existence matches the explanation for the existence of artifacts in our civilization: intentional intelligent design. We can’t rule this explanation out; it is a matter of empirical fact whether it is true (just as it is for life on earth). We might well gather further information that rules out the hypothesis (there is no such advanced civilization in the vicinity), but as a matter of principle the hypothesis is a theoretical possibility—it cannot be excluded a priori. Alternatively, the life forms might have arisen by ordinary natural selection with no intelligent intervention. But there are also mixed cases: the organisms might have been subjected to guided breeding after a period of natural evolution, or they might be genetically engineered and then left to natural selection. Conceivably they might be selectively bred from an initial batch of bacteria, so partly the result of natural design and partly of intelligent design. There is an indefinite range of possible combinations of natural evolution and guided evolution, varying between species and planetary fauna—for instance, the mammals have been left to natural selection while the reptiles have been intensively bred for intelligence or strength. Maybe elsewhere in the universe all the possibilities have been tried—as is partially the case on earth where humans have artificially bred certain species but not others.

The traditional theoretical dichotomy between intelligent design and natural selection may be quite parochial where advanced civilizations have developed, because there is ample scope for partial intervention into the process of generating life. Selective breeding and genetic engineering can certainly speed up the evolutionary process considerably, taking decades to achieve what natural selection would take millions of years to achieve. When intelligent life forms take evolution into their own hands the sky is the limit. Naturally evolved life might be the most primitive form of life, vastly outclassed by the kind of life created by life itself, i.e. designed by life forms with the intelligence to change the course of evolution. No need to wait for that lucky chance mutation; just create whatever mutation looks promising and then subject the result to rigorous test. Just as bacteria look very primitive in the light of later evolutionary developments, so naturally evolved life might look very primitive compared to the kind of life that intelligent designers can contrive. If the secret to the origin of life is ever discovered, it could be used to re-start the entire process, producing untold wonders by creative intervention. All of life could come to be intelligently designed.

            Interestingly, the possibility of intelligent design depends upon antecedent natural design: not every life form in history could be the result of intelligent design, since an intelligent life form has to come from somewhere. No universe could create intelligent life ab initio: the long and painful process of natural selection has to create the first form of intelligence, since intelligence cannot depend upon other intelligence all the way down. But once a form of intelligence has evolved that is capable of selective breeding and guided evolution, it can produce new life forms without reliance on the old machinery of blind random mutation and natural selection. Then the explanation for the design of organisms will involve intelligent design not natural design. Most of the life in the universe might be of this kind: whole galaxies could be inhabited by intelligently designed organisms. Geological time is vast but cosmological time is much vaster, so the possibility of intelligently designed life coming to dominate the universe can’t be ruled out. We might be just at the beginning of the history of life—the short initial period in which life evolved naturally. Already we are beginning to change the course of evolution; genetic engineering could accelerate this process enormously. Other intelligent species elsewhere might be much further along in imposing their will on nature.

If a Charles Darwin is born on a planet that has been subject to intelligent design, he will hit upon the correct theory of evolution for that planet, namely evolution by intelligent design.  [1] Maybe life was seeded naturally by the accidental arrival of bacteria, but then intelligent creatures stepped in to guide the course of evolution, creating whatever organisms took their fancy. A rival theorist who hypothesized natural selection as the explanation would be mistaken; there was, on this planet, an intelligent designer responsible for the adaptive complexity on display. Natural evolution could have ended millions of years ago, with all life now the result of intentional intervention. The traditional Darwinian theory used to be true, but it is no more: everything is now carefully monitored and cultivated. This is what is taught in biology classes these days, and it is entirely correct. All genetic alteration is brought about by scientific intervention, so that nothing is left to chance; then certain strains are chosen for reproduction and others rejected. It is as if the old religious creationist story were true, only it is not a divine being calling the shots but a super-alien. On our planet now Darwin’s theory is the true theory, but on other planets the theory of intelligent design may be the true theory (and may come to be the true theory on our planet). There might come a time when none of the species inhabiting the galaxies evolved by natural selection. That was just the early phase in the history of life, and destined to be superseded by intelligent design. Evolution will cease to be blind.  [2]                                                                                         

 

  [1] His book On the Origin of Species defends the view that all life results from the intentional actions of a mighty intelligent designer. This Darwin might not know the identity of the designer—that was not discovered until space travel became a possibility centuries later—but he was brilliant enough to see that no other explanation could be true given the facts. Organisms were just too well designed for this to be a matter of blind variation and mindless selection! He considered the alternative theory but found it wanting—and he was entirely right in his conclusions and reasoning.

  [2] Just to be scrupulously clear, this essay is not intended to provide succor for creationists about life as it evolved on planet Earth; I am speaking of imaginary planets and imaginary ways of shaping life.

Essentially Negative Concepts

Negation is a basic and ubiquitous element of our conceptual scheme, though it is hard to say anything illuminating about it. Beyond noting that it is a unary truth function we find little to report about the nature of negation. We feel vaguely that it consists in a certain kind of operation (it is described as an “operator”) and hence has an active character, akin perhaps to rejection, but otherwise it strikes us as elusive and puzzling. We wonder what it is exactly. However, there is no denying its central role in language and thought, and that is what I want to explore. My thesis will be that certain concepts are essentially defined by means of negation: this means that our grasp of these concepts embeds a grasp of negation. Negation enters into the analysis of certain concepts, so that these concepts could not exist without negation. In fact, negation crops up surprisingly often in conceptual analysis, structuring a wide variety of concepts (though not all), and sometimes where you would least expect it.
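
            For concreteness, the entire truth-functional content of negation fits in a two-line table (standard propositional logic, given here only as a reminder):

$$\begin{array}{c|c} p & \neg p \\ \hline T & F \\ F & T \end{array}$$

The table is all that the formal definition supplies, which is partly why negation seems so elusive once we ask what more it is.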

            Let me begin with a simple illustrative example: the concept of a bachelor. By definition a bachelor is an unmarried male—that is, a male who is not married. To grasp the concept bachelor you therefore have to grasp the concept of not being a certain way—a negative property. It is the same for spinster and the generic concept single: to be single is to be a person who has the property of not being married. A single person is someone who has not gone through the process of getting married, while a married person is someone who has gone through that process (so this is a positive property). Similarly for the concepts of commission and omission: to commit an act is something positive, while to omit an act is something negative—it is not to do something. Thus the concepts of bachelor and omission are essentially negative concepts, unlike the concepts of being married or a commission. When you employ these negative concepts in thought you are thinking negatively—about what has not happened. You are invoking negation in your mental representation of reality.  [1]

            The ubiquity of the negative is apparent in the prefixes and suffixes that decorate the lexicon. In addition to the versatile “non-”, we have “un-”, “dis-”, “ir-”, and “-less” (as in “non-existent”, “unease”, “dislike”, “irrelevant”, and “bottomless”). All of these are easily paraphrased in terms of “not” and clearly express negative concepts: if I assert, “I have a bottomless dislike of the irrelevant and non-existent”, I am having an attack of the negatives. We also have such words as “nothing”, “no-one”, “nowhere”, and “never”—negative quantifiers. We use these to talk about absences, lacks, and emptiness—as in “Colorless nothings never exist” or “Bachelors are never uneasy”. A sentence may contain no explicit occurrence of “not”, but the proposition expressed can be bristling with negativity (“It is not the case that there is a time at which males that are not married are not at ease”). Negation occurs frequently, and sometimes unobtrusively, in our thoughts and utterances.

            But is any of this philosophically significant? Consider the concept of knowledge: it doesn’t on its face appear negative, but if we look more closely a negative element can be discerned, at least according to one standard analysis of the concept. Suppose we define knowledge as “non-accidentally true belief”: then what we have said, clearly, is that knowledge is true belief that is not true by accident—there must not be anything accidental in the way the belief was acquired. This is a natural response to Gettier cases: in addition to the positive conditions of truth, belief, and justification, we need an additional negative condition—that the belief not be true accidentally. Thus knowledge requires that something not be the case as well as that certain things be the case—knowledge is an essentially negative concept. It is like bachelor and not like married, like none and not like some. We might say that the concept is exclusionary in the sense that it rules something out—it insists that something not be the case. It tells us that knowledge must not be a certain way. Not so for belief: this concept does not stipulate that belief must be non-accidental or anything comparable—it is a positive concept. But knowledge requires that certain things not obtain—this is part of its analysis. Negation is internal to the concept of knowledge.
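
            Schematically, the analysis under discussion has the following shape (the lettering is mine, introduced only to display where the negation sits):

$$K_S\,p \;\leftrightarrow\; p \;\wedge\; B_S\,p \;\wedge\; J_S\,p \;\wedge\; \neg\,\textit{Acc}(B_S\,p)$$

Here $B_S\,p$ says that S believes that p, $J_S\,p$ that the belief is justified, and $\textit{Acc}$ that it is true by accident. The first three conjuncts are positive; the fourth is the exclusionary clause, and it is irreducibly negative.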

            Now consider intention: when you intend to bring something about you believe that it is not already the case. You intend to cut the grass, assuming that it has not already been cut. You can’t intend to do what you know has already been done. Thus intention presupposes a negative judgment—that the intended outcome is not already the case. The process of intending begins with knowledge of what is and is not the case—of what needs to be brought about and what doesn’t. Once the agent has determined what is not the case he can form an intention to make it the case. A doctor can intend to find a cure for cancer only on the assumption that cancer does not yet have a cure; once the cure is discovered the intention withers. The concept of intention is thus an essentially negative concept: intention essentially embeds a judgment with negative content.

            Perception also has negation at its heart. When you perceive something you are aware that it is not your perceiving of it: that tree I am seeing in the distance is experienced by me as not being part of my seeing. Perception involves a division into the I and the not-I (even when you are perceiving your own body, since your body is not part of your perceiving either). The object is apprehended as other, but that notion simply is the notion of what is not me. In general, intentionality embeds negation—mental states are directed to something not themselves. When I think of my absent brother I represent him as not being me; and even when I think of myself I represent myself as not being my mere act of thinking. To be conscious of something is to be aware of it as not that very act of consciousness.  [2]

            Think now of attitudes towards other people: when I interact with others I think of them as not me. They have minds like me, but their minds are not my mind. Other minds just are minds that are not mine. The problem of other minds is the problem of minds that are not my mind. So our attitudes here involve a negation, which serves to create the right conceptual gulf between oneself and others. All our psychological relations to others presuppose this basic negative judgment, whether love, hate, fear, sympathy, or indifference. It is because I judge that you are not me that I relate to you in the way I do. Your pain is not my pain, and that is very evident to me. You belong to the great world of that which is not myself—where everything is subject to my distancing negation. I experience reality under the capacious concept Not: the Not-I. Here negation permeates phenomenology, as well as language and thought.

            There are many concepts we could consider as candidates for essential negativity, some more controversial than others; I will just mention some of these. The unconscious is what is not conscious; death is the state of not being alive; the future is what is not yet the case; the merely possible or counterfactual is what is not actual; the fictional is what is not factual; ignorance is not knowing; mystery is what is not known or knowable; refutation is showing something not to be the case; error is accepting what is not true; a fallacy is something not valid; an hallucination is a sensory state that is not veridical. I would say that all these are essentially negative concepts. And it is notable that they are all of philosophical interest (as are the other concepts I mentioned—knowledge, intention, perception, the conception of other minds). Are negative concepts characteristically philosophically interesting?

            Consider the very general and abstract concepts of identity, set, and entailment—all of great philosophical interest. Identity is the relation a thing has to itself and not to anything not that thing (to paraphrase Frege): here we have a double occurrence of negation in the definition. When we think of an object as self-identical we think of it as also not identical to other objects: negation enters conspicuously into our thoughts of identity. Identity, difference, and negation are tightly connected concepts. In the case of sets, we define a set as a collection of some objects and not others: the set of tigers includes all tigers, but it excludes elephants and lions (and indeed anything other than tigers). To be a member of a set an object has to meet a certain condition; anything not meeting that condition fails of membership. So thoughts of sets include thoughts of objects not in those sets—a set is something that rejects certain objects as members. Thus the concept incorporates a negative component—rather like an exclusive club (“Members only”—that is, no non-members allowed). Entailment likewise has an exclusive dimension: a proposition p entails a proposition q but not a proposition r. When we grasp the entailments of a given proposition we grasp something selective: only these propositions are entailments, not all the other propositions that populate logical space. When I survey the entailments of a proposition I recognize what follows and what does not follow—and the latter might not be obvious at first sight. Knowing what does not follow is as important as knowing what does follow—recognizing invalid inferences is as essential to logical understanding as recognizing valid inferences. Again, negation is implicated in our grasp of the concept of entailment. Logic is all about negativity: this follows, but not that. Thus we are thinking negative thoughts whenever we think of identity, sets, or entailment—we have negation on our mind.
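
            These three cases can be set out schematically; the symbolization is a rough sketch, with “≠” abbreviating the inner negation “not identical”:

$$
% Identity, after Frege: x is identical to y iff y is NOT distinct
% from x, where being distinct is itself NOT being identical.
x = y \;\leftrightarrow\; \neg(x \neq y) \;\leftrightarrow\; \neg\neg(x = y)
$$
$$
% Sets: membership under the condition F excludes whatever fails F.
x \in \{\, y : Fy \,\} \;\leftrightarrow\; Fx, \qquad \neg Fx \;\rightarrow\; x \notin \{\, y : Fy \,\}
$$
$$
% Entailment: p entails q but does NOT entail r.
p \models q, \qquad p \not\models r \;\leftrightarrow\; \neg(p \models r)
$$

In each schema the negation is doing real work: it records the exclusion that the concept effects.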

            A particularly interesting candidate for essential negativity is semantic concepts: truth, falsity, satisfaction, and denotation. It doesn’t take much argument to establish that falsity is bound up with negation: a proposition p is false if and only if not-p. We can even define falsity in terms of negation in Tarski’s style, by simply placing on the right hand side of the biconditional the negation of the sentence mentioned on the left—“Snow is white” is false if and only if it’s not the case that snow is white. The work we do with “false” we could do with “not”. This is not strictly a disquotational theory of falsity, since we don’t just drop the falsity predicate in favor of what it is predicated of; but it is a natural counterpart to the disquotational theory of truth. We could call it “the negational theory of falsity”. According to this theory, falsity is negation, more or less—falsity is what is not so. And what happens if we negate falsity? We get truth—what is not not so, i.e. what is so. The double negation of p entails p. Truth is equivalent to double negation. One might even venture to suggest that double negation provides an analysis of truth, an account of the concept.  [3] Certainly anyone who grasps the concept of truth will understand the equivalence of “it is true that p” and “it is not the case that not-p”. So truth is bound up with negation, as much as falsity is; the three concepts hang intimately together. Truth then is an essentially negative concept in the sense that negation enters its analysis; indeed, it enters twice. Truth amounts to a double dose of negation—negation negated.
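
            In Tarski’s biconditional style, with quotation marks naming the schematic sentence p, the two accounts come out as follows:

$$
% The negational theory of falsity: the negation of the mentioned
% sentence appears on the right-hand side.
\mathrm{False}(\text{``}p\text{''}) \;\leftrightarrow\; \neg p
$$
$$
% Negating falsity yields truth as double negation.
\mathrm{True}(\text{``}p\text{''}) \;\leftrightarrow\; \neg\neg p
$$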

            Satisfaction follows much the same pattern, being “true of” by another name. When an object satisfies a predicate we can say that it doesn’t not meet a certain condition: x satisfies “white” if and only if it’s not the case that x is not white. Again, this is something anyone who grasps predication (satisfaction) understands—the connection to negation is implicit. So this semantic concept, too, alludes to negation. When I grasp that an object satisfies the predicate “white” I grasp that the possibility that the object is not white is ruled out—that is, I can reject the proposition that the object is not white. Negation forms part of the background to my understanding of satisfaction, part of the family of concepts I bring to bear. Thus “true of” is subject to the same double negation construal as “true”. Even if we decline to analyze satisfaction and truth by means of double negation, we must still accept the conceptual links between these concepts.
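
            The same construal, one level down:

$$
% x satisfies "white" iff it is NOT the case that x is not white.
\mathrm{Sat}(x, \text{``white''}) \;\leftrightarrow\; \neg\neg\,\mathrm{White}(x)
$$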

            What about denotation—does negation also insert itself into the concept of denotation? The following seems like a true thing to say: “Hesperus” denotes Phosphorus if and only if “Hesperus is identical to Phosphorus” is true. Generally: a name “a” denotes an object x if and only if “a = x” is true. Here we have identity employed to define denotation. There is no overt use of negation in this formulation, but negation hovers close by, in the concept of identity. First, to be identical is to be not different from, i.e. it is the negation of difference. Anyone who understands identity understands this: difference is non-identity and identity is non-difference—what could be plainer? Second, as noted earlier, identity leads us to negation via Frege’s dictum that identity is the relation a thing has to itself and to no other thing—with that double use of negation. We grasp the rightness of Frege’s words and negation crops up twice in those words. So the identity clause for denotation leads us quickly to infusions of negation. Denotation falls into line with truth and satisfaction in being an essentially negative concept. Not that the concept is itself a negative concept; rather, it contains negation essentially. This is surely a striking fact: the central semantic concepts are steeped in negation. Negation is in their bones.
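
            Schematically, and again as a rough rendering:

$$
% "a" denotes x iff the identity sentence is true; identity then
% imports negation as non-difference.
\mathrm{Den}(\text{``}a\text{''}, x) \;\leftrightarrow\; \mathrm{True}(\text{``}a = x\text{''}), \qquad a = x \;\leftrightarrow\; \neg(a \neq x)
$$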

            It would be nice to have a general theory of why some concepts are essentially negative and some are not—is there something they all have in common? It would also be nice to have a better understanding of negation itself—what it is, how it arose in human thought, how the concept functions in relation to other concepts. We can say with some confidence that it is not a family resemblance concept, that it is univocal and topic-neutral, and that it is in some sense logical. It also seems to belong in a class of its own, like no other concept.  [4] But beyond that negation is hard to pin down, despite its familiarity.

 

  [1] There is a weak sense in which negation may be said to figure in the mastery of every concept, namely that in grasping any concept we also grasp what it would be for it not to apply. To grasp F you must grasp not-F. But this is not the thesis I am defending; I am defending the thesis that certain concepts—by no means all—implicitly contain negation as part of their analysis. We need to invoke negation to give the content of F itself, not its complementary concept not-F.

  [2] Anyone familiar with Sartre’s Being and Nothingness will recognize this connection: negation is what Sartre calls a “constitutive structure of the for-itself”. Consciousness is nothingness for Sartre; so the Sartrean concept of consciousness is an essentially negative concept in my sense.

  [3] I discuss this in “A Negative Definition of Truth”.

  [4] In logic negation is grouped along with “and” and “or” as truth-functional connectives, but its uniqueness is clear: it doesn’t connect propositions; it reverses them. It turns a proposition on its head, converting it into its exact opposite. There is something aggressive and destructive about negation: it doesn’t so much create a new proposition by combination as annihilate the proposition on which it acts. In speech acts, “not” often functions as a device of rejection or prohibition. Is it too much to link negation with death (“To be or not to be”)?


Epistemology Personalized

Epistemology Personalized

 

 

What is it that is justified? Not propositions: they are true or false, but it would be strange to say that a proposition is justified independently of anyone believing it to be true (or probable). Was the heliocentric theory justified before anyone had any beliefs about it? It was true, but it wasn’t justified. So is it beliefs that are justified—or claims or statements? This is the way people usually talk in epistemology (“knowledge is justified true belief”, “the belief that ghosts exist has no justification”), but it is subtly wrong. This kind of talk makes it sound as if the relation of justification holds between evidence and beliefs without any mediation by a rational subject—as if the subject plays no essential role in the epistemic set-up. On this picture, evidence justifies belief and that is all. The belief can be justified or not, rational or not, independently of the epistemic subject. Evidence transmits justification to belief directly; the justification relation holds between evidence and belief.

            But that is not how we normally talk about justification. The canonical form of an ascription of justification is, “S is justified in believing that p”. It is the subject that is justified, not the belief he or she forms. At any rate, the primary bearer of justification is the rational subject, with the belief’s justification following as a secondary and derivative matter. Just as it is the agent that is justified in acting in certain ways, so it is the agent that is justified in holding certain beliefs. You can see this from the fact that if you say of someone, “Her belief that p is justified”, this could be true even though she is not justified in believing that p. The belief that p might be justified in virtue of evidence possessed by other subjects, while this subject has formed the belief irrationally. In order to determine if her belief is justified, in the intended sense, we need to ask whether she is justified in holding that belief. Generally, to say that the belief that p is justified is shorthand for saying that there are people who are justified in believing that p. When people are justified in holding certain beliefs we can say that those beliefs are justified, but not otherwise. It is not the belief state itself that is justified; it is a subject’s holding that belief. Similarly, it is not the assertion itself that is justified when someone makes an assertion; it is the speaker’s making that assertion. We cannot abstract the subject away from the relation of justification, as if justification merely involves relations between evidence and mental states or acts of speech. Justification is a triadic relation between evidence, belief, and person, not a dyadic relation between evidence and belief.
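
            The contrast between the two pictures, and the shorthand reading of “the belief that p is justified”, can be put schematically (the notation is merely illustrative):

$$
% Dyadic picture (rejected): evidence E justifies belief B directly.
J(E, B)
$$
$$
% Triadic picture (endorsed): E justifies the person S in believing p.
J(E, S, p)
$$
$$
% The shorthand reading: some persons, on some evidence, are
% justified in believing that p.
\exists S \,\exists E \; J(E, S, p)
$$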

            Thus the person enters epistemology essentially. The basic thing that is justified is the person: I am justified in believing certain things. What is it that is justified? Me, you, him, her—we are justified in how we have formed our beliefs. There is something odd in the idea that a belief state itself could be justified—just as it is odd to suppose that an action, construed as a bodily movement, could be justified. That is like supposing that a state or act could be reasonable or rational; people are reasonable or rational. Compare: words don’t refer—people do. If I am rational in forming a belief, then we can say derivatively that my belief is rational, but we must not forget that such rationality flows from the epistemic subject. It is the self that is rational or irrational, reasonable or unreasonable, justified or unjustified. We can say derivatively that an assertion is justified too, construed as a vocal utterance, but it would be strange to say that a pattern of sound is the primary bearer of justification rather than the subject who produces that sound. The canonical form of ascription here is “S is justified in asserting that p”. This is why I am responsible if I lapse from good epistemic practice—not my speech act or my belief. The critic will say to me, “You were not justified in believing or asserting that p”. In such a case it is I who need to be more careful, not my beliefs or assertions. It would be a category mistake to criticize my mental states or speech acts—they are not the delinquents.

            This has a bearing on how we talk about verification and falsification in the sciences. It is not that theories, construed as sets of propositions or sentences, are verified or falsified, as if this can take place in a personal vacuum; rather, people are justified in believing certain theories, given the evidence, or in believing the negation of those theories. To say that a theory has been falsified is just to say that people are justified in believing the negation of that theory; theories don’t get to be falsified independently of mental acts by persons. It is not that evidence somehow confronts theories by itself and renders them verified or falsified; the evidence has to go via persons who evaluate that evidence. Scientific theories are verified or falsified only because scientists have justifiably formed various beliefs about them. That is the structure of an epistemic fact: E is evidence for T if and only if E provides subjects S with a justification for believing T. Hence epistemology must be personalized: it cannot just deal in sense data and beliefs or sensory stimuli and behavioral assent—it must recognize persons as indispensable components in the process of justification.

            Foundationalism, for example, must be formulated as the doctrine that all justification depends on rational agents being justified in holding a set of foundational beliefs; and coherentism must be formulated as the doctrine that justification consists in rational agents forming their beliefs in a coherent manner. It is not a matter of beliefs per se having solid foundations or cohering with other beliefs; it is a matter of persons basing their beliefs on a foundation or ensuring that their belief systems are coherent. This is the right way to think about the structure of justification—not as something abstracted away from persons.

            Similarly, when considering skepticism we should ask whether people are justified in believing in the external world or other minds, not whether the corresponding propositions are justified or even whether such beliefs are justified. The skeptical question is whether I am justified in believing in the external world or other minds: am I being rational in holding commonsense beliefs of these kinds? If I am not, that is a fault of mine. The skeptic is criticizing me—I am the one failing to live up to the requirements of rationality. It isn’t that formulating the questions in this way helps to solve them—it may even make them more difficult—but this is conceptually the correct way to do it. Wherever there is justification there is a subject justified. Epistemology may or may not be naturalized, but it should be personalized.  [1]

 

  [1] Why has the self not figured more prominently in modern epistemology? I reckon it is because of an assumed empiricism about the nexus of justification: the self has not been amenable to empiricist treatment and is generally regarded with suspicion by empiricists, beginning with Hume. When empiricism began to take a more materialistic form, spearheaded by Quine, the self was even less on the list of approved entities—especially the self as conscious, reflective, and norm-sensitive. Justification began to be conceived as a mere triggering of internal states by physical stimuli, not as a person rationally evaluating evidence. Evidence must be received by an epistemic subject and then used by that subject as a justification for belief or assertion; it doesn’t just feed directly into belief or assertion.
