Immaterial Darwinism

Consider the following imaginary world (I say “imaginary” not “possible” because I doubt this world is really metaphysically possible). There is a range of disembodied minds divided into different kinds in this world, analogous to animal species, numbering in the millions. There are also differences among the individuals belonging to each kind. These individuals can and do reproduce—they have children. They can also die, sometimes before reproducing. Our imaginary beings are completely immaterial and hence have no molecular parts. About these beings we can ask an origin question: how did they come to exist? One possible answer would be that they were created by another disembodied being, vastly superior to them, some 6,000 years ago. There was never any transformation of one species of mind into another; each species was created separately by an all-powerful God.

            But there is another possible origin theory: the theory that the disembodied minds evolved by natural selection from relatively primitive origins. This theory postulates that there were once, billions of years ago, very simple disembodied minds, and these minds evolved by natural selection into the minds that we see today. How did this happen? When the minds reproduce, making copies of themselves, in the form of offspring, errors can be made—the copying isn’t always perfect. When an error occurs the offspring differs slightly from the parent, since the error is not corrected. The error produces a variation in the properties of the mind that is produced—say, we get a mind with a slightly higher IQ or a reduction of affect. Natural selection then operates to favor or disfavor the change, which then gets passed to the next generation, or fails to. These selected changes accumulate over long time periods, producing varieties of mind. Competition for reproductive mates gives further bite to natural selection, so that traits are favored that increase the probability of mating, and hence producing copies. In other words, we have random variation, self-replication, and natural selection operating together to generate the immaterial beings that exist in our imaginary world. There are no genes, no bodies, and no physical processes of any kind—but there is evolution by natural selection.

            The lesson of this little thought experiment is that the basic explanatory scheme of Darwinian explanation is not essentially materialist. As things exist in our world, animals have material bodies, material genes, and material behavior: the mechanism of random variation, reproduction, and natural selection applies to material entities. But the mechanism itself is topic-neutral: it is sufficiently abstract to apply even to immaterial beings—so long as the basic conditions of variation, copying, and natural selection apply. Just as it is possible to run an evolutionary program on a computer, producing more complex patterns from simpler ones by random variation and natural selection, so it is possible to conceive a world that runs by Darwinian principles but is quite immaterial. Spirits could evolve by random mutation and natural selection, so far as the theory is concerned. Nothing in the theory itself entails that it applies only to material entities. Even gods could be subject to Darwinian evolution. How the abstract principles are implemented in different kinds of being differs from case to case, but the principles themselves are ontologically neutral.
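The topic-neutrality claimed above can be made vivid computationally. The following is a minimal toy sketch, not a model of any real process: the "minds" are bare numbers, and every name and parameter (the trait, the fitness target of 100, the mutation size) is invented purely for illustration. The point is that the loop mentions only variation, replication, and selection, and nothing about what the replicating entities are made of.

```python
import random

def fitness(trait):
    # Selection pressure: entities whose trait is closer to 100
    # are favored. The target is an arbitrary illustrative choice.
    return -abs(100 - trait)

def evolve(generations=200, pop_size=50, seed=0):
    rng = random.Random(seed)
    # A population of very simple "minds", each just a trait value.
    population = [rng.uniform(0, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Replication with occasional copying error (random variation):
        # each parent produces two imperfect copies of itself.
        offspring = [t + rng.gauss(0, 1) for t in population for _ in range(2)]
        # Natural selection: only the fitter half survives to reproduce.
        offspring.sort(key=fitness, reverse=True)
        population = offspring[:pop_size]
    return population

final = evolve()
best = max(final, key=fitness)
print(best)  # after many generations, close to 100, far from the primitive start
```

Nothing in the loop constrains its interpretation: the same three-step scheme runs whether the replicators are organisms, bit-strings, or (in the imaginary world) disembodied minds.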

            Thus it is logically conceivable for a dualist like Descartes to be a Darwinian. On the one hand, the animal body evolves by material natural selection in the standard way, involving DNA. On the other hand, the immaterial mind itself evolves on a parallel but separate path: it is subject to internal changes (“mutations”) that can be passed on to the next mind, assuming that there is a parallel mechanism to the genetic one; and these changes can be selected for or against. As the body creates copies of itself using DNA, so the mind creates copies of itself using whatever immaterial resources it possesses. We have two-track Darwinian evolution to match the dualist ontology. No doubt no such thing happens in the actual world, but we can imagine a world in which body and mind, conceived as separate substances, evolve in parallel, both subject to Darwinian principles. So you can consistently be a Darwinian anti-creationist while also accepting Cartesian dualism, or even Berkeleyan idealism. The logic of Darwinian explanation is neutral between metaphysical systems.

 

Identity and Difference

Identity has been found problematic; difference, not so much. Identity has been judged a pseudo relation, but no one doubts that difference is a genuine relation between things. We can observe that one thing is different from another, but can we observe that a thing is identical to itself? Difference is essential to counting, but identity never gets us beyond a single entity. Do we really need the concept of identity? As a matter of definition, identity is said to be the relation a thing has to itself and to no other thing; difference is the relation a thing has to everything apart from itself. Everything is either identical to a given thing or different from it. Each thing is identical to itself and different from everything else. What exactly is the logical relationship between the concept of identity and the concept of difference? It tends to be assumed that identity is basic and difference is derivative—difference is simply non-identity or lack of identity—but what about considering it the other way round? What if we take difference as basic?

            If difference is basic, then identity is simply non-difference or lack of difference. We can express the sentence “a is identical to b” by the sentence “a is not different from b”. We start out with the concept of difference, tied to our perception of distinct objects, and then we define identity as simply the opposite of difference: it is the relation a thing x has to a thing y when x and y are not different things. We might initially suppose that everything in the world is different from everything else, but then we make a conceptual discovery and realize that objects are not different from themselves—there is another relation apart from difference, namely identity. So identity is really the absence of difference. We thought that Hesperus was different from Phosphorus, but it turns out that the two are not different—this was an illusory difference. We could express our discovery by saying, “Hesperus and Phosphorus are not different”, but we choose to introduce a shorter form of words and say, “Hesperus and Phosphorus are identical”. We don’t thereby expand expressive power; we had already said what needed to be said by saying that the two are not different.

            Thus we might offer to analyze identity in terms of difference plus negation: “identical” means “not different”. Why is this any less correct than analyzing “different” as “not identical”? In fact, it looks as if difference is more primitive in our system of concepts. Animals and young children surely make judgments of difference, but do they make judgments of identity? Someone could in principle have the concept of difference and never hit on the concept of identity, which would require conjoining difference with negation. But how could someone have the concept of identity and not have the concept of difference? Identity is the absence of difference: it is the relation a thing has to what it is not different from. I realized I was different from everyone else at an early age; it was only later that it dawned on me that I was identical to myself (funny thought). Difference is a given, a datum; but identity is more of a construction or abstraction. Identity is a sophisticated concept; difference is as plain as the nose on your face. A farmer counting his chickens needs the concept of difference; the philosopher explores the concept of identity. To reach the concept of identity you need to combine difference with negation—not a trivial operation.

            The case might be compared to truth and falsehood. Philosophers tend to concentrate on truth, leaving falsehood to take care of itself, but a good case can be made that truth is definable in terms of falsehood and negation.  [1] Thus for a proposition to be true is for it not to be false. This defines truth in terms of falsehood and negation. Similarly, we can define identity as difference plus negation—as not being different. This is a genuine definition and it provides necessary and sufficient conditions. First we master the concept of difference; later we form the concept of identity by combining difference with negation. Identity is what holds when difference doesn’t.

            Nothing in the standard logic of identity will be sacrificed by adopting this position. We will still have reflexivity, symmetry, and transitivity.  We can still distinguish numerical and qualitative identity: some things are numerically different without being qualitatively different, or not different qualitatively while different numerically. Leibniz’ Law can simply be reformulated to read, “If a is not different from b, then a and b have all their properties in common”. It might be a good idea to reform the symbolism we use to express claims of identity and difference, because the standard symbolism makes identity out to be basic with difference coming out as the negation of identity. Thus we now have “=” and that symbol with a slash through it, or modified by “not”. Instead we could have a new sign for difference (say “^”), taken as primitive, and then introduce identity by means of negation. We will then write “Hesperus not-^Phosphorus” to express the fact that Hesperus is identical to Phosphorus, i.e. not different from Phosphorus. We can then add this sign to the usual symbols of the predicate calculus instead of “=”; formulas will then include “a ^ b” and the like. Obviously, this will be equivalent to taking “=” as primitive and defining difference by means of it and negation. But the new formulation is conceptually more perspicuous in the light of the proper order of definition.
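The reformed symbolism can be set out explicitly. Writing the primitive difference relation as a two-place predicate D (the “^” of the text), identity is introduced by definition, and the familiar logical laws, including the reformulated Leibniz’s Law, come out as follows:

```latex
% Difference D(x,y) taken as primitive; identity defined by negation.
a = b \;:\equiv\; \neg D(a,b)

% Reflexivity, symmetry, and transitivity in the new notation:
\neg D(a,a), \qquad
\neg D(a,b) \rightarrow \neg D(b,a), \qquad
\bigl(\neg D(a,b) \wedge \neg D(b,c)\bigr) \rightarrow \neg D(a,c)

% Leibniz's Law reformulated:
\neg D(a,b) \rightarrow \forall F\, (Fa \leftrightarrow Fb)
```

As the text notes, this is formally equivalent to taking “=” as primitive and defining D as its negation; the difference lies only in which concept the notation treats as basic.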

            Understanding difference is fundamental to every cognitive relationship to the world and is present in every perception. We see difference everywhere we look. The world is packed with difference. Identity is the exception to this universal rule—not everything is distinct from everything. There is the odd case of things in relation to themselves: here there is not difference. Things assert their difference from other things, but not from themselves. This lack of self-difference is what we name “identity”.

 

  [1] See my “A Negative Definition of Truth”.

I’m Free

Philosophy has saddled itself with the phrase “free will”, asking such questions as whether free will is possible, whether it is compatible with determinism (and indeterminism), and what its nature is. Is it conceivable that the phrase itself is responsible for the seeming intractability of the problem? Consider the sentence “I have free will”: it suggests that I possess a specific attribute or faculty whose name is “free will” (or “freedom of the will”). But can we not paraphrase the sentence as follows: “I’m free in the exercise of my will”? Here we predicate freedom of a person while speaking of that person’s will (we could also say, “I’m free in the way I act”). Those experts on the subject of free will, The Rolling Stones, have the following lines in their song “I’m Free”: “I’m free to do what I want any old time” and “I’m free to choose what I please any old time”. They predicate freedom of a person (not of his or her will) and talk about wanting and pleasing—any old time. Maybe if we focus on this style of freedom talk we will see things more clearly.

            We have other phrases of the form “free F” where F is a type of thing other than a person: “free speech”, “free assembly”, “free thought”, “free love”. Take free speech: are we to suppose that there is a special attribute or faculty named “free speech” which people possess? Surely attributions of free speech mean something like this: “I’m free so far as my speech is concerned”; or (following the Stones) “I’m free to say what I want any old time”. I’m free to say what I want because no one is preventing me saying what I want—no one is interfering with my speech desires, making me say things I don’t want to or stopping me saying things I do want to. Possibly, too, I’m free of interference from within myself in the form of verbal compulsions or tics or some such: I can say what I really want, not what some disruptive inner demon makes me say. This is the familiar idea of freedom from constraint or interference. Would anyone wish to say there is a metaphysical problem about free speech? Would anyone wonder whether free speech is compatible with determinism? Would it nullify freedom of speech if acts of speech were lawfully caused by the speaker’s desires? I don’t think so. One might have philosophical issues about speech, even quite deep issues, but there is no particular problem about what freedom in such a context amounts to. Language and its use are profound and difficult topics, but freedom of expression is just a matter of being free from certain constraints (not including causation by desire). If a philosopher were to puzzle herself by wondering what the nature of the faculty of “free speech” consists in, postulating special kinds of causation or no causation at all, we would think she had got herself onto the wrong track—we might even reach for the dreaded phrase “pseudo problem”.
There are not two types of speech, the free kind and the unfree kind, considered independently of the question of outside (or inside) interference—with the free kind said to belong to mature humans and the unfree kind to belong to infants and animals. The vocal performances of whales, dolphins, and children are no more lacking in freedom of speech than ordinary adult human utterances—there is no special faculty named “free speech” that they lack and we possess.  [1]

            A suggestion thus asserts itself: so-called free will is just the sum of all such individual freedoms—all the things an agent is free to do because she is free from certain constraints. There is no more to the notion than that, and this is apparent from the paraphrase in terms of what a person (or other agent) is free to do. You don’t need some remarkable faculty called “free will” to be free; you just need to be able to act as you desire, please, see fit, approve, etc. Similarly there are not two types of love, the free kind and the unfree kind, considered as emotional states; there is just freedom from interference with respect to your love life. We should not reify these nominal phrases, positing a special entity with a puzzling nature. If all action were free from interference, we would not need the locution “free will”; we would simply speak of “the will” without modification. The will is trivially free insofar as it is not constrained: “free will” is a pleonasm. The will doesn’t change according to whether the agent is free or not—with some agents having one kind of will and others another kind; talk of freedom is just a way to register the absence of interference. Thus there is nothing puzzling or mysterious about the freedom of the will, though doubtless there are puzzles and mysteries about the will per se.  [2] Animals are free to do what they want and choose what they please any old time (though we often take away their freedom). They don’t possess an inferior grade of will that is lacking in the quality of freedom, any more than a human prisoner possesses such a degraded will. The phrase “free will” is a logically misleading expression, leading us to postulate a special kind of faculty with a distinctive inner nature—as it might be, decision without any antecedent desire. According to some views, free will requires indeterminism in decision-making, any other kind being deemed not genuinely free. 
Animals are thought to make only deterministic decisions governed by the laws of nature, while adult humans can transcend laws of nature and dabble in indeterminism. It is the supernatural soul that makes our will free, leaving animals (and certain humans) to languish in volitional bondage. But surely this is all mythology: animals have as much freedom as we have insofar as they can act without constraint according to their wishes. There is no sharp metaphysical dichotomy here. It may be that different species have different kinds of will, more or less sophisticated, but they don’t differ with respect to their freedom. Many animals may have more freedom than humans from the point of view of constraint.

            It would be bizarre to suggest that I lack freedom of speech because my speech acts are determined by what I want to say—that is precisely what freedom of speech is! Likewise it is bizarre to suggest that I am not a free agent because I always act according to my desires—indeed that my desires cause my actions. An individual is free if her actions are appropriately linked to her desires; there is no need to bring in a special mental faculty cryptically labeled “free will”. There is no such faculty; there is simply the faculty of will operating in varying circumstances, rendering the person free in some and not free in others. If I lose my freedom to act in a certain way, say by being imprisoned, I do not lose some kind of metaphysical essence, rather like losing consciousness; I merely cannot exercise my will freely, i.e. as I would wish to exercise it. Freedom is an entirely extraneous affair, not a matter of the inner nature of a specific human faculty.  [3] There is really no such thing as “free will”, though humans are generally free.  [4] I am free, but it is a kind of category mistake to suppose that my will is free. For what kind of property is that—what is the attribute designated by “free” in “free will”? There are open doors and closed doors, but there are not two intrinsically different kinds of door—the open kind and the closed kind. Similarly, there are confined animals and free-range animals, but there are not two intrinsically different kinds of animal. There is no metaphysical puzzle about the nature and possibility of the faculty of freedom of range in contrast to confinement—the phrase “free-range” is not the name of a special faculty inherent in some animals but not in others. Likewise, there is no intrinsic difference between the operation of my will when I am externally constrained and its operation when I am choosing as I please—it is the same old will operating in different circumstances. 
To invoke a phrase of the British vernacular, to be free is not to be “buggered about”—made to depart from what one feels like doing.  [5] This has nothing to do with fancy faculties for flouting the laws of nature. When a man is released from captivity he may exclaim, “I’m free!” but he doesn’t announce that he has just got his faculty of free will back, as if he was just a mechanism while in prison. His will was always free, if we are to insist on talking that way, since he always had the power to act on his desires, even if he could not exercise that power for the duration. Nothing in his psychological make-up changed while imprisoned: he didn’t turn into a mere machine when he walked through the prison gates.

To be sure, people can be more or less able to control their impulses, more or less able to regulate their desires, but this is not a matter of possessing a special faculty of free will in contrast to a volitional faculty lacking in freedom (“unfree will”): wills can differ in their nature without falling into one or other of these two artificial metaphysical categories. Certainly, we should not entertain such extravagant ideas as that volitional indeterminism can somehow emerge from determinism in evolution or individual development. There can be no such thing as an inherently unfree will; for every will, properly so called, has the possibility of desired action built into it. This is the natural home of the concept of freedom; the traditional phrase “free will” misleadingly suggests a metaphysical foundation for freedom that simply doesn’t exist.  [6]

 

  [1] This is not to deny deep differences between the two kinds of vocal performance—in particular, when it comes to stimulus-independence: but that is not a question of whether the performer is exercising his or her freedom of speech, i.e. speaking according to desire. We should not confuse what is called “stimulus-freedom” with freedom of speech.

  [2] I would cite the nature of mental causation as one such puzzle, along with the general mind-body problem.

  [3] That is, extraneous with respect to desire: but there can be internal factors, psychological and physical, which interfere with acting on one’s desires, such as compulsions and brain pathologies. The tendency to think of ordinary desires as somehow analogous to such disruptive internal factors is one source of confusion in discussions of free will.

  [4] I hope I am not being too optimistic here; you know what I mean.

  [5] I recall Kingsley Amis once remarking that his life hadn’t been all that bad because at least he hadn’t been buggered about too much in the course of it. He hadn’t had his freedom thwarted. He could do as he pleased, any old time.

  [6] This paper complements my paper “Freedom As Determination” which argues that freedom entails determinism and is not merely compatible with it. The present paper diagnoses one source of resistance to that kind of position. This seems to be one of those rare instances in which philosophical confusion arises from a form of words—but words from philosophical language not ordinary language. The words “free will” look like a name of some attribute or faculty (compare “consciousness”), but when we examine how we use the word “free” that impression dissipates.

Good, Evil, and War


It is easy to see how two evil states may go to war. They may have conflicting interests that they seek to settle by means of organized violence: for instance, they may have designs on each other’s territory or wealth. It is also easy to see how a good state and an evil state might go to war—as when the evil state attacks the good state and the good state defends itself. But it is not easy to see how two good states could be at war, since neither will engage in unjustified aggressive acts towards the other. They may have conflicting interests, but they will not seek to resolve these conflicts by armed combat. In war at least one party has to be an evil actor. So we can always infer from the existence of a war that one side at least is evil.

            It might be objected that this generalization (“the first law of war”) cannot be quite right as stated, since it is logically possible for two good states not to perceive each other as good. What if each state views the other as having evil intentions even when it does not? Will they not then be capable of war? It is quite true that a state can misperceive the moral standing of another state (“the Great Satan”, “the Evil Empire”), but in such a case war will not be the outcome, because the perception of evil will be addressed by peaceful means, i.e. diplomacy. The good state will endeavor to discover whether the perception of evil that it has of the other state is well grounded: that is, it will apply the principle that states are innocent until proven guilty—even if some prima facie evidence of guilt exists. Misunderstandings in human relations can occur, but a good state will seek to remedy such potential misunderstandings. Only if a state is not good will it allow mistaken impressions of evil to persist, leading to the possibility of war.

            There is another logically possible case to consider: two evil states that perceive each other as good. Will they go to war? They may, because the perception of goodness may not outweigh the evil ambitions of the state in question. Perceiving the other as virtuous is not usually sufficient to deter violent action against the other. In addition it is unlikely that an evil actor will openly admit the virtue of its target when pursuing its self-interest, whether person or state. That is why war is always accompanied by propaganda alleging the evil of the enemy.

            It follows that without evil there would be no war; to abolish war we simply need to abolish evil. War is certainly not an inevitable consequence of autonomous states with conflicting interests, even vital conflicts. History need not be the history of war. Abolishing evil isn’t easy, to put it mildly, but at least we now know how to prevent war, as a matter of principle. There is nothing inevitable about war.

            It might be wondered whether there can be such a thing as a virtuous state, given how large and complex states are. Are any states today virtuous states? If any are, they are certainly very few. But states can become more virtuous, and as they do so they become less likely to engage in war. On the other hand, vicious states will always be at war as part of their natural condition. The more virtuous a state is, the less prone it is to war. If we are interested in abolishing war, we should work to promote national virtue at the political level. Decreasing the hatred of foreigners will be part of that effort, but many other things will be involved.

            These banal points apply to a wider range of human interactions than wars between modern heavily armed nation states. The OED defines war as “a state of armed conflict between countries or groups within a country”. But the concept of war has a wider application, as in the “war of the sexes” or the “war on terror” or “the war on drugs”. There are wars between families, neighbors, religions, races, and relatives. Not all of these wars involve bombs and guns: some are waged with words or discrimination or snobbery or laws or sanctions. In all these cases, however, the same basic principle applies: there must be at least one evil actor. It may not always be clear who the evil actor is, since both may be engaged in violent acts—though some of these may be just acts of self-defense. But a pair of virtuous actors can never be at war; someone has to be culpable.

            We don’t want to count boxers, brawlers, and duelists as engaged in war, even though they are engaged in violent conflict. To be at war implies something in the way of long-term strategy and purpose: it takes more than one battle to make a war. The rules of combat are also much less fixed in war, where the actors typically deem it acceptable to use whatever methods they have available to ensure victory. This is why rules of war have been introduced, though they are continually flouted. The natural condition of war is lawless ruthlessness. Paradoxically, the more ethical codes of war become, the more likely it is that actors will engage in war, since war will not be prosecuted in the complete absence of ethical restraint. A nation at war will not accept defeat if it can win by breaking a few rules, though a typical boxer will concede defeat even if he could have won by fouling. In the latter kind of case both parties may be virtuous actors, consenting to the violence that occurs. Mass boxing matches are not wars. For a genuine war there has to be evil on at least one side.

            It is notable that the rhetoric of war always involves imputations of evil against the enemy. No one ever says they are going to war with X even though X is a thoroughly decent person, tribe, or country. War can only be justified by allegations of evil: military evil, religious evil, economic evil, etc. And a country at war always tries to elevate its moral self-image. This is because of the conceptual link between war and evil: war is necessarily what is prompted by evil. If two countries are equal in virtue and yet have conflicting interests, they will justify waging war by imputations of evil—never by acknowledgement of moral parity. Two countries that are morally comparable may justify a state of war by insisting that the other country embraces an evil ideology, even when there is no significant difference in human conditions in the two countries. War can never be conducted in full open awareness of the other side’s virtue. Even a war of plunder will be represented as a moral crusade.

            We will therefore not understand the nature of war if we try to represent it in morally neutral terms. Wars are not armed conflicts stemming from conflicting interests or divergent ideologies—or else good actors (on both sides) could be involved in wars. The concept of evil is essential to the concept of war: actual evil and perceived or putative evil. A war is not like a chess game or a game of tug of war or an arm-wrestling contest. Wars are not a subspecies of games. I would define a war as an organized purposeful armed conflict in which one or both parties are evil (bad, wicked) and are judged to be so. The evil may take different forms, as may the arms that are employed, but it will always involve morally unjustifiable actions. It is perfectly possible for one party to a war to be entirely blameless, in spite of its violent actions, but then the other party must be blameworthy. There cannot be entirely blameless wars. Virtuous agents can never be at war. Whenever a war is in progress one side or the other is guilty. This is why human wars are fundamentally different from group violence among animals (unless we suppose that some animals are capable of evil). War is the natural expression of human evil; it is not some kind of fact of nature or inherent political tendency. Great powers do not inevitably clash; they do so only because of specific identifiable acts of wrongdoing. War is not a historical or political inevitability but a choice to act on evil impulses. There are no “good wars” in the sense that virtuous nations can find themselves at war with each other. If two states find themselves heading toward war with each other, they should always ask where the moral fault lies, and never assume that war is unavoidable.

 

Formal Languages and Natural Languages

Philosophers of language have argued over the relationship between so-called formal languages and natural languages (the kind people regularly speak). Some say formal languages supply the underlying logical form of the sentences of natural languages; some say formal languages improve on natural languages; some say formal languages are technical inventions that distort natural languages; some say they reflect the innate language of thought unlike natural languages; some say they are pointless abstractions bearing no meaningful relationship to natural languages. The correct answer, however, is this: they are part of natural language. Every formula of a formal language can be read as a sentence of English (or any other natural language), though the resulting utterance may sound stilted (“there is something x such that for everything y…”). A so-called formal language is really a formal notation, where “formal” just means “looks like mathematics”; the language is good old ordinary language, suitably configured. All we are doing is translating some strings of English words into other strings of English words, as when we replace (say) “Everyone loves someone” with “For every person there exists another person such that the first person loves the second person”. The symbols of the formal language are just invented signs for words we already have: for example, using a backwards upper-case “E” to stand in for “there exists”. I take it this is completely obvious.  [1]

            But it makes a difference to how we view certain kinds of proposal. The theory of descriptions, say, is just the proposal that one kind of sentence of natural language containing “the” can be translated into another kind of sentence of natural language containing only words like “there is”, “for all”, and “uniquely” (itself translated by the word “identical” suitably positioned). We are using one part of natural language to paraphrase another part, invidiously preferring some sentences to others. This cannot be an improvement on natural language, since it is natural language; we might just prefer one part of it to another for philosophical or other reasons. Nor can the preferred part be the underlying form of the other part: for how could one part of natural language be the underlying form of another part? Both are overt sentences of the language, neither “deep” nor “superficial”. We may describe one sentence as the analysis of another, but how could one sentence literally contain another? By the same token, the “formal” part could not be inferior to natural language, though it might not exhaust the full resources of natural language. There is no such thing as an opposition between “logical language” and “ordinary language”: both are just versions of natural language. All we can really talk about is whether one or other part of natural language is preferable for certain reasons or purposes. We can argue about the utility or perspicuity of certain notations, which are just abbreviations for natural language expressions; but we cannot argue about the relationship between natural languages and some other type of non-natural language.  [2] Everything we can say (or write) is part of natural language, which is why we can always convert a logical formula into familiar words of the vernacular. Thus a theory of truth for a formal language is in reality a theory of truth for one section of a natural language. 
A so-called formal language is not some sort of transcendent symbolic system standing outside of natural language. What logicians have done is simply invent a code for a certain part of the language they already speak. 

            The innate and universal language of thought can be externalized in different kinds of notational system—including English and Japanese, predicate calculus and modal logic—but these are all systems that express the same underlying cognitive structure. If we call this structure LANGUAGE, then what are called formal and natural languages are just different ways of externalizing LANGUAGE. And the formal (mathematical-looking) mode of externalization is really just a part of the natural-language mode of externalization. There is no opposition here, no rivalry, no better or worse. Principia Mathematica is actually a piece of ordinary English. We might say that logicians speak a certain dialect of their native language. A formal language is just so much (stilted) informal language; it is not something standing magnificently apart from the common language we all speak. We use different parts of our native language for different purposes, altering our vocabulary and style; a so-called logical language is just one such variation.

 

  [1] There exists the theoretical possibility of a formal language not expressible in the sentences of natural language: then the standard range of options regarding its relationship to natural language would be available. But that is not the situation in which we find ourselves, which is that logic texts simply consist of ordinary language written in novel orthography (variables, brackets, etc.).

  [2] In the same way we can meaningfully talk about the relative merits of different human languages, say English and French, but this is clearly an intra-natural-language issue, not an issue about a natural language versus an unnatural language. In what way is a logical language unnatural? It is so only in the sense that it is visually unfamiliar and awkward to speak. We use the same linguistic competence in the logic classroom that we use in daily life.


Forced Knowledge


We are schooled in various dichotomies dividing up the field of human knowledge: a priori versus a posteriori knowledge, infallible versus fallible knowledge, implicit versus explicit knowledge, innate versus acquired knowledge, basic versus derived knowledge, and so on. These are all worthy of the attention of the epistemologist, and have duly received it. I propose to introduce a new dichotomy that has not been investigated, nor even recognized: the distinction between what I shall call forced knowledge and optional knowledge.  

            By optional knowledge I mean the kind that is acquired intentionally, or which can be so acquired. Scientific knowledge is a good example: procedures are undertaken that result in knowledge, and these procedures are followed voluntarily—experiments, observations, calculations, etc. Also commonsense knowledge: if I acquire knowledge about what is in the next room, by going in there to have a look, then I am acquiring optional knowledge. Optional knowledge is the kind you are free to acquire or not to acquire: you can choose knowledge or ignorance. You can open or close your eyes, block your ears or not, smell or refuse to, taste or decline to. Nothing compels you to know the things known by these methods. You can learn mathematics or not bother, study history or give it a miss, fill your head with geographical facts or remain a geographical ignoramus. Education consists of exercising the ability to gain optional knowledge; the will is involved, along with hard work and dedication. A great deal of knowledge is knowledge that you could have failed to have—knowledge that does not come with the territory, but enters by decision and action. You have the option of not knowing, though you may also choose to know.

            By forced knowledge I mean knowledge that you can’t help having, that you can’t avoid, that is not a matter of will. You have it whether you like it or not; it is built into you, not brought to you. It is involuntary, inescapable, and automatic. The knowledge is forced upon you; you have no say in the matter. The most obvious example of forced knowledge is innate knowledge—knowledge you are born with, so not acquired intentionally. This is knowledge you cannot decline to possess. But that is not the most interesting example of the type: there is also knowledge of one’s own mind. You cannot decline to learn about, and acquire knowledge of, your current conscious inner states—you are condemned to know about these things. There is no escape, no avoidance, and no decision. Like it or not, you have to know about your own inner life; this is not something to which you can turn a blind eye or a deaf ear. It cannot be turned off or shut down or otherwise disrupted. It is self-intimating in the sense that it automatically, necessarily, registers on you: it imposes itself on you. You are, as it were, its victim.  [1]

            It is not the same with any unconscious mental states that you might have: these do not produce forced knowledge, since they are hidden from awareness. You can choose to know about your unconscious mind or choose not to. But you cannot choose to know about your conscious mind. This is not because the conscious mind is identical with knowledge of itself, so that knowledge comes with the conscious mind trivially. States of consciousness and knowledge of such states are distinct existences: pain, for example, is not identical with knowledge of pain. Still, there cannot be pain without knowledge of it; you cannot decide to know nothing more of your pain, as you can decide to know nothing more of someone else’s pain. There is, so to speak, no gap between consciousness and knowledge of it, such that that gap can be turned into ignorance—as there is a gap between my knowledge and your consciousness. The knowledge in my own case is immediate and therefore unavoidable.

            Among the objects of this type of forced knowledge we can distinguish four broad categories: sensations, thoughts, emotions, and meanings. We cannot avoid knowing about our sensory states and bodily sensations; we cannot avoid knowing what we are thinking; we cannot avoid knowledge of our emotions; and we cannot avoid knowing what we mean by our words. Each of these kinds of epistemic forcing has consequences for the nature of our psychological life: we must know what it is like for us perceptually at any given moment; we cannot shield ourselves from our own thoughts; we always know our state of emotional wellbeing; and we cannot fail to know what meaning we are trying to communicate. So we have to contend both with the conscious state itself and with the distinct state of knowing about it. For example, we have both the emotion of depression and the knowledge that we are depressed: both contribute to our overall psychological state (similarly for joy and so on). In the case of meaning we must have a complex of communicative intentions plus self-knowledge with regard to those intentions—and these are inseparable. Thus we cannot help knowing what we mean by what we say: that is part of what meaning is. The theory of meaning must therefore acknowledge that meaning involves forced knowledge: the speaker must know what she means, even though the hearer may or may not know it. I can decide to find out what you mean, but I cannot decide to find out what I mean—I am condemned to semantic self-knowledge. Meaning is something such that the agent of it must know what she is agent of. No one can mean something and be in the dark about what she means.  [2]

            Forced knowledge is not the same as infallible knowledge. A person has infallible knowledge when her beliefs cannot fail to be true; a person has forced knowledge when she must have certain beliefs—that also happen to be true. In the case of forced knowledge, there isn’t the option of suspending belief, but in the case of infallible knowledge there is that option. Descartes teaches us that whenever we believe that we think, we do think, but this is not the same as to say that we cannot help believing that we think. Similarly for existence: infallible knowledge of our existence is not the same as forced knowledge of our existence. It is true that the two tend to go together, since both characterize knowledge of the inner; but they are different concepts. I cannot avoid the knowledge that I exist—this is part of what it is for me to exist—and my belief that I exist cannot fail to be true. But I might be infallible about my existence without having it always before me. Descartes could have added to the Cogito: “I exist, therefore I know my existence”. I can avoid knowledge of the existence of others, simply by hiding away somewhere; but I cannot hide from my own existence—it is always evident to me. I am forced to know that I exist for as long as I (consciously) exist. I am also forced to know what I feel, think, and mean, whenever I do or undergo any of these things. I cannot shield myself from such knowledge, or lazily fail to pick it up, or simply turn my mind to other things. It is impossible for me to be ignorant about these things.

            We usually don’t like being forced into things; we value our freedom. We accordingly might resent epistemic bondage—why should I be forced to know things I would rather not know? I don’t want to know that I am depressed or angry or have compulsive thoughts or say mean things to people—but I am forced to know these things against my will. If someone offered me the chance to avoid such self-knowledge, I might well take it: my life might be happier that way. We avoid knowledge of the mental states of others where it is convenient, so why not avoid knowledge of our own mental states when it suits us? It would be nice to be able to turn it on and off at will. That way we would increase our freedom. But this is a fantasy of freedom: it is a deep fact about the human condition that we are condemned to self-knowledge—as we are condemned to other-speculation. It is difficult to acquire knowledge of the minds of others, maybe impossible, but it is all too easy to acquire knowledge of one’s own mind: the former is distinctly optional, the latter utterly forced. We can glide over the minds of others or ignore them entirely; but we cannot avoid the reality of our own mind—we are compelled to know ourselves (in our conscious part). The oracle commanded, “Know thyself!” but in one sense the response must be, “How can I not?” The oracle might be interpreted to mean: “In addition to forced self-knowledge, there is also optional self-knowledge you really should try to obtain, concerning things that lie outside of your immediate awareness”. Perhaps the oracle was being partly ironic, since it must have been well aware of the inescapability of self-knowledge of a very ordinary kind.

            It would be nice to have a theory of why self-knowledge is forced: what is it about the conscious mind that compels knowledge of it? And do other animals have the same kind of non-optional knowledge? What kind of knowledge is it—how conceptual is it? Is it anything like perception? Is it causal? Is it reason-based? These are general questions about introspective knowledge, well recognized; we need to add the question of force—what makes introspective knowledge forced knowledge? My aim has been to identify the category and illustrate it with the example of self-knowledge.  [3]

 

  [1] Two other candidates for forced knowledge are logical knowledge and proprioceptive knowledge. A good case could be made that knowledge of basic logic is unavoidable: we are forced to recognize the validity of certain kinds of inference, since this is constitutive of being a thinker at all. I cannot choose to be ignorant of modus ponens, say. In the case of the proprioceptive sense, we cannot turn it off as we can the other senses: I can’t, at will, block my sensory access to the position of my body, as I can close my eyes or stop up my ears. I might be able to block proprioceptive knowledge by undertaking brain surgery or going to sleep, but I cannot in the normal course of events avoid this kind of knowledge. The concept of forced knowledge thus gathers a quite diverse group of knowledge types, not gathered by other epistemic notions. 

  [2] There is a question about whether we can choose to attend to our inner states, or not attend to them. I might choose not to attend to a mild pain in my arm—so I don’t suffer from forced attention in such a case. This doesn’t mean I can choose not to know about the pain, since knowledge can be acquired without the use of attention; and there are limits to how inattentive I can be to my inner states—I can’t choose not to attend to an intense pain or a compulsive thought. In the special case of meaning, it is difficult to see how we could mean something and fail to attend to what we mean: communicative speech acts require attention (probably because of the nature of the intentions involved). In general, however, attention is more optional than knowledge—attending is an act.

  [3] I hope it is clear that I am not saying that it is possible to “decide to know” in the sense in which it has been denied that we can “decide to believe”. We cannot, in this sense, decide to know or believe—and the concept of optional knowledge is not intended to conflict with that. The point is rather that we can decide to find things out, or decide not to—that is, undertake procedures that will produce knowledge and belief. In the case of forced knowledge, however, we cannot decide to find things out, or decide not to; we will be supplied with the knowledge anyway. The acquisition of knowledge is unavoidable in the one case but not in the other.


Footnotes to Plato


“The safest general characterization of the European philosophical tradition is that it consists of a series of footnotes to Plato.” This remark from Whitehead’s Process and Reality (Pt. II, ch. 1, sec. 1) is frequently cited, either as a tribute to Plato’s greatness or as an indictment of the stasis, and hence poverty, of European philosophy. The suggestion is that philosophy (of the Western variety) has not substantially progressed beyond the seminal work of Plato. It is not often contested. But even a casual examination of the history of the subject shows that it is quite mistaken. The main reason it is mistaken is that a footnote must be consistent with what it is a footnote to, but philosophy subsequent to Plato has been anything but consistent with Plato. So it is not plausible to suggest that this philosophical tradition is merely a series of footnotes to the work of Plato. The bulk of Western philosophy has, in fact, been in contradiction to Plato—a rejection of his central tenets.

            Take Aristotle, Plato’s most successful pupil: his philosophy is built around a rejection of Plato’s philosophy. The theory of forms is Plato’s most distinctive contribution, and Aristotle denies it. That’s not a fawning footnote; it’s a critical response, an outright repudiation. And most later philosophers sided with Aristotle—the theory of forms found few adherents in subsequent centuries. In fact, it was Aristotle, not Plato, whose corpus was heavily footnoted by posterity: medieval philosophy in Europe was a series of footnotes to Aristotle. He was loyally, not to say slavishly, followed. Nor did Socrates, Plato’s hero, have much of a following during this period. His skepticism was not admired and imitated, while Aristotle formed the basis of medieval scholasticism. Plato is really quite a subversive philosopher, not one for pious repetition. The forms don’t invite footnotes, but frowns; they can be seen as rivals to the divine. And people don’t want to be told that their ordinary view of the world is a complete illusion: that they live in an epistemic cave.

            Nor was modern philosophy much influenced by Plato, beginning with Descartes. Apart from the doctrine of innate ideas, there is nothing distinctively Platonic about Descartes’ philosophy. The same is true of the other modern philosophers. They broke out in new directions; they hardly seem to have read Plato. How is British empiricism a footnote to Plato? And more recent philosophy is likewise quite anti-Platonic: how on earth can we view the movements of twentieth century philosophy as footnotes to Plato? Where is the theory of forms in positivism, ordinary language philosophy, logic, semantics, materialism, and so on? I am more inclined to say that recent philosophy has been disdainful of Plato (wrongly so, in my opinion). Quine—a writer of laudatory footnotes to Plato? Wittgenstein—a devoted Platonist?

            So Whitehead’s oft-quoted remark is egregiously false. Why then do people keep repeating it? Well, it sounds clever, and it saves you from having to read anything since Plato. But perhaps there is a more charitable reading of it: not that post-Platonic philosophy is slavishly Platonic—a mere accretion of obsequious footnotes to the Great Man—but that European philosophy consists of a series of comments on Plato. That is, it consists of critical responses to Plato. If we take it that way, then talk of footnotes is quite misleading, since (as I said) footnotes have to be consistent with what they are footnotes to. It is quite another thing to say that European philosophy consists of a series of rejections of Plato. That has the look of something with a decent claim to truth, though very broad-brushed (not exactly “the safest general characterization of European philosophy”). On this interpretation, philosophy consists of a series of reactions against Plato.

The trouble here is twofold. First, it ignores the steady stream of Platonists and neo-Platonists that flowed from the original font: it may not have been wide, but it existed. Second, it is really not true that philosophy has consisted of commentary on Plato, either pro or con. Granted, some of it has taken this form, though largely via the influence of Aristotle–not surprisingly, since Plato’s works constitute the founding texts of European philosophy (anyone who gets there first will inevitably form the foundation of the tradition). But it is an exaggeration to suggest that philosophy since Plato has not gone beyond his concerns, intellectual framework, and conceptual apparatus. Aristotle certainly did, adding quite new topics and avenues of inquiry; he is not stuck in a Platonic universe, making small emendations to the master’s system. Nor is Descartes limited by Plato’s outlook, most obviously because of his interest in the new science. And twentieth century philosophy quite clearly expands well beyond the Platonic conception of the subject: it ignores Plato rather than dissents from him (compare the influence of Kant). He is regarded as distinctly passé. Even to take Plato seriously as an adversary, as Aristotle did, is alien to the spirit of recent philosophy.

            It is hard, then, to find any merit in Whitehead’s pronouncement. It is like saying that modern philosophy consists of a series of footnotes to Descartes—at best an exaggeration of the influence of a philosophical giant. So far from philosophy being a series of footnotes to Plato, I would say that subsequent philosophy should have footnoted Plato more. He should have figured more prominently in the discussions of the philosophers who followed him. It was probably the influence of Aristotle that kept Plato from his proper place in the footnotes of European philosophers. Gazing down from Platonic heaven, he is not so much gratified by all the footnotes extolling him as irritated at his lack of citation. After all, he was a very singular philosopher, by no means a popularist.

 


Extended Anatomy


It is customary to distinguish between an organism’s body and its environment. The environment is what exists outside the body. There is a definite boundary between the two. But how solid is this distinction? And does it matter to theoretical biology?  Might there be a better way to carve things up?

            Consider mollusks and their shells. Is the shell part of the body or part of the environment? We could say that the shell exists in the environment of its soft interior organism, or we could reckon it to be part of the organism’s body. What we say seems arbitrary and of little consequence. We might choose to say that the shell is not part of the organism’s soft-tissue body, but is part of its overall body.  [1] Should we say that the shell protects the body without being strictly a part of it? What about hermit crabs that scavenge the shells of sea snails? Those shells perform the same kind of function as the shell of the oyster, but the physical connection is less rigid. This function is like the function of thick hide or the armor plating of some reptiles. Whether the protective outer layer is detachable is beside the point: the function is the same. It seems clear to me that there is no point in insisting that one sort of layer is part of the body and another is not. Thus we can introduce the concept of the extended body. We are familiar with the idea of the extended phenotype—the idea that things outside the body of the organism can be part of the phenotype of the organism, e.g. beaver dams and bird nests. I am suggesting that the body can be extended too, so the body is not as local as may be thought. We might distinguish between the restricted body and the extended body or introduce other distinctions based on other boundaries; the important point is that the notion of the extended body is a well-motivated notion.

            Suppose we have an animal with a thick furry coat that it sheds in the summer and also carries around a large wooden container to sleep in and protect against predators. What belongs to its body? The coat has no feeling in it and is shed every year, so it has less claim to be part of the animal’s body than its fleshy innervated parts; yet it is not false to say this coat is part of the animal’s body. It isn’t part of what might be called the fleshy body, but it is part of a more extended body. Likewise the container is not spatially within either the fleshy body or the furry body, but it is part of a more extended body—what we might call the functional body. The container, like the furry coat, helps the animal survive—detachability is not to the point. There is no mileage in insisting that these things are not really parts of the body, but belong to the environment; there is no principled distinction to be drawn here. We can distinguish different types of body nested within each other, but we can also speak of the combination of all of them; and the latter is what corresponds to the biological notion of a functional unit. A good way to put the point is that the entities in question are (because they function as) organs of the body: the oyster’s fixed shell, the scavenged snail shell, the removable coat, and the portable container. Thus some bodily organs exist on the far side of the skin (not very surprising in view of claws and fingernails). We can call this the “extended body” in order to register the fact that other choices might be made about where the body ends and the environment begins; in reality, these “external” organs are just as much part of the body as kidneys, hearts or brains. There is no sharp theoretical line here.

            How far outwards can we extend? Here things become trickier, partly because we don’t have any good existing examples to work from. So let us invent some hypothetical cases to focus our intuitions. Suppose an organism uses suction pads to pick up bits of the world that it uses in various ways—as weapons, as sun protection, as temperature control, as mate attractors. Suppose one of its tricks, genetically determined, is to scavenge fur covering from dead animals, which it dons in cold weather. I say that all this cargo counts as part of the animal’s extended body not part of its environment (inasmuch as that distinction has much content once the extended body is accepted). Such an organism might even pick up large chunks of the world for its use: trees, boulders, lakes, mountains. This super-organism would have an extended body that includes large tracts of the physical world, normally supposed part of the environment not the body. By the same logic as before its extended body would extend massively into the environment. What if there was an amazing bird that made its nest in a tree but carried the tree around with it when it moved? This bird would be just like the hermit crab that carries its home around with it. Accordingly, we could reckon the tree to the bird’s extended body: fleshy body, feathery body, and arboreal body. But why does the bird need to carry the tree around in order for it to be part of its body? Isn’t a stationary tree also part of a bird’s extended body? It uses the tree for its biological purposes, as it uses its beak, feathers, and nest. For a bird, a tree is an organ of survival—a device of gene propagation.

            What about caves? If a bat lives in a cave, serving the same function as a shell, isn’t the cave also part of the bat’s extended body? The cave has the function of protecting the organism as far as the organism is concerned (though not as far as the cave is concerned): isn’t it arbitrary to introduce a sharp line between the cave and other features of a bat’s existence? The cave is in effect a tool the bat uses in order to survive, as the shell is a tool that the mollusk uses in order to survive. The same is true of burrows, dens, crevices, tunnels, and so on. These are all functional survival instruments, like organs of the body. By the same reasoning beehives and ant nests are part of the extended body of these creatures: they can detach themselves from these body parts, but that proves nothing—the same is true of hermit crabs. The bee and its hive function together to enable the bee to survive, just like its other organs; thus the hive is an organ of the bee’s extended body. Suppose the bee always took a mini hive with it whenever it left the big hive as protection and was never parted from its mini hive: wouldn’t we then naturally reckon it to the bee’s extended body? But the big hive is really no different, just more stationary. So the extended body can extend outward to spatially more remote locations. There is no conceptual problem with this: a deer might remove its antlers and leave them at home while going on a peace mission without thereby rendering them no longer part of its body (it reinserts them when it gets home).

            Can we extend the body even further? What about an amphibian that carries its own water supply around with it? It produces a membrane that traps water around its body. This could be useful in the event of excessive evaporation. Wouldn’t this tank of water be functioning as an organ of the animal’s extended body? What about a true super-organism that could drag stars between galaxies as it hops around the universe, using them as sources of heat? Wouldn’t these stars be part of its extended body? It might actually swallow stars and keep them burning to bring a little warmth to the intergalactic trips. Could we even maintain that the extended body of an organism includes everything about its niche? Air for birds, water for fish, land for terrestrial animals: the extended body merges with nature as a whole. Take humans: we have conquered so much of nature, using it for our own purposes, converting it into tools (clothes, homes, motorways)—isn’t this all the extended human body? Where does the human body stop and the human environment begin? Once we accept that clothes are extensions of our body, where does it end? What belongs to the body is what the organism uses to achieve its biological purposes, and that includes a great deal. There is no clear theoretical distinction between internal organs like the heart and kidneys and external organs like shells, clothes, caves, and so on. From the point of view of biology the old distinction between body and environment is misguided and unnecessary. This means that anatomy can also be extended: the carapace is part of anatomy, but so is the shell, so is the spider web, and so is the cave or burrow. To our visual sense there seems to be a clear distinction between body and environment, because we naturally segment the world; but from a conceptual point of view the distinction is insignificant. The theoretical body is whatever web of things serves to enable the survival of the organism. 
The notion of the organism thus undergoes extension: the organism is the totality of things that are selected for or against (this is the extended phenotype). The web and the spider are co-selected, forming a whole—spider-combined-with-web. This extended body has various parts or organs, including legs, eyes, and web; the restricted body is just another part of the extended body. Spider anatomy must include web anatomy. The proper unit of biology is the extended body. If we spoke of the “corporeal make-up” of an organism, instead of its body, we would say that the corporeal make-up of a spider includes the web. To put it differently, the survival of the genes depends on the whole complex not just on the properties of the restricted body. The “survival machine” inside which the genes sit extends out to every functionally relevant thing. The genes don’t care about the distinction between two types of body, the extended body being all that matters to them.

            Perhaps we don’t tend to think this way intuitively because we think of organisms as resembling ordinary physical objects, which don’t have functions and are not selected for. A rock is a discrete bounded object with a well-defined environment. There are no extended physical objects in the sense intended here; “bodies” in this sense are localized. They don’t have functional shells that aid in the struggle for survival. Physics doesn’t have to deal with the extended body. But biology is the science of functional units subject to natural selection, so its ontology is constituted differently. If we think of biological bodies as on a par with physical bodies, then we will be inclined not to see that the extended body is the right way to carve things up. There is a discrete physical thing that corresponds to the spider’s restricted body, and it is different from the physical thing that is the web; but that doesn’t mean that the biological body is so divided—there is a biological unit that unites these two physical objects. I have been calling this the extended body, but from a theoretical perspective we could just call it “the body” and then define more restricted units such as the fleshy body. The embodiment of species consists of spiders and webs, oysters and shells, bees and hives, bats and caves, people and technology, and so on. Each pair is an operative biological unit. We need to be more holistic about biological ontology, in contrast to the atomism of physics.

 

  [1] Much the same could be said of cocoons: are they part of the body or not? The best thing to say is that the question has no clear answer because we have no definite notion of what counts as the body. We do better to distinguish different types of body corresponding to the same organism: thus the cocoon is not part of the butterfly’s flying body but is part of the butterfly’s metamorphosis body. The most theoretically useful concept of body would include the cocoon as part of the extended body. 
