Why Did Sex Evolve?

Some reproduction is sexual and some is asexual. There is no biological necessity about sexual reproduction, despite its prevalence. How could there be any such necessity, given that the basic principle of evolutionary biology is just that organisms are designed to maximize the presence of their genes in later generations? This says nothing about mixing genes or about a division between the sexes. Why not reproduce purely by cloning? Why isn’t all genesis parthenogenesis? This seems perfectly possible, much simpler, and it is evidently how reproduction began on planet Earth. So why did sex evolve from non-sex?

            The problem is not just that no obvious adaptive rationale for sex suggests itself; there are positive reasons why the existence of sex seems to violate basic principles of evolutionary biology. The most obvious point is what might be called “gene dilution”: instead of passing down a hundred percent of one’s genes, one passes down only fifty percent. In parthenogenesis a hundred percent are inherited, so surely an organism would prefer that figure to a mere fifty percent. Genes that build bodies that pass on only a fraction of themselves will not be as frequent in later populations as genes that pass on a hundred percent of themselves. Sexual reproduction appears to thwart the prime directive governing genes: maximization. Second, in sexual reproduction it is necessary to find a suitable sexual partner, whereas in cloning you can go it alone. That requires expenditure of energy and risk of failure: why take such a hard road when parthenogenesis enables one to stay home alone and get on with the job without going out in search of unreliable mates? Third, from the female’s point of view sexual reproduction looks a lot like altruism: she is providing a service to the male by passing on his genes, using her own energy resources, for which she does not seem adequately recompensed. She seems to be doing the male a favor, but organisms built by selfish genes don’t do each other favors. The role of female looks unacceptably altruistic. We need to show what is in it for her genetic prospects. Why tolerate males at all? Why aren’t all organisms female?

            Sex thus appears biologically paradoxical—inconsistent with even the most basic principles of evolution. We must find a way to explain its origin and persistence that comports with basic biology. Many theories have been proposed, which I won’t discuss here. I intend to propose, as economically as possible, another theory, which respects the game-theoretic selfish gene perspective now prevalent in the field. I call this the “genetic parasite theory”.

            Imagine a population of single-celled eukaryotic organisms (ones with a sheathed nucleus containing DNA) and suppose their mode of reproduction to be asexual. They are all, in effect, female, and reproduce by cell division, transmitting one hundred percent of their genes into each offspring. Now, life being what it is, opportunities for parasitism will arise: some of these cells may adapt to exploit the resources of other cells in the population, without killing them. Let us suppose that they attach themselves to the surface of host cells and siphon off nutrition found in the host. The host will resist such nutritional theft, but it may persist nevertheless, possibly following an arms race between parasite and host. Being a parasite is always a highly attractive option for any evolved creature, having all the benefits of theft over honest toil, and is not easily foiled (it is really just a kind of non-fatal predation). But suppose that some cells are more ambitious: they seek not just food but also reproductive assistance—they want to use other cells to pass on their genes, sparing themselves the expense and trouble. They therefore evolve a pointy organ that can penetrate the surface membrane of other cells and transfer their own DNA into the nucleus of the host cell, where it can enjoy the resources of the host cell in getting itself reproduced. These cells are not nutritional parasites but genetic parasites. They might even be able to replace entirely the DNA of the host cell, by inserting all their own DNA into the nucleus. That would mean that none of the host’s genes are transmitted and all of theirs are. By the laws of gene selection such a host would soon go extinct; its offspring would be copies of the parasitic cell. We would thus expect that counter-measures to the genetic infiltration would evolve, and an arms race would develop. 
The host cell might develop ways of poisoning the parasite or dissolving its genetic residue once inside or blunting its pointy organ. Let us suppose that an equilibrium point is reached in which fifty percent of the parasite’s DNA is permitted inside the host’s nucleus and fifty percent of the host’s DNA remains. Perhaps if the host accepts this amount less damage is done to it by the invasive cell, which will limit its aggressive incursions if the host cooperates to some extent (otherwise it will fight to the death). Still, this is a highly unsatisfactory outcome for the host, because of gene dilution. How might it adapt to this state of affairs? It needs to find a way to get something out of the new arrangement—some sort of genetic payoff.

            Some of the genetic parasites will contain better genes than others. If poor quality genes are mixed with the host’s genes, then the result will be less advantageous than if good quality genes are mixed. Given that the host is losing fifty percent of her genes in the new arrangement, it would clearly be better if she were to have good quality alien genes than poor quality ones, since the whole package will then do better, which is good for her genes. So she begins to favor parasitic genes that are better than others—she exercises quality control with respect to her genetic parasites. She becomes selective in her resistance. She is still not as well off genetically as she was before all this happened, when she reproduced by solitary cloning, since she is still suffering from gene dilution. But there is little she can do about it given the aggressive parasites she has to contend with. It is always better to have no parasites than some, but it can be better to have some parasites rather than others. You want the ones that can do you a favor in return, if that is at all feasible. Thus our host will want to select the best genes she can from her would-be parasites, because these will aid her own genes better than other parasitic genes will. The situation is still unstable, however, because there will be selective pressure to revert to the pre-parasite state of things, where all of her genes get perpetuated. She will want to resist the genetic parasites as much as possible, consistently with the arms race and the costs of resistance. How can things be made more palatable to her?

            Suppose that among her “suitors” a select few have genes with the following property: if they are combined with hers they will actually increase the chances of her own genes surviving into the future, relative to their chances without such combination. Given genetic variation in the population, some cells will be more viable than others—and these are the best ones to “mate” with. In other words, the optimal strategy for an invaded host cell is to select a parasite that will improve her genetic prospects—not just relative to other potential parasites but also relative to her chances without such parasites. If she mates with such a fine specimen, then her genes will actually be better off than if she reproduced all on her own. True, there will be fewer of her genes in the next generation, but by combining with genes superior to her own she will ensure that more of her genes will eventually survive and reproduce. The result of this kind of upgrade combination is a “leg up” in terms of genetic survival, compared to the way things used to be. This, then, is how sexual reproduction evolved as a stable mode of reproduction: genetic parasitism combined with genetic selection that provides a “leg up”. What started as straight parasitism, with the usual arms race and compromise, turned into a kind of symbiosis, when the female improved her genes’ chances by selecting high quality “male” genes. Now there was something in it for her, beyond simply minimizing the bad effects of determined parasites. If she could have won the arms race against the parasites, reverting to her untroubled asexual mode of reproduction, that would have been quite satisfactory; but it is even better to find a way to exploit the parasite by selecting only parasites that serve her own genetic interests better than the old regime. 
And if she could not win the arms race anyway, it is better to turn a fait accompli into an unexpected triumph: the female cells that are better at selecting the male genetic parasites with the best genes will do better than those that are not so good at this. Thus we get competition among males to be selected and competition among females for the best males. 

            Here are a couple of analogies to bring out the logic of the situation. Suppose there were a parasitic worm that could actually affect the DNA of its host: it secretes a chemical into the DNA and changes its composition, producing new genes. Suppose some of these worms produced worse DNA and some produced better DNA. Clearly it is to the advantage of the host organism that it selects the worms that improve its DNA, since these will then have a better chance of being passed on. It might be better to have no such worm, but given that this is unavoidable, natural selection will favor the “good” worms over the “bad”. And maybe there are some worms so good that it is better to have them than to have no worms at all—since they can build organisms greatly superior to any built by the host’s original genetic composition. These super-worms produce simply outstanding children for the host. Similarly, alien DNA (deriving from a “male”) might so improve the female’s gene complex that the necessary genetic dilution is acceptable, according to the genetic calculations. Fifty percent of my genes surviving for a thousand generations is a lot better than one hundred percent surviving for only ten generations. It is like five of my ten children living to be a hundred with the other five dying at birth, compared to all of them living only to the age of three. A worm that re-tooled your genes to make them substantially better at surviving would pay its way in the unforgiving genetic arithmetic. Genes for tolerating such a worm would be more likely to be passed on.
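The "unforgiving genetic arithmetic" can be made explicit. Here is a minimal sketch: the fifty-percent dilution and the thousand-versus-ten generation horizons come from the example above, while the scoring rule (fraction transmitted multiplied by generations of lineage persistence) is my own illustrative proxy, not an established population-genetic measure:

```python
# Crude proxy for genetic payoff: the fraction of one's genes passed
# on per offspring, multiplied by how many generations the lineage
# persists before dying out.
def transmission_score(fraction_passed, generations_surviving):
    return fraction_passed * generations_surviving

# Sexual reproduction: 50% of genes, lineage lasts 1000 generations.
sexual = transmission_score(0.5, 1000)   # 500.0
# Asexual cloning: 100% of genes, lineage lasts only 10 generations.
asexual = transmission_score(1.0, 10)    # 10.0

assert sexual > asexual  # the dilution pays its way
```

On this crude accounting, a worm (or mate) that halves your genetic representation but multiplies your lineage's staying power a hundredfold is a bargain.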

            The second analogy concerns coalitions. If someone comes to you to form a coalition in order to secure some future benefit, you must ask yourself a simple question: am I better off with her or without her? If I can secure the benefit without the coalition, I do well to decline her offer, since I would then have to share the benefit. But if I judge that I cannot achieve the end without her help, then I should join with her in a coalition, since I will get nothing otherwise. Sexual reproduction has the same logic: if my genes go it alone they have a certain probability of surviving to reproductive age (maybe zero), but if I combine them with someone else’s genes (losing fifty percent of them, say), then there is a different probability of their survival. If the latter exceeds the former, then I am rational to choose the latter over the former. A potential mate is like someone offering you a coalition: if you mate with me I assure you the chances of genetic happiness are high, compared to the chances if you mate with someone else or just decide to go it alone. Suppose you happen to have both means of reproduction available to you, sexual and asexual. You have to choose which to employ. You compute the payoffs by multiplying the number of your genes that will get passed on by the probability of their survival (over, say, the next million years). If a hundred percent get passed on by the asexual method, but there is a low probability of their long-term survival, you might opt for the sexual method where fewer get passed on but the probability of survival is much higher—but only if you believe the “donor” genes have this kind of survival power. In the same way, your coalition mate has to be good enough to warrant dividing the spoils with her later, or else you will choose to go it alone and keep all the spoils for yourself. Sex arises from genetic coalitions, possibly preceded by genetic parasitism.
This is better for both parties: there is something in it for the host, while the parasite benefits because its incursions are no longer resisted. Thus we move from resistance to consent: the male benefits but so does the female. This solves the problem we started out with, which was to explain how sexual reproduction could make sense given that asexual reproduction seems so much more sensible biologically. The answer is that it results from a strategy for dealing with genetic parasitism.
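The coalition computation just described—multiply the fraction of genes transmitted by the probability of their long-term survival, and choose whichever product is larger—can be sketched directly. The probabilities below are illustrative placeholders, not empirical figures:

```python
# Expected genetic payoff of a reproductive strategy: the fraction of
# one's genes passed on, weighted by the probability that those genes
# survive over the long run (say, the next million years).
def expected_payoff(fraction_of_genes, survival_probability):
    return fraction_of_genes * survival_probability

# Going it alone: all genes transmitted, but low survival odds.
go_it_alone = expected_payoff(1.0, 0.05)
# The coalition: half the genes, but much better survival odds --
# provided the "donor" genes really have that survival power.
coalition = expected_payoff(0.5, 0.60)

# The rational choice is whichever strategy maximizes the product.
strategy = "sexual" if coalition > go_it_alone else "asexual"
assert strategy == "sexual"
```

Change the survival probabilities and the verdict flips: with a poor donor the product favors cloning, which is exactly why the female must be selective.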

            Let me restate the point in less abstract terms. Why should a female mammal allow her womb to be colonized by a male mammal with a different set of DNA? Why should her energy resources be diverted into generating his child? He should take care of it himself! He is just freeloading off her womb and energy resources. The suggested answer is that she is gambling that the influx of his genes will improve the prospects of her genes. Given that the male would just parasitize her womb anyway, it is better to be selective about mates and try to improve her own chances in the genetic lottery. In the payoff matrix that describes all the options, with their various costs and benefits, sexual reproduction seems the best choice—the best compromise, we might say. Asexual reproduction makes perfect biological sense—it is how a well-meaning Creator might have arranged things—but given the rough and tumble of evolution, with the ever-present threat of unscrupulous parasites, sex has emerged as a kind of game-theoretic solution to an inevitable problem: namely, what to do about those pesky genetic parasites. The parasites are with us always, given the attraction of that line of work; the question is what can be done about them. And the answer in one word is: compromise. Sex is a kind of accommodation to the harsh reality of biological existence—ultimately, access to energy.

            If this explanation is on the right lines, what might we expect to characterize animals and their sexual behavior? One point has already been mentioned: we would expect male competition for females, selectivity from females, and competition among females for outstanding males. Of course, these are all abundant features of animal behavior. Correspondingly, we would expect female sexual anatomy to conform to the general theoretical picture: it should not be too easy to impregnate the female, which would impair her ability to be selective. Her consent to copulation should be required for copulation to be feasible for the male. Rape should be, at least, difficult and potentially hazardous. At the same time, copulation should not be so difficult that a suitable male is just not up to the task: hence difficult but not too difficult. We might also expect sex to be somewhat predatory, given that the male is always essentially exploiting the female, in order to spare himself the effort of gestating offspring; and the female will always be wary and choosy, wondering if this suitor is really “the one”. She is giving up a lot to incubate his progeny, in terms of energy and commitment; so she has to be sure there is something in it for her (i.e. her genes). The genes of the female must move her in such a way that their interests are respected, even though only fifty percent of them will end up in the next generation. The genes of the male have no such concerns, given that he is not called upon to act as incubator; and he can spread those genes around ad libitum. The underlying logic of sex predicts these kinds of phenotypic facts, and they are evident enough in animal behavior. Fundamentally, the male is still the aggressive parasite and the female the reluctant host trying to make the best of a bad job.
Of course, once the sexual machinery is in place and the female has no other reproductive option, she will act with enthusiasm and commitment; but the genesis of sexual reproduction is still written into the underlying structure of the sexual relationship.

            A less obvious consequence concerns sexual selection. The female exercises quality control: she evaluates her potential mates by formulating hypotheses about their genetic fitness, based on what she can observe. She cannot peer directly into the suitor’s genes but must go by outward appearance. Thus she espies the peacock’s lavish tail and infers genetic superiority within. This causes males to improve their appearance so that females will evaluate them highly; but to improve their appearance they have to improve their reality. They have to be bigger and stronger, less infested with parasites, and more able to sustain pointless bits of flamboyance. So there is selective pressure on males to improve. Thus sex leads to sexual selection, which leads to improvement. That is, sex is what powers evolution to produce ever more complex and accomplished animals, via sexual selection. But asexual reproduction has no such consequence: the organism just reproduces itself according to its original design. There is no sexual selection when reproduction is asexual, and hence no motor to drive biological progress. The result is likely to be stasis, uniformity, and dullness. You don’t get complex beautiful animals when the method of reproduction is asexual. It is not, of course, that anyone is aiming for such complex beauty; it is just that the mechanism for producing it does not exist in a world without sexual reproduction. We have sex to thank for kicking evolution into a higher gear; before sex the pressures for change were minimal. It was when females started to be choosy, as a way of making the best out of living in a world of genetic parasites, that sexual selection triggered the kind of evolutionary changes that we see. Without sex Earth might never have got beyond boring bacteria floating in nondescript oceans. We have the parasites among them to thank for initiating the process that led to the impressive variety of animal life that now exists.

            Here is one final point–a kind of theorem: a genetically perfect female has no rationale for engaging in sex in a world in which she is subject to genetic parasitism. If she cannot improve her genetic fitness by merging her genes with those of a male, then she has no motive to permit her body to be used as incubator. For her, all genetic mixing is genetic degradation. She therefore has every reason to fight off all male incursions. But the same is not true of a genetically perfect male: he still has every reason to reproduce sexually, since he can thereby produce more copies of his genes than by solitary cloning. He just has to deposit them in as many willing (or unwilling) female bodies as possible. There is a huge logical asymmetry between being the one with the incubating body and being the one who uses someone else’s body as incubator. That asymmetry is the real basis of sexual reproduction (and indeed ultimately defines the difference between the sexes). Genetic perfection in the female leads naturally to frigidity, but genetic perfection in the male entails no diminution of sexual appetite. In other words, the genes see no point in sexual reproduction for the genetically perfect female, but they see a lot of point in it for the genetically perfect male.  Of course, there is no such thing as genetic perfection in the real biological world–but I am making a purely logical point.
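The asymmetry behind this "theorem" can be put in the same illustrative terms as the earlier payoff calculations. A "genetically perfect" genome is modeled here as one with maximal survival odds; the numbers are stipulations made for the sake of the logical point, not biology:

```python
# A genetically perfect female: cloning transmits 100% of her genes at
# maximal survival odds, so any genetic mixing is pure dilution.
clone = 1.0 * 1.0    # fraction transmitted x survival probability
mated = 0.5 * 1.0    # no male's genes can improve on perfection
assert clone > mated  # she has no genetic reason to consent

# A genetically perfect male: each offspring incubated in someone
# else's body adds another half-share of his genes at negligible cost,
# so copies scale with the number of mates he secures.
def male_copies(offspring_count):
    return 0.5 * offspring_count

assert male_copies(20) > 1.0  # sex beats solitary cloning for him
```

The female's payoff is capped by dilution; the male's grows without bound in the number of incubating bodies. That unbounded term is the logical asymmetry the paragraph describes.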

 

Colin McGinn

 

Therapy and Theory

In section 255 of Philosophical Investigations Wittgenstein remarks: “The philosopher’s treatment of a question is like the treatment of an illness.” In section 593 he says: “A main cause of philosophical disease—a one-sided diet: one nourishes one’s thinking with only one kind of example.” In section 133 we find: “There is not a philosophical method, though there are different methods, like different therapies.” On this slender basis some interpreters have taken Wittgenstein to maintain two things: (a) philosophical perplexity is a kind of illness or disease and (b) philosophy is a therapeutic not a theoretical enterprise. I don’t think the passages cited support such an interpretation, but I am not concerned with Wittgenstein exegesis here. I want to ask whether there is any merit in theses (a) and (b) regardless of whether Wittgenstein advanced them. On the face of it the two theses are implausible: there is no recognized illness, medical or psychiatric, that consists in asking philosophical questions; and it is not true that philosophy consists in applying any recognized kind of therapy—psychotherapy, physiotherapy, or chemotherapy. No one feels ill, physically or mentally, when doing philosophy—if they did, we would be wrong to impose philosophy on people. We are not making people sick by having them study philosophy. Nor are we trying to cure people of any malady when conducting philosophical discussions with them, unless all intellectual discussion is deemed therapeutic just by being persuasion. It is thus hard to take such doctrines literally; and perhaps it is wrong to do so—Wittgenstein must be speaking metaphorically. It is as if philosophy is a disease and as if the correct philosophical method is therapy. But in what way is the metaphor meant to be illuminating—what similarities does it capture?

            The obvious way to proceed with this question is to ask whether any recognized illness has a counterpart in the case of a philosophical opinion or doctrine. Presumably we are speaking of mental not physical illness, so the question is whether any of the usual mental illnesses have a philosophical counterpart: bipolar disorder, depression, obsessive-compulsive disorder, schizophrenia, etc. I will suggest that there is at least one philosophical issue that fits such a description in relation to schizophrenia: there is a similarity between the symptoms of schizophrenia and the way philosophers have reacted to this issue. I mean, perhaps surprisingly, the issue of the semantics of definite descriptions: certain semantic theories of descriptions involve something like a main symptom of schizophrenia, namely ontological delusions. The schizophrenic characteristically believes in the existence of unreal things—he or she has a faulty sense of reality. Often these unreal things feed into paranoid fantasies, but they may also be merely fanciful. The patient confuses imagination with reality: merely imaginary objects are taken to be real objects. Most of us can distinguish what we imagine from what is real, but the deluded patient has lost this ability. The distinction between the real and the imaginary may become blurred or obliterated: there is no robust sense of reality, but a tendency to conflate fantasy and reality. The therapeutic task is to restore a sense of reality, whether by drugs or conversation or psychological exercises. Something is wrong with the patient’s mind, since there are no such objects in reality, and the doctor’s task is to cure the patient of the problem. Essentially the task is one of ontological rehabilitation.

            The relevance of all this to the semantics of descriptions should now be obvious: Meinong’s ontology is the counterpart to the schizophrenic’s delusions. I don’t mean to suggest that I myself find Meinong’s theory as preposterous as a madman’s delusions; I just mean that I can see a point in the comparison. When early Russell accepted Meinong’s theory he felt compelled to believe it through lack of any alternative theory; he didn’t believe it because of the evidence of his senses or because it was plain commonsense. There was something funny about the belief. We can imagine golden mountains, to be sure, but to count them among the real things of the universe strikes most people as pushing it. Russell was disturbed by this belief; it felt wrong to him. Maybe it even felt slightly mad. He would like to be cured of it—but how? The theory of descriptions came to the rescue: it enabled him to dispense with those spooky entities, and hence regain his intellectual sanity. The theory came as a relief, a release. Similarly, a schizophrenic experiencing a sudden cure might feel a sense of relief that those imagined objects are not real after all. They are just figments of the imagination, recognized as such. The ex-schizophrenic reevaluates his ontological commitments, experiencing relief; similarly, the ex-Meinongian sees his old ontology as misguided in the light of Russell’s theory, also experiencing relief. In both cases the disease was ontological excess, aided by an overactive imagination, though the treatment is different in the two cases—drugs in one case, logical analysis in the other (the therapy is a theory in the case of descriptions). But in both cases it is fair to speak of ontological delusion, disorders of thought, a sense that something is amiss, and a welcome release from error. 
There is thus a point to applying the notion of mental illness to philosophy, given that some mental illness is characterized by ontological misfiring—a distorted sense of reality. A schizophrenic who presented with symptoms of belief in golden mountains, unicorns, witches, and ghosts would be true to type.

            Ramsey described Russell’s theory of descriptions as “a paradigm of philosophy”: the suggestion would appear to be that his method generalizes. And it is true that philosophy is marked by ontological theories that seem extravagant, fanciful, and not quite sane. In addition to Meinong we have Plato, Descartes, Berkeley, Frege, early Wittgenstein, Gödel, Lewis, and many others. Philosophers are forever positing stuff at which common sense recoils. That “incredulous stare” that Lewis spoke of is common because of all the incredible ontology bruited about by philosophers. Now again, it is not that I myself think all such ontology can be dismissed out of hand; my point is just that describing belief in it as analogous to a mental disorder is not misguided, given that you reject it as crackers. The suggestion is that there is an analogy between this kind of ontology and the delusions of a schizophrenic. In both cases the human imagination has led to the belief in monsters (again, according to those who reject the entities in question). Philosophical error has a special character, shared by the delusions of the insane: it consists of ontological derangement. For a nominalist or a materialist or a positivist, philosophy is replete with ontological excess, just like schizophrenia. Those in the grip of such an inflated ontology don’t see it; they take themselves to be perfectly sane reasonable people. But to an outsider they seem prone to ontological pathology: they seem to have lost their ontological bearings. Thus one philosopher might exclaim to another, “That’s bonkers!” meaning that the ontology espoused goes beyond the bounds of sanity. And indeed a philosopher who accepts the ontology may feel proud of his courage in being so willing to abandon common sense.
Part of the joy of philosophical madness is the feeling that you have seen what others have not seen—platonic universals, subsistent objects, Cartesian selves, possible worlds, Kantian noumena, the World Spirit. There is something intoxicating about all this—exciting, exhilarating, blood pumping. This is not everyday ontology but philosophical ontology—the kind that promises grand new vistas. Again, I am not saying that all such theories are signs of madness—just that it is natural to describe them that way if you don’t feel their charms. These are the kinds of things that madmen believe in—as opposed to tables and chairs, dogs and cats, atoms and galaxies.

            This does not apply to all of philosophy, obviously. It would not be appropriate to talk about political philosophy this way, or philosophy of biology, or normative ethics. This is metaphysics. So the thesis should really be that (some) metaphysics can be seen as analogous to mental illness, and hence in need of cure and therapy. When we think about metaphysical questions we are tempted by crazy ontological theories: that is, our mental state is analogous to the state of a madman in the grip of wild imaginings. We start to believe in what is merely imaginary. It is as if we have gone mad. The cure is not drugs or other non-discursive treatments; it is argument, theory. But argument and theory are often ineffective once the metaphysical madness has taken hold—as they are with ordinary madness. Someone who persists in subscribing to Meinong’s ontology in the face of Russell’s theory might well be accused of irrationally hanging on to crazy ideas when it has been demonstrated that there is no need to do so. Such intransigence might well be castigated as symptomatic of metaphysical derangement.

            Is there any other area of intellectual inquiry that merits, or might merit, the language of disease and treatment–or is it just philosophy? Do we find mad mythology masquerading as literal fact anywhere else? Are misguided practitioners regarded as suffering from mental illness in other disciplines? Certainly I have never read a text in another field in which the author says: “The so-and-so’s treatment of a question is like the treatment of a disease”. Not history, not geography, not physics, not biology, not psychology. No one else seems to think that his subject’s questions arise from mental illness. It is true that physics and psychology can lead to outbreaks of ontological extravagance, but that is surely because they have philosophical content. It is not madness to believe in electrons or the unconscious, as it might be thought madness to believe in golden mountains or immaterial substances. So philosophy (metaphysics) does seem unique in attracting such descriptions, and hence approximates to mental illness in a way that other disciplines do not.

Whether this is a bad thing depends on your view of mental illness: it is assumed to be a bad thing by the orthodox, but there have been others who have found in mental illness a higher form of sanity. Maybe Plato was mentally ill to believe in his Forms, but his views might still be truer than saner people’s. Better to be mad and right than sane and wrong, it may be said. We might do well to encourage metaphysical madness in the young, knowing that it does little or no real harm (unlike actual madness): it expands the mind. A bit of madness in philosophy (and elsewhere) might be a healthy thing.

 

Colin McGinn       

The Language of Emotion

Proponents of the language of thought typically don’t have much to say about emotion. We are said to deploy an internal language when we think, but it is not suggested that we do so when we feel. Internal speech is characteristic of thought but not of emotion—we don’t “feel in words”. And the same might be said of desire: the idea of a “language of desire” has not met with enthusiastic acceptance (or even formulation). Language has to do with the cognitive part of the mind not the affective. Perhaps theorists think that the affective part of the mind is what we have in common with non-linguistic animals and so is not an appropriate object for linguistic explanation; only thought calls for linguistic representation. Emotion and desire are like bodily sensations: no one thinks that pain and pleasure should be analyzed linguistically—to be in pain is not to say to oneself “That hurts!” and the taste of pineapple is not an inward utterance of “Lo, pineapple”. Emotion just doesn’t have this kind of intellectual sophistication: it has no grammar or logic, no internal discursive structure. Emotions, like sensations, don’t entail each other or have subject-predicate structure. So it may be supposed.

            But is that true? Take fear: we can fear that p as well as fearing x. For example, yesterday I feared that I would collide with a car that pulled out in front of me. Fear has propositional content: at the moment I slammed on the brakes I was afraid that I was about to have an accident. This is the same proposition that I believed to be true—in fact, I feared its truth because I believed it to be true. I thought that a collision was imminent and so I feared that a collision was imminent. We can recognize this connection between mental states without committing ourselves to a cognitive theory of emotion: it is simply a fact about our psychological economy. Of course, if emotions are thoughts (or essentially incorporate thoughts), then we can derive a language of emotion directly from a language of thought, but even without that assumption it is evident that emotions are (or can be) propositional. If emotions of fear have propositional content, then they have logical form, in virtue of the propositional object of the emotion. And so they have logical entailments—the content of my fear entailed, for example, that someone was about to have an accident. But then the case for a language of emotion is exactly as strong as the case for a language of thought, insofar as the latter case rests on the propositional content of thoughts. One of the main arguments for LOT is the productivity of thought, but emotions are also productive in this sense, since they invoke conceptually structured propositions—so we have the same argument for LOE. I can fear that I will not be selected for clemency just as I can believe that I will not be selected for clemency, and I can fear that I will be captured by the enemy and then tortured just as I can believe that conjunctive proposition. I can fear the same propositions that I can believe, including those built by logical operations like negation and conjunction.
Thus emotions are logically structured, combinatorial, finitely based, and potentially infinite—just like beliefs and thoughts. If there is a LOT, then there must be a LOE.

            It might be wondered whether emotion verbs accept every complement clause that cognitive verbs accept. Can we fear everything we can think? Can we feel sad about every state of affairs that we can believe to obtain? Can we be disgusted by everything to which we can assent? For example, I can believe that necessarily 2 + 2 = 4, but can I fear that necessarily 2 + 2 = 4? Can I feel sad that gravity obeys an inverse square law? Can I be disgusted that Hesperus is Phosphorus or that modus ponens is a valid rule of inference? With sufficient ingenuity we could probably contrive situations in which each of these peculiar emotions could be felt, though they are certainly not part of the normal run of things. But we don’t need to establish full correspondence between thought and emotion in order to recognize that emotions have an extraordinary variety of complex propositional objects, and that they therefore qualify for linguistic analysis given that thoughts do. Just as we think in a language, so we feel in a language—the content of our emotions has a linguistic underpinning. Other animals may not, just as other animals may think without deploying an internal language (possibly in images). But human emotions, like human thoughts, have a degree of conceptual sophistication that invites the idea of a LOE. Indeed, if we call the human LOT “Mentalese”, we can say that the LOE is also Mentalese: we feel in the same language in which we think. Why would we (or our genes) deploy two distinct languages for these two tasks? And if the propositional character of emotions derives from their cognitive component, we would expect that Mentalese would simply carry over to LOE. Thought and emotion would then share a common underlying symbolic system, with the same grammar and lexicon. [1]

            The picture that results regards Mentalese as an internal language suitable for deployment in both thought and emotion (as well as desire, since we have complex logically related desires too). We might take it to be neutral between cognitive and affective uses, not privileging thought over emotion. It is not that we first have a language specifically of thought and then co-opt it to serve our emotions; rather, we have a neutral language that can be deployed for both thought and emotion. The Mentalese language faculty is a psychological module ready to be exploited by different parts of the mind—a general machine that can be used for different purposes. It doesn’t have thought built into it any more than it has emotion built into it; it’s more abstract than that. It is a language of mind generally (LOM). Thus LOM can be employed as an LOT or as an LOE. Some theorists might wish to go even further in divorcing LOM from thought specifically by suggesting that emotion and desire are primary in the mind. These theorists might maintain that desire and emotion precede thought in evolution, and that they require a symbolic medium in order to achieve their purposes optimally. Thus there was an LOE before there was an LOT: LOT is a later adaptation grounded in LOE. Maybe LOE evolved in fish long before anything deserving the name of thought arrived; then thought came along and recruited LOE for its purposes. There is no need to privilege the cognitive just because one adopts an internal language theory of mental operations. To put it differently, a computational model of mind is not committed to taking thought to be primary in the mind. Conceptually structured emotions (or desires) might be more basic than conceptually structured thoughts. Emotions are clearly important biologically, as well as being ancient, and having a sophisticated structure clearly aids their effectiveness. The affective is discursive.

            When it was believed that thoughts consist of mental images the idea of a language of thought held little appeal; similarly for the theory that thoughts are behavioral dispositions. It took appreciation of the propositional nature of thoughts for LOT to gain traction—theorists had to accept that a thought is always a thought that p. Likewise, if we think of emotions as bodily sensations (as with many traditional theories), or as dispositions to behavior, then we will not appreciate their propositional nature. But once we accept that fear and sadness are fear and sadness that p, we are prepared to accept that emotions are underwritten by an internal symbolic system. The important move in both cases is accepting the correct logical analysis of ascriptions of thought and emotion. [2] Once philosophers had grasped how reports of thought worked they were ready to take the plunge into LOT, but they don’t seem to have appreciated that emotion reports are much the same, so that a dip into LOE might be indicated too.

 

  [1] There are also such attitudes as hope and trust: these are clearly propositional and close to belief and thought. If thought comes with an internal language, surely hope and trust do. But these attitudes have an emotional dimension, so we are already close to a language of emotion. In fact, the whole distinction between thought and emotion is quite artificial, so we should expect a general theory that subsumes both. 

  [2] I mean such things as referential opacity, the de re/de dicto distinction, the connection between entailment and logical form, the notions of sense and reference, semantic externalism, and so on. These are the things that encouraged philosophers to postulate a language underlying thought (Fodor needed Frege and Quine), but the same points apply to emotion and desire. The mind is thoroughly propositional, a subject of that-clauses.


Symmetry and the Mind


Symmetry is a pervasive feature of nature. We find it in atoms, molecules, crystals, planets and stars, as well as in the entire biological world, and also in human artifacts. Some things show no symmetry, such as rocks or puddles of water or sponges; but these are exceptions to the rule. Moreover, symmetry presents itself as the opposite of chance: symmetry suggests organization and order, not randomness and chaos. It also suggests harmony and beauty. The biological world appears to be run on symmetry, of complex and impressive kinds. Some organisms have radial symmetry, such as jellyfish, but the majority of more advanced organisms exemplify bilateral symmetry, which gives rise to a left-right structure. We have body plans based on a central axis of symmetry and duplication of body parts around this axis: two legs, two eyes, two ears, two arms, and so on. These appear as mirror images of each other, as counterparts, copies. An organism could be made of one of each and still be functional, but the duplication builds in redundancy, as well as cooperation between like organs. We can imagine planets with animal life not organized around symmetry, but if life on Earth is anything to go by, symmetry is fundamental to evolved life. Symmetry must be adaptive, despite what can appear to be pointless duplication (why don’t we find more parsimonious creatures that make do with one eye, one ear, one hand, and so on?).

            The abstract essence of biological symmetry is the cooperation of duplicate parts: each side of the body works together with the opposite side, to give more than one side alone can give. Two eyes are better than one, etc. This basic morphological principle extends to the brain, which itself is organized around a central axis into two similar hemispheres, as if animals have two brains that cooperate. The genes have built bodies that exemplify symmetry as a matter of biological law; and what the genes do they do for a reason. The genes themselves are symmetrical, with that symmetrical spiral of DNA. The idea of a duplication of parts clustered around a central axis is evidently deep and functional. Human art celebrates it; it is written into the daily experience of our bodies and other things; and it is difficult to conceive of a workable world without it (though it is frequently less than perfect or exact). The human animal, in particular, is a symmetrical being, and thus conforms to a basic principle of nature.

            Yet there is a striking and strange anomaly: the mind does not appear to exemplify symmetry. The mind is a biological entity, evolved by natural selection, composed of parts (“mental organs”), housed in a bilaterally symmetrical body and brain, and yet it exhibits no apparent symmetry. Why? Why should the mind be an exception to the general rule? Why does it not contain separate duplicate parts that cooperate together? If the architecture of the body and brain involves symmetry, why does the architecture of the mind not? If symmetry is adaptive in the body, then why is it not adaptive in the mind? Why is the mind plan so different, structurally, from the body plan? The mind does not appear to be organized into parts that are counterparts of other parts, with built-in redundancy, the whole working harmoniously together: it seems monistic, singular, undivided. We don’t, say, have a left-hand belief that snow is white and a right-hand belief that snow is white, or a left-hand desire for food and a right-hand desire for food. Nor do we have two wills or two selves or two faculties of reason or two language instincts or two moral sensibilities. We have only one of each, as if we had only one eye, one foot, one lung, etc. There is no psychological bilateral symmetry to match the symmetrical architecture of the body and brain. That seems puzzling from a design perspective. Nor do other animals differ: they too possess singular minds devoid of symmetry.

            A number of responses to the puzzle are possible. One would be that symmetry is a geometrical concept and minds are not geometrical objects—so they cannot in principle exhibit symmetry (or asymmetry). As Descartes would say, the mind is not an extended substance, and bilateral symmetry requires spatial extension—parts in space that are duplicates of each other. But this response assumes that mind and body are radically separate; anyone who is less of a dualist will wonder why the spatially constituted mind does not exhibit symmetry (the brain does). Also, we can define analogue notions of geometric symmetry, as we do elsewhere when we apply the concept of symmetry (e.g. in logic to describe certain relations). The abstract notion of symmetry is just that of duplicate parts that work together, and the mind could in principle satisfy that more abstract notion. The mind has parts (beliefs, desires, and emotions, as well as faculties and modules), and so it is logically possible for it to duplicate those parts into a symmetrical architecture. But in fact this logical option is not adopted; it is pointedly rejected. The mind keeps itself stubbornly singular.

            A second response is to question the appearances: maybe the mind exhibits symmetry in its underlying structure. For example, when we use our two eyes we see a single scene, yet the two eyes each send in their own separate images, which are then synthesized by the brain. The visual percept has a surface unity, it may be said, but it springs from a pair of symmetrical images, both on the retina and further into the nervous system. You can use one eye or the other to see, but the eyes work together to give a better image than each alone; nevertheless, the underlying process involves duplication and symmetry. And the same might be said of the ears or nostrils. Or again, more adventurously, it might be suggested that the mind is divided into two symmetrical halves, the conscious and the unconscious. Maybe the unconscious is the analogue of our left-hand side and the conscious of our right-hand side. Or maybe our moral faculty is the combination of an instinctive emotional system and a rational reflective system. The trouble with these suggestions is that the alleged symmetries are metaphorical and unpersuasive. It is true that the eyes provide a plausible analogue of symmetry, but what is striking is that the two images merge in the final percept—whereas the eyes themselves never merge. Visual consciousness does not consist of two separable but duplicate parts, as the body does. A real analogue would involve a creature with two visual images in its consciousness, yoked together somehow. This we never find. As to the other suggestions, it is farfetched to compare the conscious and the unconscious with the left and right hands: these are just two mental systems, not mirror images of each other working together. After all, the conscious and the unconscious are usually supposed to contain different contents, not copies of the same content. 
Crucially, what is lacking is some notion of pairs of beliefs or desires that act like parts of a symmetrically organized system—as it might be, left-hand beliefs and right-hand beliefs. Do we ever find ourselves going about our cognitive business using only our left-hand belief that the door is open and not our right-hand belief that the door is open? Hardly. It is not that each hemisphere stores its own copy of the same belief, which may be called upon to work together or separately, as with the hands or eyes. We have just the one belief or desire or emotion or intention or mental image. Our consciousness is not divided into two symmetrical halves like our body.

            We can perhaps imagine a symmetrical psychological architecture: we postulate two selves, each fully formed and occupying one of the two hemispheres (think of split-brain patients). Thus in my right hemisphere I have Self1 that has a full set of beliefs, desires, and so on, and a duplicate Self2 in my left hemisphere. These two selves might operate independently or in unison, just like my hands. That seems like a conceivable way to rig up an organism, and it has the advantage that if one self gets damaged or destroyed the other self is still there to carry on the good work. So it is perhaps surprising that such an organism has not evolved (as far as we know). In any case, that is not how things actually are, at least to all appearances: we seem to have a single self, with a single set of beliefs and desires, and with no symmetrical duplicate. In addition, each of these imagined selves would fail to exhibit any internal symmetry; the two would be essentially just separate selves, not coordinated parts of a single functional entity. What is central to bodily bilateral symmetry is duplication plus unification: not two bodies with only one hand and eye each, but a single body with two hands and eyes. What is hard to imagine is a self (a psychology) that is both unified and bilaterally symmetrical. So there is something conceptual about the question, not merely accidental—we don’t just happen to have non-symmetric minds.

            If we don’t have symmetric minds, do we perhaps have asymmetric minds? For this to be the case, we would need to be able to distinguish sides of the mind that stand in asymmetric relations—analogous perhaps to the claws of those crabs that have one hugely enlarged claw. Nothing immediately suggests itself, though one might draw attention to aspects of the mind that work against other aspects—as it might be, emotion versus reason. The idea of disharmony in the mind is not unheard of, or the idea of an unbalanced mind, or an ugly mind. But these are farfetched metaphors, rather than strict analogues: what we would really need is some notion of parallel systems that exhibit marked architectural diversity, such as two language faculties with different grammars (like two limbs with divergent anatomies). Granted the mind has separate parts, but the concepts of symmetry and asymmetry do not get any purchase on its overall structure. The mind is not like a symmetrical vertebrate body, but it is not like an asymmetrical sponge either.

            So many things in the natural world are symmetrical—from spiders’ webs and birds’ nests, to leaves and flowers, to reptiles and mammals, to worms and wombs—and yet the mind itself, in its many incarnations, is not one of them. Except where it engages with the senses in their bilateral symmetry, its organization is, we might say, anti-symmetric: it cancels symmetry. The mind receives inputs from the symmetrical body and sends outputs to the symmetrical body, but it itself exhibits no symmetry, no internal duplication of conjoined parts. It is one-track and single-minded, without even a central axis. It is like a sprawling metropolis with many add-ons and thoroughfares, but no overall symmetrical plan. It is, in one sense, disorganized. This is a puzzle because the mind is a biological product and yet it fails to possess one of the most salient marks of the biological.

 


Science and Philosophy


What does the value of science consist in? There are two possible answers: its usefulness and its interest. Nothing needs to be said about the first, but the second raises the question of what kind of interest science has. Let me distinguish specialist interest and general interest—the interests of professional scientists themselves and the interests of what we may as well call the general public. Funding and the like depend on the general interest of science (putting usefulness aside as too obvious to harp on). So what does the general interest of science consist in? When the average intelligent person takes an interest in science, what explains his or her interest? 

            We can best answer this question by considering the specific kinds of science that people tend to find most interesting. The answer is: astronomy, physics, biology, and psychology. Thus people have been very interested in astronomical findings about our place in the physical universe (the heliocentric theory, the big bang, the distribution of galaxies, etc.); physical discoveries about the fine structure of matter (the atomic theory), the general nature of motion (Newton, Einstein), space and time, etc.; biological discoveries about the origin of species (Darwin), the nature of inheritance (DNA), our kinship with other animals, etc.; and discoveries about the roots of our mental life, the existence of an unconscious (Freud), how our minds relate to our brains, etc. That is, people are interested in findings about the physical nature of the universe (micro and macro), the origins of life (especially human life), and the underlying nature of the mind (especially the human mind). They are not generally interested in the fine details; what intrigues them are the large-scale conclusions.

            We can imagine a species without such interests. Apart from us, all terrestrial species lack any interest in science, mainly for reasons of intellectual limitation; and even if they could be made to understand the questions, there is no guarantee they would find much interest in the answers (though they might). On the other hand, extraterrestrials might naturally and innately possess scientific knowledge and hence take it for granted, finding it nothing to write home about. I think the reason we humans find such things so interesting is that they impinge on our natural conception of ourselves and of the world around us. We experience reality in certain ways, and these ways are limited and partial, also sometimes distorted. We see the stars in a certain way; we sense matter with our various senses; we observe similarities and differences between ourselves and other animals; we are aware of our own mind and its peculiarities. Scientific knowledge expands and sometimes questions our ordinary conception of things. Science is interesting to us because it bears on the big questions raised by our ordinary consciousness of the world: we want to know whether that ordinary consciousness is accurate, complete, and objectively correct.

            But doesn’t that sound a lot like philosophy? Isn’t that what philosophy is about? Philosophy is about the big questions: the universe, life, mind, and everything—especially the status of our ordinary conceptions of things. It is because we have such general philosophical interests that we find the contributions of science so interesting. We find that science helps us with our philosophical concerns, as these concerns spontaneously arise in us. If we had no such concerns, science would be of interest only to specialists and those devoted to its usefulness. We are interested in science in the way we are because we are philosophers—because we ask big questions about reality and dwell on our own natural awareness of the world. Are we the center of the universe? It feels as if we are—but astronomy teaches us otherwise. Is matter as solid and continuous as it looks? Atomic physics teaches us otherwise. Was life created by a superior form of intelligence? That seems likely on the surface, but it turns out that life arose by a succession of mindless accidents. Is the mind limited to what we are conscious of? Psychology teaches us otherwise, since an unconscious needs to be postulated. These are broad philosophical questions, and science is interesting precisely because it helps us answer them. It is the implications of science for these very general questions that seize the attention of the general public—not so much the nuts and bolts of the scientific theories. We are moved by the significance of science for the big philosophical questions.

In particular, we perceive how science bears on our ordinary consciousness of the world—our perception and our common sense. We are thinking of this ordinary consciousness whenever we are gripped by a piece of science: we are comparing the way we naturally experience things and the way science tells us that they are. So we are reflecting on our own perspective on reality and assessing it in the light of scientific findings. We are thinking such thoughts as, “The world is actually very different from the way it strikes me”. That is a philosophical thought—self-reflective and self-critical. The general human interest in science depends on the availability of this kind of thought. Science is interesting to people because philosophy is (science used to be called “natural philosophy”). Thus science and philosophy are intertwined when it comes to human interest, even if individual scientists and philosophers have little to do with each other.

            The interest of philosophy, then, does not depend on its being an approximation to science; we are not interested in philosophy because it is on the way to becoming science. Rather, the interest of science depends on the existence of a prior philosophical interest. Humankind has been asking philosophical questions for a long time, and answering them in different ways, notably by appeal to religion, tradition, and revelation; but it has turned out that science is the best way to answer these questions. The questions themselves pre-date science, or even any conception of science. We would have them even if science had never been invented. Science would not have the same interest for us if it were detached from these ancient questions—they are what give science the human interest that it has. If science were valuable only because of its practical uses, that would be a very different state of affairs: but actually it engages with our deepest questions about the world and our place in it. The value of science thus partly derives from its aspiration to meet our philosophical needs. Bluntly, science is interesting (to non-specialists) only because philosophy is.

            None of this is to say that science can answer our deepest philosophical questions; it is only to say that people find science interesting because of the hope that it may. When people are fascinated by the big bang theory because it appears to explain where the universe came from, it may be that they have other origin questions in mind—such as how anything at all came to exist. The big bang theory does not in fact answer those questions (what existed before the big bang?). My point is that the intellectual value of science (as opposed to its practical value) for most people depends on the belief that the big philosophical questions can be answered by science. Some of these big questions can be answered by science (whether we are at the center of the universe, how animal life evolved), but some cannot (whether we have free will, whether we can really know anything). In either case the general public interest in science reflects a thirst for such answers. [1]

 

  [1] If people were to stop asking the big questions, abandoning philosophy altogether, perhaps at the urging of scientists, the result would be that science would lose its general interest, becoming merely a subject for specialists and practical application. Scientists need philosophers in order to maintain their appeal. (This would all be clearer if we didn’t make such a sharp distinction these days between science and philosophy.)


Proving the Self

 


Is it possible to prove that the self exists? First let’s consider the existence of material objects, particularly the relationship between property instantiation and object-hood. An ordinary material object, such as a table or a bee, does not instantiate a single property but a collection of properties—a cluster of properties. Search the world high and low and you will not find a material object that confines its attentions to a single property; there is always co-instantiation as well as instantiation. There is always a type of fullness to the cast of properties that any material object instantiates. Without that the alleged object is too thin to count as a genuine object; it is nothing but a property attached to a location. If the property were to cease to be instantiated, the object would cease to exist, so that identity through change is ruled out. That is not our notion of a material object: an object can lose some of its properties and still exist, simply because it has other properties to anchor its existence. Co-instantiation of properties is constitutive of the existence of an object: an object is a unification of properties, a co-presence, an integrated cluster. Nothing is a genuine object if it restricts itself to a single property: a flying object, say, is never just a flying object; it will also have such properties as weight, size, shape, color, wings, and bristles. If someone were to claim that there could be a flying object that has no further properties, we would doubt that the word “object” was the appropriate word. The object-hood of a flying thing (such as a bee) requires the presence of attributes other than flying (and whatever attributes logically follow from this). Always and everywhere material objects, properly so-called, come attached to a cluster of properties, typically quite rich; they are not merely instances of a single property (trait, attribute). We could say they have a multi-dimensional nature.

            Now consider the Cogito: “I think, therefore I am”. We are being asked to accept that if something has a certain trait, viz. thinking, then it exists as a thinking thing. We can move from the instantiation of a property to the existence of a thing that does the instantiating: if there is the property of thinking, then there is a thing that thinks. Or again, if there is the activity of thinking, there is a thing (object, substance) in which the activity occurs. This move has always been found problematic: why does the process of thinking entail a thing that underlies the process? [1] Mere thinking is too exiguous a basis to ground the ontology of selves. The point could be put as follows: why should we interpret “I think” as entailing predication of an object rather than being merely a feature-placing sentence? If the sentence “It’s raining” is true, then a certain feature is present at a certain place, but it doesn’t follow that there is a raining thing: there is no object that instantiates raining, just a certain meteorological activity occurring at a certain location. Similarly, it may be said, the Cogito is only entitled to the claim that there is thinking going on at a certain location (“It’s thinking here”) not the claim that there is a thinking thing, i.e. a self. Feature placing does not entail object-predication. Activity does not entail an agent. Process does not entail a substance. You can’t derive a something from a doing. Thus the traditional Cogito is invalid: the premise is too weak to support the conclusion. It takes an ontological leap across a logical chasm.

            I suggest that the problem here stems from the attempt to derive the existence of a thing from the instantiation of a single property (trait, activity, dimension). This is simply not rich enough to ground the notion of a thing; as with material objects, we need to add a range of properties co-instantiated with the given property. If that were not possible—if all we did was think—then indeed we would not be entitled to talk of a thinking thing; feature placing would be the preferred interpretation of the “I think” of the Cogito. The inference “It rains, therefore there is a thing that rains” is clearly not valid; and the skeptic would be well within his rights to insist that the “I think” of the Cogito should be regarded similarly. But actually we have resources beyond the usual thin interpretation of the Cogito: for not only do I think, I also feel, sense, imagine, and will—among other things. That is, I instantiate many psychological traits that are not entailed by thinking as such: I am a bundle of traits not just a single trait. But this kind of clustering is exactly what grounds talk of thing-hood. Thus I exist as a thing because I manifest a cluster of mental traits. Co-instantiation is what justifies the move to thing-hood, not instantiation singly considered.

I propose, then, what may be called the “expanded Cogito”: “I think and I feel and I sense and I imagine and I will, therefore I am (a conscious thing)”. My status as a conscious thing, not merely a congeries of free-floating mental activities, turns on the fact that I (a single entity) instantiate all of them. The mental properties are instantiated by the same thing, and that thing is precisely a thing. What is doing the work here is the clustering not the constituents of the cluster: you can’t derive the substantial self from the properties in the cluster considered singly, but you can derive that self from the fact that they form a cluster. Talk of things is precisely a way to register such clustering; without it all we have is the distribution of features at locations. The Cogito needs beefing up from the latter to the former, and we have the resources with which to accomplish that. I am a thing because I am many things. The traditional Cogito imputes too little structure to psychological self-attribution, regarding it as essentially one-dimensional (“thinking”), and as a result fails to sustain the assertion of thing-hood (and hence of the self). The cure is to recognize that multiple simultaneous attributions are true, and hence we have the ontological basis necessary for a claim of thing-hood—just as in the case of material objects.

            And it isn’t merely that multiple attributions are true; we also know them to be true. Not only do I know that I think and know that I feel (etc.); I also know that I am all these things simultaneously. Thus I know the psychological fact that grounds my claim to thing-hood: I am presented to myself as a combination, a clustering, a bundle. This means that I can use that fact in proving the existence of my self: I know that I think and I know that I feel, but I also know that I think and feel—I know that I have many psychological traits at the same time. So I know what is necessary to infer the conclusion of the Cogito. Perhaps this is why we tend to go along with the Cogito on first hearing without analyzing it too closely: we have the resources to make it come out valid, so we don’t notice that the usual formulation leaves it vulnerable. I know myself to be a unitary thing because I am aware that I combine a number of separate psychological traits—that I am a center of instantiation. I am aware of myself as a centered cluster of attributes and that’s why I assent to the thesis that I am a thing—not simply because I am aware of my thoughts in isolation from other psychological traits. Thus we read the traditional Cogito in the light of the expanded Cogito. In any case, I know of my simultaneous instantiation of distinct psychological properties as well as I know of my instantiation of those distinct properties, so I know what is necessary in order to provide the desired proof.

            This account of the epistemology of the self differs from other accounts. It differs from the traditional Cartesian account because it locates the operative premise in the clustering not in the items clustered, but it agrees that knowledge of the existence of the self is inferential: there is a definite move from “these properties are clustered together” to “there is a thing that underlies the cluster”. So the account does not postulate direct knowledge of the existence of the self: that is, we can provide a discursive proof of the self, as it is understood in the Cogito. Nor does the account ground knowledge of the existence of the self on some kind of immediate impression of the self; the only impressions here are of specific mental traits (and possibly of the fact that they come in clusters). The correct way to reconstruct the epistemology of the self in the Cartesian style is via the expanded Cogito, and that has the form of an inference. My knowledge that I exist is therefore not like my knowledge that I think: my knowledge of the latter admits of no proof, being immediate, while my knowledge of the former does admit of discursive proof—even if I never explicitly go through such a proof in my daily life. This seems intuitively correct: the self really isn’t a given in the way the contents of the mind are. There are intelligible forms of skepticism about the self, and thus we need a proof to combat them. The traditional Cogito came close but foundered on the objection from thinness; the expanded Cogito gets over that objection. We can thus reasonably argue that we exist.

 

Colin McGinn  

           

  [1] This is usually called “the Lichtenberg objection”: how do we move from events of thinking to a substantial thing that thinks? But the objection had already been made by Gassendi and others. The question may be put as follows: how do we justify the Cogito without presupposing a scholastic metaphysics of substance and accident?


Principles of Radical Interpretation

                                   

 


How should we set about interpreting an alien language and the people who speak it? Specifically, how should we ascribe beliefs to others? One idea is that we should use a principle of charity: ascribe to others the beliefs that we have, so that interpreter and interpreted converge in their belief systems. This principle will apply both to logical beliefs and beliefs about ordinary matters of fact. Much has been written about the principle, and I won’t repeat any of that here. Instead, I will give a simple but instructive counterexample. Suppose I encounter an alien tribe in the deepest jungle and present members of the tribe with a cell phone, and suppose they have never seen a cell phone or any other electronic device. They will clearly not form the belief that the object in front of them is a cell phone—they have no such concept. They have no beliefs about cell phones or any similar technology. It would be absurd to appeal to the principle of charity in ascribing such a belief to our natives (perhaps what they believe is that the thing they are holding is a piece of a star fallen from the sky). Sometimes it is suggested that the principle of charity should be amended to a principle of humanity, which prescribes that we should ascribe the beliefs we would form if we were in the epistemological position of the native. Thus if the native is presented with a visual illusion, not known by them to be such, we should ascribe the false belief appropriate to that illusion, not the true belief that we have in knowing about the illusion. But this won’t help with the cell phone case, because presenting the native with a fake cell phone will not warrant assigning to them the belief that the object is a cell phone (as opposed to a fake cell phone); for again, they have no such concept.

            What if they are presented with a familiar object, say a rabbit—should we say they believe it is a rabbit, thus sharing our belief? Not necessarily, since they may have beliefs about the animal in front of them that exclude them from having the zoological concept rabbit: they may believe of rabbits that they are gods not animals. They do not share our beliefs about rabbits. The problem is that they believe that this thing is a god, not an animal, and hence they don’t apply the concept rabbit to the thing in question. Maybe they have the concept rabbit and apply it to certain kinds of rabbit only; when it comes to white rabbits, say, they withhold the concept, since they take these creatures to be gods, not animals. So we can’t ascribe a belief to them based on what we believe. We can’t use ourselves as the yardstick of their beliefs, since they differ radically from us about what the world contains. What if they are convinced skeptics who never believe that anything they experience is real? They don’t even share our belief that a square object is in front of them, let alone our general beliefs about nature, the weather, world history, the good, and the beautiful. Charity will get us nowhere with these independent thinkers, just as it will get them nowhere with us.

            So how can we interpret them? The difficulty applies even to logical beliefs: what if they subscribe to a deviant logic? We obviously can’t interpret them as holding to our logic—they may be convinced intuitionists or even adherents of para-consistent logic. We surely don’t want to say that no one can believe in a deviant logic, or that a deviant logician cannot be interpreted as holding the logical beliefs they in fact hold. We need a way to ascribe divergent logical beliefs, as we do divergent factual beliefs. No theory of interpretation that requires belief convergence between the interpreter and the interpreted can be correct. We need another principle entirely. I propose what I shall call “the principle of culturality”, for want of a better label. The general idea is that we need to take account of the material and cognitive culture of the people we are trying to interpret. Thus there are no cell phones in the material culture of our earlier tribe and their religious culture decrees rabbits to be gods. In order to interpret a people we need to look at their technology, life-style, interpersonal relations, and so on: do they have agriculture, maps, idols, money, advanced tools, animal sacrifice, and so on? This is all observable and accessible before we have made any belief attributions. We must also take note of their ecological niche, the acuity of their sense organs, possibly their genetic make-up, as well as their general level of sanity (interpreting a schizophrenic will require special methods). We will then ascribe to them the beliefs that all these factors suggest—and these beliefs may diverge dramatically from ours. They may be constantly hallucinating on drugs, possessed of only the most rudimentary tools, incurably superstitious, logical nihilists, and completely un-self-critical. In other words, we need to do serious empirical anthropology. Merely recording the stimuli that trigger their assent behavior isn’t going to cut it.

            Someone might object that the principle of culturality is not a rule like the principle of charity. That is quite true: it just tells us to take everything observable into account, particularly the totality of the people’s culture. The principle of charity, by contrast, can be applied without any knowledge of the particularities of the natives’ culture—we simply ascribe what we ourselves believe, knowing already exactly what that is. This will work for any subject of interpretation, no matter their culture. But that is totally unrealistic, since people can differ enormously in their beliefs: beliefs are inherently protean in nature. Instead, we should adopt a far more context-sensitive and multi-dimensional approach, recognizing the extreme flexibility of belief: people can believe just about anything if they put their mind to it. We cannot sidestep the complexity of interpretation by adopting a simple rule like the principle of charity; and we may have to accept considerable uncertainty as to what the other believes. There may be para-consistent idealist creationists out there who don’t even believe in rabbits and square objects. They are no doubt mistaken, but they are not conceptually impossible.

            There are also children, animals, and Neanderthals—all of whom need interpretation. The principle of charity will not be of much help with them, since they will not necessarily agree with normal human adults in the beliefs they form. What about the denizens of Plato’s cave or intelligent underground worms? The case should be compared with ascertaining the chemical composition of a distant planet. It’s no use assuming that a distant planet will necessarily resemble the earth in its chemical composition, on the principle that we have no alternative but to use our local environment as a model of any environment (as if all planets must converge geologically with earth). Instead we must observe the environment local to the planet being investigated, by spectral analysis and whatever other data we can glean—we look at the peculiarities of the planet itself. If we followed a principle of “charity” in astronomy, assuming similarity between every celestial body and our own planet, we would end up with a hopelessly uniform account of astronomical reality. Not every object out there is the size and mass of the earth, with the same quantities of basic elements and geological structure. Not all planets have the same “planetary scheme”.

            And the same point applies to radical interpretation directed inwards. We also seek to discover what goes on in our unconscious—we need a way to figure out what beliefs and desires exist there. For the sake of concreteness, let’s accept a Freudian account of the unconscious: how shall we find out what we unconsciously think and feel? One view would be that we should use a principle of charity: we unconsciously think and feel what we consciously think and feel. But what is striking about Freud’s unconscious is how much it is supposed to differ from our conscious life; so charity would get things quite wrong. Instead, we need to adopt a more circumspect and holistic approach, appealing to free association, dreams, neuroses, jokes, slips of the tongue, and the early family dynamic. We need to look at how the unconscious manifests itself in our lives—as we need to look at the way other people’s minds manifest themselves in their lives (particularly in culture). Using a principle of charity will not do justice to the variety of minds. The basic assumption of that principle is that there is uniformity of the mental across all peoples and all types of mind: we all believe and desire pretty much the same things. So we could generalize the principle of charity into a “principle of uniformity”: all minds are pretty much the same—the same as ours, that is. And this is not an empirical discovery (as with linguistic universals) but a methodological requirement: we can’t interpret unless we assume uniformity.  [1]

This is like a principle of uniformity for planets: all planets resemble the earth. Granted, there may be some similarities between the various minds and the various planets, but they won’t be as great as the principle of uniformity supposes. We can’t sidestep empirical anthropology and astronomy by announcing an a priori principle that guarantees that everything resembles the local conditions. Just as there can be different “astronomical schemes”, so there can be different conceptual schemes. Surely no one would advocate a principle of uniformity in biology, according to which every species must have the same basic anatomy and physiology as humans—there is clearly great variety in animal bodies. We don’t do zoology by consulting our own bodies and then assuming every body is built like ours; we have a look at other bodies and find out their individual structure. Radical interpretation is no more conducted from the first-person standpoint than radical zoology is. And it is perfectly conceivable that a group of believers disputes every opinion we hold, from what is in their immediate environment to the general nature of the universe. Things differ from place to place and any method for discovering how things are needs to respect this variety. Projecting our own mind into the mind of the alien other is not the way to further human understanding.

 

  [1] It would be different if it were a natural law that everyone shares the same beliefs, but that is not the reasoning behind the principle of charity (and is very implausible); the idea, rather, is that charity is the only viable method of belief attribution.


Parables in Philosophy

                                               

 

 


Parables have their uses and merits in philosophy, even in these desiccated days. They can impart vivid life to elusive abstractions. Plato’s parable of the cave is the most famous philosophical parable, and it is powerfully memorable. It has its defects: notably, the cave wall and the shadows cast on it are just parts of the same empirical world that may be encountered outside the cave—so the cave dwellers are in touch with Reality even while stuck inside the cave. What the lone escapee encounters is just more of the same—perceptible material objects (and there are shadows outside the cave too). There is supposed to be a stark contrast between the world of the cave (empirical reality) and the world beyond the cave (the world of forms), but in the parable the two worlds are made of the same materials. In both places light falls on objects and we see them. Still, one gets the point. The idea is to dramatize the difference between the world of the forms and the world of perceived particulars—the former being more real than the latter. Part of the thought here is that the cave world is more limited than reality as a whole—the cave dwellers are mistaking part of reality for the whole of reality. There is more to reality than they imagine, given the limitations of their experience.

            Plato has his parable, his illustrative myth, but where is Aristotle’s countervailing myth? What would be a suitable parable to explain Aristotle’s non-Platonic conception of reality? He rejects Plato’s world of forms, holding that reality belongs to the world of perceived particulars. Without going into elaborate exegesis, Aristotle represents the world of the nominalist or conceptualist—universals are at best reifications of words or concepts. What parable might capture this anti-Platonic position?  [1] The best I have been able to come up with I call “the parable of the tank”, as opposed to the parable of the cave. Here we are to imagine humanlike creatures floating in a tank, a very large tank. Their senses have been shut off, or perhaps they have never had senses (we can tell two versions of the story): they never perceive the empirical world of concrete particulars. However, they hear a voice that is piped into their brains day and night: it is the voice of Socrates speaking to them. They hear nothing else but this disembodied voice; and it is a dulcet and persuasive voice. It speaks lovingly of geometry, of the abstract world, of permanence and perfection, of the Good. It elicits in them knowledge of these things, as with Socrates and the slave boy, and it conveys a reverence for the world of which it purports to speak. The people suspended in the tank fully absorb this discourse—they believe in what the voice of Socrates tells them. They think that the world described by Socrates is the real world; they know nothing of the world of empirical perception, not even suspecting that they are concrete particulars floating in a tank (we can suppose that they can communicate among themselves, and perhaps also with Socrates). In their minds the real is coterminous with the world described by the voice—basically, the world of Platonic forms.
After all, they have experienced nothing else, and the voice has assured them that nothing else is real, even if it might occur to them in a dream.  [2]

            But one day one of the tank dwellers escapes: he manages to climb out of his tank and swim to dry land. By some natural magic he is also provided with senses, particularly sight. He sees his first physical objects, he touches them too; he even tastes them. No doubt this is all a great revelation, catapulting him into a brave new world of experience and knowledge. The world is not just the voice of Socrates and those abstract forms; it consists also of concrete solid particulars! At first he is overwhelmed, stunned, even maddened; but he quickly adjusts, concluding that his earlier life in the tank was severely limited—a condition of extreme ignorance. There is clearly much more to reality than he ever suspected. He even entertains the suspicion that this new world is more real than the etiolated world described by Socrates: those vaunted forms were mere shadows compared to these bright and shiny particulars, these solid chunks of matter. He resolves to re-enter the tank and report his findings to the other tank dwellers, wishing to enlighten them. But when he does so he is met with incredulity and derision: the others just don’t believe him, regarding him as unhinged or a con man. And indeed, he can see their point: the world out there has to be seen to be believed—he would not believe it unless he had witnessed it with his own eyes. He finds himself shunned and distrusted, even though he alone is in possession of the truth.

            Suppose Plato had taken the teaching of his philosophy to an extreme, equipping his Academy with a special Socratic tank. Since the senses are so misleading, he would abolish them—for they are sources of error and confusion. In the ideal Platonic state education would proceed by installing newborns in the tank, removing their senses, and hooking them up to recordings of the voice of Socrates (don’t ask me how he obtained all this technology). Thus he could inculcate sound philosophy in the minds of the young, later to become the Guardians, without the distractions of the senses: he could educate the polis in the subtleties of Platonism. Let’s imagine he has done this for several centuries, so that Platonism is simply orthodox among educated Athenians. From Aristotle’s point of view, these tank dwellers are in the same condition of ignorance that Plato diagnosed in his cave dwellers. If one of them were to escape and experience the real world of particulars, he would be treated as a dangerous subversive, or as mad. The parable of the tank, like that of the cave, illustrates the condition of those who cannot recognize the reality of anything beyond their limited experience. And the rhetorical force of the parable is that we can all see that there really is much more to reality than those in the tank suspect—just as we can all see that there is much more to reality than Plato’s cave dwellers suspect.

            Aristotle could also appropriate Plato’s parable and turn it against him, by suggesting that the shadows cast on the cave wall might suffice for learning abstract geometry, but that they would not inform the cave dwellers of the world of concrete things that exist beyond the cave. The shadows act as geometrical shapes—circles, rectangles, triangles—and can therefore provide a basis for knowledge of abstract geometrical forms (Euclid would have thrived in this learning environment). But an escapee from this impoverished geometrical world would discover that there is much more to reality than geometrical forms: there is color, weight, hardness, smell, and taste. Plato has described a world in which a single type of form takes up the entire mental space of its inhabitants, but this world is just a small part of all there is in reality—though they, with their limited experience, cannot appreciate that fact. Plato is thus hoist by his own parable.

            Here is another parable, inspired by Plato, but designed to make a different philosophical point. We are to imagine a race of beings that live in a black and white world but who have a color projector installed on the front of their head. When they direct their gaze forward the projector sends out a pattern of light that makes the objects around them appear to be colored. There are no colored objects there, but the projector projects colored light onto things (we can suppose that there is no sunlight to interfere). Thus these beings arrive at the belief that they live in a world of richly colored objects, though in fact they do not. The world is less than they suppose, not more; they contribute the qualities they appear to see in things, not objective reality itself. If they knew about the projector installed on their head, they might question their naïve belief, but we can suppose that they do not. Now one day one of these individuals has a malfunction in his projector (it has never happened before), the result of which is that his world becomes abruptly black and white. Has he become color blind, unable to see what is in front of him? No, he has for the first time seen reality for what it is—colorless. He might investigate the matter and discover the existence of the projector; he correctly concludes that he has been projecting the color all along and mistaking it for reality. He resolves to inform his fellows of his discovery, in the interests of objective truth; but he encounters resistance and hostility—people are reluctant to accept that their familiar world is chromatically impoverished. When he points out the projectors fixed to their heads they insist that these are just ornaments of nature having nothing to do with how they see things. He decides to leave his own projector disabled, in the interests of keeping his perceptions in line with objective reality, leaving others to their fond delusions.

            The point of this parable is to dramatize the doctrine of projectivism—about color, clearly, but also about other allegedly projected features of the world (smell, taste, moral and aesthetic qualities, and so on). The mind is our natural projector, tricking us into believing that things that originate with us belong to reality independently of the mind. If the mental projector were to cease to function, we would be confronted by a reality devoid of projections—a thinner and lesser reality. Those who reject projectivism, it will be said, are like the folks in the parable who refuse to accept that they have a projector stuck to their head. Just as Plato’s cave dwellers are by hypothesis ignorant about the true state of things, so these projective beings are by hypothesis ignorant about the true state of things. The parable makes vivid and memorable a philosophical doctrine, contributing to the doctrine’s rhetorical force. Such parables are like those in the Bible that dramatize some moral predicament or precept: stories with lessons attached. They aid teaching and comprehension.

 

Colin McGinn

  [1] It is true that people, with very few exceptions, are not natural Platonists, so Aristotle is not opposing a natural human tendency, as Plato takes himself to be. But we can imagine a tribe that accepts Platonism from childhood on, as a matter of course: empirical particulars are not real, there are eternal universals existing in a transcendent realm, and so on. Such a tribe might be jolted by a parable that compares them to woefully ignorant people.

  [2] A variant parable might tell of godlike beings dwelling among the forms, contemplating and revering them, but never suspecting that there are such things as particulars that might instantiate them—that concept is alien to their experience. They might be surprised to discover that their pure and beautiful forms compromise themselves by mixing with the tawdry world of transient particulars. Here Platonic heaven functions as the cave: it is a place that limits knowledge.
