Identity of Selves

It is plausibly urged that there can be no identity without identity conditions (“criteria”): for example, material objects are identical in virtue of being spatiotemporally coincident, or sets are identical if and only if they share their members. Likewise, we could say that distinctness requires conditions (“criteria”) of distinctness: no two things can be distinct without this distinctness consisting in something, such as difference of spatial location or diversity of set membership. Distinct things must be distinguished by something, in virtue of something, as a consequence of something; they can’t be just barely distinct, primitively so, inexplicably so. This principle meets with no ready counterexample: material objects are distinguished by their location, sets by their members, events by their causes and effects, numbers by their position in an arithmetical series, hairstyles by their shape, and so on. But what about selves—what distinguishes them from each other? What does the (alleged) fact of distinctness consist in here?[1] We normally think we are distinct selves from other people (as we say), but what kind of fact is that: can it be seen and heard, can it be detected, can it be conceived? It is easy to appreciate that it can’t be spatial location, because selves don’t have (definite) spatial location (save derivatively on bodies); and if they did it need not coincide with the location of the body or brain (there could be several selves in one body or brain and a single self spread across several bodies). It is not even clear that selves could not all occupy a single location. Nor do selves have the identity and distinctness conditions of sets or numbers or hairstyles. They need their own sui generis identity and distinctness conditions. 
We can’t say they are distinguished by overall mental state (including character and personality), because distinct selves can have identical overall mental states and because mental state changes over time (and across possible worlds). Intuitively, the self (ego, subject, soul, “I”) is a transcendent entity not reducible to any other category of thing, each with its distinctive identity and distinctness conditions. It is a kind of vanishing point, a pure locus of awareness, an indefinable something, not even in the world. This entity is not distinguished from others of its kind by anything perceptible, or even thinkable. It is thus a prime candidate for the null identity condition: selves are distinct from each other just in virtue of being selves—primitively, inexplicably. They may indeed be the only entities in reality with this property—the bare-distinctness property. My self differs from your self just in virtue of being a distinct self; nothing further can be said. That would certainly be surprising and anomalous, but (it may be claimed) it just has to be accepted: when I look at another person I must say to myself, “That self is not identical to this self (me), but I have no idea what makes them non-identical”.[2]

            But there is an alternative to this unsatisfactory conclusion, namely that selves are not distinct. We don’t even know what it would be for selves to be distinct. We talk this way, but we don’t know what it means. Rather, what it is to be a self is to be the only self, as a matter of conceptual necessity: for there is no coherent concept of self non-identity. We know what bodily non-identity is, or brain non-identity, or overall mental state non-identity: but we don’t know what it is for selves to be non-identical. There is simply no fact that could constitute such distinctness. No fact that we can produce adds up to the alleged fact of self-distinctness. The only proper conclusion then is that there are no such facts, and hence no such thing as a plurality of selves. Compare God: suppose someone maintains that God could have a twin or a very similar God-like brother. Theological scruples aside, the problem with this suggestion is that there is nothing for such a distinction between gods to consist in, since God does not exist in time and space (nor does he occupy a position in a series of gods). Any being like God would have to be God, because the grounds of possible distinctness don’t exist where God exists. Nor has anyone ever supposed otherwise (the Greek gods, by contrast, existed here on earth): God isn’t a spatial being, so his distinctness from other gods couldn’t consist in a difference of spatial location. It is the same with the self: spatial separation can’t be the ground of self-distinctness (this is most obvious when we consider dualism). The difference is that we can perceive the bodies of human selves but we can’t perceive God’s body (he doesn’t have one), so we easily slide into self-pluralism for human selves but not for the divine self. But human selves don’t admit of plurality either, since they have no conditions of identity and distinctness.[3] It is impossible for selves to be distinct—there can be no such fact. 
The single human (and animal) self has many different states of which particular creatures may be conscious, but these states have but one subject, which participates in the life of each creature. Believers in metempsychosis think that a single self can exist in different animal bodies over time, so that each animal shares a single self; the same thing could be held about the selves (sic) that exist simultaneously in human and animal bodies—there is just one despite an appearance of multiplicity. The various animal forms mask the identity of the reincarnated subject for believers in metempsychosis; according to self-monism, the diversity of bodies masks the underlying identity of the conscious subject. And this means the necessary identity of conscious subjects, since selves constitutionally have no identity and distinctness conditions—they must therefore be all one. Appearances must be illusory; or else there are no such actual appearances, just a metaphysical prejudice. If you ask the man or woman in the street whether he or she is identical self-wise with other people, you are not likely to get a firm reply. True, people distinguish themselves from each other according to material-object and individual-organism criteria, but do they consciously think that their innermost self is ontologically distinct from other such selves? Maybe they could quickly be brought round to the self-monism doctrine (apparently it is widespread in the East). After all, the principle that difference requires differentiation has the look of self-evidence; and no one thinks it’s easy to say what a self is, or where one begins and ends. At any rate, it would be too strong to say that a belief in a plurality of selves arises simply from the operation of the senses like other illusions: it really doesn’t look to me as if I am not the same self as you (contrast the lines in the Müller-Lyer illusion). 
Perhaps we can be said to have discovered by philosophical argument that all selves are one and the same, but this may just be a piece of knowledge added to a previous agnosticism or simple lack of interest, not a revision of what the senses tell us about the self. Admittedly, according to self-monism it is true that we marry ourselves, love and hate ourselves, compete with ourselves, help ourselves out, or harm ourselves; but we need not regard ourselves as otherwise identical, because the same self can have a different body, personality, and lifespan. Your spouse may indeed share your self but not your body and your body’s history—these are distinct from yours. The self is a pretty rarefied and obscure thing, so it won’t matter much practically whether other people share yours or not—though the felt gulf between oneself and others (human and animal) may well strike us as less wide and sharp under the new dispensation. We make errors of identity all the time (as Frege reminded us): this one is just more metaphysical than most, and therefore less practically important. Depending on temperament, it may gladden the heart or wound one’s pride (the prince is the same self as the pauper, the judge and criminal likewise). I myself welcome a deeper kinship with animal selves, while finding my identity with other human selves mildly disagreeable, but that’s just me. What is most startling perhaps is that this state of affairs could not be otherwise: it is built into the nature of selves that there can only be one of them, simply because there is nothing (no fact) for the distinctness of selves to consist in. I really don’t know what it would be to be someone other than me.

[1] I put it this way to remind the reader of the Kripke-Wittgenstein discussion of meaning. Kripke asks what fact could constitute meaning and comes up with nothing; similarly we can ask what fact constitutes the distinctness of selves and we come up with nothing. Therefore, it may seem, meaning doesn’t exist and selves don’t either; but we can save meaning and the self by adopting a radical revision in how we think of them, as will become apparent. We avoid a “skeptical paradox” by rethinking our habitual conception of the things in question. In both cases, we give up the picture of the isolated particular.

[2] This would be like adopting an irreducibility view of meaning in the face of the Kripke-Wittgenstein challenge: it is just a brute fact that one self is numerically distinct from another, as it is a brute fact that “+” means addition. That kind of response may have some plausibility for meaning, but for the self it runs hard up against the principle that there can be no distinctness without distinctness conditions (which holds for even the simplest kinds of entity such as elementary particles). Not for nothing has the self been deemed peerlessly problematic. We don’t even know how to count them! It’s a lot simpler to think there is just the one.

[3] The closest analogy I can find is universals: what makes one universal distinct from another? Not spatial location obviously, and not position in a series, or membership, or shape: but we can say that universals differ when they admit of different instantiations—or else they would indeed collapse into each other. The self seems unique in its lack of differentiating conditions. Solipsism turns out to be true, but not in the way it was originally intended. (Another option, of course, is that selves don’t exist at all.)


Djokovic

So the Australians have shown themselves even stupider than the Americans. I blame the British. 


One’s Own Mind

Several times in The Basis of Morality Schopenhauer remarks on the mysterious nature of compassion (or altruism). He says: “When once compassion is stirred within me, by another’s pain, then his weal and woe go straight to my heart, exactly in the same way, if not always to the same degree, as otherwise I feel only my own. Consequently the difference between myself and him is not an absolute one.” (85) This is followed by: “No doubt this operation is astonishing, indeed hardly comprehensible. It is, in fact, the great mystery of Ethics, its original phenomenon, and the boundary stone, past which only transcendental speculation may dare to take a step.” (88) In the Appendix Schopenhauer returns to the mystery and proposes to explain it (though not without introducing further layers of mystery). His basic idea is that there is in fact no deep distinction between what we regard as separate selves: we are all one at the level of Kantian noumena—the “intelligible self” (as opposed to the “phenomenal self”) is a single entity. This means that all so-called compassion (altruism) is a species of egoism, since to benefit others is to benefit the noumenal self that I ultimately am.[1] I am you, so concern for you is concern for myself. He writes: “Now, as regards that side of the self which falls within our ken, we are, undoubtedly, sharply distinguished, each from the other; but it does not follow therefrom that the same is true of the remainder, which, surrounded in impenetrable obscurity, is yet, in fact, the very substance of which we consist. There remains at least the possibility that the latter is in all men uniform and identical.” (136) He goes on to argue that space and time individuate phenomenal selves, but noumenal selves do not exist in space and time, and so are free to coincide according to their transcendent nature. The plurality of selves is an illusory product of the human spatiotemporal form of subjectivity; beneath it a monism of the self reigns. 
As Schopenhauer remarks, this doctrine of Self-Monism is of ancient origin and distinguished lineage (though more in the East than the West): he mentions the Upanishads, Pythagoras, Spinoza, the Neoplatonists, and others. Hoping to make the view more palatable and down-to-earth, he compares it to dreams: “For just as in dreams, all the persons that appear to us are but the masked image of ourselves; so in the dream of our waking life, it is our own being which looks on us from out our neighbors’ eyes, though this is not equally easy to discern.” (141) So-called other minds (selves) are my mind (self) elsewhere.

            This is no doubt a startling, not to say vertiginous, doctrine, hardly calculated to elicit immediate assent; indeed it may appear quite batty. But perhaps it can be given a more quotidian rationale—perhaps we can even argue in its favor. It may turn out to have roots in familiar observations about the nature of the self. At any rate, I propose to inquire whether the doctrine admits of something like demonstration, or at least to break down intuitive resistance to it. Can it be ruled out a priori? Does it describe a logical possibility? Is it upon closer examination a rather natural view to adopt when once we take the true measure of the self as ordinarily conceived? Let us begin with a datum: when one person encounters another the fact of bodily separateness is a presentation of perception (an “impression”), but the fact of personal separateness is not. Your body looks to be separate from mine, at some distance from mine, but your mind doesn’t look to be separate from mine, at some distance from mine. I see your body in space located at a certain position, which is not where I see my own body as located: but I don’t see your mind as similarly located in relation to me. If it is separate from my mind, this separation is not a perceptible fact—not a fact of “intuition”, as Kant would say. Rather, it is a matter of inference, of belief, of hypothesis even. It is not a given. So why do we think this way—is it a justified assumption? How can we rule out the idea that what you assume to be another mind is just your own mind in another guise? To answer this question we need to venture out into modal space. Is there a possible world in which a single mind is distributed across a plurality of bodies? That does not seem difficult to conceive: the several bodies, with their accompanying brains, work together to realize a single mind, functionally and phenomenologically unified. 
There might be coordinating communicative contact between these bodies that keeps them on a single track. Social groups can be functionally unified; well, in this possible world the bodily group houses a single mind. Our brain, after all, consists of a grouping of separable organs occupying different positions—why not conceive of a mind that takes this arrangement a step further? Thus when these bodies encounter each other they are encountering a single mind multiply located. People sometimes speak of a “hive mind” to describe highly social species like bees and ants: couldn’t there be a world in which this is literally true of some organisms–one mind existing in many bodies? Or suppose a scientist were to remove one of your cerebral hemispheres as you slept and insert it into another body: couldn’t you find yourself (unbeknownst to you) talking to someone who literally has your mind in his body? That is, there is now a mind that is partly in one body and partly in another. And couldn’t this process in principle be repeated to produce many bodies sharing a single mind? How do we know this wasn’t done to all of us by super-extraterrestrials years ago? It isn’t logically or conceptually excluded. What if some tech billionaire had his mind uploaded into many different bodies: wouldn’t this be a case of a single mind in many bodies? Couldn’t your unconscious mind be located in a different body from that housing your conscious mind? You might then be able to chat with someone whose brain holds your unconscious mind (yours is already full of your conscious mind). The possibilities are endless. So Self-Monism is metaphysically possible: there could be a possible world in which a single mind shares many bodies. It doesn’t of course follow that our minds could exist in such a form, but it does show that some mind could—the idea is not contradictory or metaphysically impossible.

            What makes minds (selves) distinct? If minds are individual brains, then it is the spatial separation of brains: this kind of materialism thus implies that human minds are a plurality, because brains are. But if dualism is true, then we don’t have spatial separation to deliver the individuation conditions of minds: what then makes such minds separate? What indeed? In fact, it is hard to see how dualism doesn’t lead to self-monism: for how can disembodied minds maintain their non-identity? Here matters become obscure: what is the principle of individuation for minds under dualism? Causal connections to the body won’t do it, since there is nothing to prevent different immaterial minds from interacting with the same body, or the same immaterial mind interacting with many bodies. In the case of dualism, space can’t ensure plurality, so dualism looks like it will entail the identity of all minds; in any case, something would need to be said to prevent this result.[2] What seems clear is that nothing we know rules out the identity of all human selves: it is an epistemic possibility that this is so. The mind of a fellow creature might be a part of my mind, as my mind is a part of its (human or animal). The relation between my mind and other minds might be like the relation between different regions of the same country: both are parts of a larger whole. When I speak of “my mind” I actually refer to this larger entity—as it might be, the Kant-Schopenhauer noumenal single self, or a vast Humean totality of atomic “ideas”. To be sure, I am not aware of all my mental states, as I am not aware of the states of my unconscious; they are distributed across many sub-minds that together compose the single overarching mind. We can model this set-up on what we know of the individual mind: it too is composed of a number of sub-minds (“modules”) that don’t always have access to each other. The individual mind is really a composite structure—a kind of hive mind. 
Thus we entertain such notions as multiple personality, hemispheric specialization, separate computational modules, cooperating homunculi, and so on. If the mind of man is already a congeries, why can’t what we think of as an individual mind be made up of smaller minds distributed across organisms? Why make such a sharp distinction between one mind and another if one’s own mind is a mixed bag of sub-minds? Why not believe in the extended mind in the sense of one mind existing across separate bodies? Why let the way bodies look shape the way minds are individuated? Doesn’t this give rise to an illusion of separateness on the part of minds? If we lived in a world in which self-monism is stipulated to be true, it would still look to us as if self-pluralism were true, simply because of the nature of perception of the physical world—we would jump to the conclusion that selves had to be distinct. We tend to think the mind is more unified than it is, based on introspection and perception; and we also tend to think it is more isolated than it is (according to self-monism), based on the way we perceive the world.[3]

This is not to prove the truth of self-monism but rather to break down barriers that rule it out a priori. For it is certainly true that we don’t perceive the truth of self-monism: minds don’t look as if they are all part of a larger mind. That is a metaphysical speculation, prompted in the first instance by the mystery of compassion. It explains why compassion is based on sound metaphysical instincts: it is not just as if I am present in the other, feeling his pain; I actually am there. He is part of me. In order to prove this we would have to show that the idea of many separate minds is impossible—that minds must form a unity. Such a proof might take the form of showing that nothing could give rise to a distinction of selves, because space can’t do it and nothing else is available to play the individuating role of space. I haven’t given any such proof (and neither has anyone else that I know of), but I have given reason to think that the view in question is not devoid of coherence and motivation. Schopenhauer thinks self-pluralism is a prejudice of the West, arising perhaps from the Christian religion (individual salvation) or the politics of capitalism or too much empiricism; we might be able to shed it by enlarging our imaginative possibilities. Maybe if we tried thinking of other people and animals as parts of ourselves, in order to strengthen the grip of altruism, we would come to find the view natural as well as ameliorative.[4] It might become second nature to us. My mind doesn’t leave off where my body ends but extends into the minds of others in a wonderful mystical synthesis! My mind and your mind are joined at the hip, so to speak. Perhaps the question ought to be: How do I know that your mind isn’t mine? 

[1] If altruism appears less altruistic under self-monism, egoism is less egoistic under a more pluralistic view of the individual self: if each self is a congeries of mental elements, then there is a question as to which of them a given act benefits. Thus we can distinguish between benefiting my higher self or my lower self, my id or my superego. Egoism will be relative to a chosen sub-self, some closer than others to what I regard as my central self. I could be quite cruel towards one of my sub-selves (think of split-brain patients).    

[2] There is a similar question about the individuation of things like angels or gods: how can we make sense of qualitatively identical but numerically distinct examples (tokens) of these types, given that spatial separation can’t do the job? Compare: how do Cartesian minds differ numerically if they are qualitatively identical?

[3] It might also be suggested that our language encourages the illusion of self-plurality in addition to the way we see things: the personal pronouns make it seem as if persons are rigidly distinct from each other. I am not you, she is not him, and this person is not that person. The words are clearly distinct, but are the denotations?

[4] Likewise the malicious individual might think twice if she believed that the other person is really herself in another guise.


Right and Ought: Schopenhauer on Kant

In The Basis of Morality Schopenhauer undertakes a wholesale critique of Kant’s moral philosophy. He begins by attacking the very idea of a categorical imperative: morality should not be conceived as consisting of imperatives at all; the concept of the “moral law” is defective; moral rightness should not be analyzed in terms of duties, obligations, or ought-statements; and there are no unconditional obligations anyway. Kant, he thinks, has unknowingly modeled ethics on the Decalogue, which presupposes a “theological ethics”: commands from God, fear of punishment, desire for reward. He writes: “Kant, then, without more ado or any close examination, borrowed this imperative Form of Ethics from theological Morals. The hypotheses of the latter (in other words, Theology) really lie at the root of his system, and as these alone in point of fact lend it any meaning or sense, so they cannot be separated from, indeed are implicitly contained in, it.” (17) A consequence of this assimilation is that ethics emerges as a form of egoism (“Eudaemonism”), since it builds ethics on the (long-term) happiness of the individual. I think Schopenhauer is onto something here and I propose to articulate it in my own way. This has radical consequences for our understanding of the nature of moral value and the way we have come to talk about morality. The concepts of command, duty, obligation, and ought are not properly speaking moral concepts and should be jettisoned from moral philosophy (!).

            The quickest way to see this is to focus on the imperative as the canonical form of an ethical pronouncement. An imperative utterance has an addressee and an agent, as well as some expectation of reward or punishment (which can take many forms, ranging from approval or disapproval to cash payments or imprisonment). This is part of the semantics of the imperative mood: someone gives the command and there are consequences for obedience or disobedience. Legal propositions are like this. In the case of theological ethics we have God as legislator and enforcer—hence the “divine command” theory. Thus it is in the interests of the addressee (us) to obey the commands, which makes morality a matter of enlightened self-interest. But in fact morality is not like this (true morality), since here we act irrespective of self-interest. So we can’t base morality on these concepts: morality does not issue from commands expressed in imperative form that we must obey or else pay the penalty. God could issue such commands, but without God the idea is pure anthropomorphism. There is no agent of such moral commands and no system of reward and punishment (unlike the law). In fact, as Schopenhauer says, morality has no intrinsic connection to this set of concepts: we have no moral duties or obligations, and there are no moral laws. These concepts belong to theological ethics (or human-based ethics), which is not true ethics. Of course, such duties or obligations can exist alongside ethics, but they don’t form its essence: for we may indeed be rewarded or punished for our response to imperatively expressed commands (by God or by other people). But these facts can’t constitute our moral reasons for acting as we do, on pain of reducing ethics to prudence. The rightness of an action (thought, desire, person) cannot be analyzed as compliance with a moral command. 
All commands have a conditional or relative structure, being predicated on the existence of a commander equipped with the power of sanction. I obey the command on condition that I want to avoid the consequences of non-compliance. But this has nothing to do with morality proper. Kant’s problem, inherited from the Christian tradition that shapes his moral outlook, and contrary to his intentions, is that he is tacitly assuming a theological conception of ethics, which works from a basis of self-interest. According to Schopenhauer, then, we should separate morality completely from these ideas; it is indeed contradictory to speak of categorical imperatives (imperatives can never logically be categorical) or moral laws or moral duties. That is to try to locate morality in a sphere alien to it—the sphere of commands, consequences, and retributive agents. There is a sharp conceptual separation between the right and good and the imperative form, with all that it implies. There may be a correspondence between the two, but there is no identity, no reduction. We have got used to talking this way because we have lived for so long with theological ethics, but it is the wrong way to talk (to think). It is impossible to base ethics on a foundation of commands.[1]

            It might aid intuition to consider a strange possible world, namely one in which the ruling god is morally imperfect. Suppose the inhabitants of this world are morally superior to their god in respect of moral judgment, so that they find themselves disagreeing with his moral decrees. Suppose, for example, he holds that lying is always wrong, without exception, and will punish you if you infringe this rule. The people in this world realize that this is not morally correct, since there are clear exceptions to such a rule. They will then be confronted with a dilemma: either you obey the god’s rule and thus behave immorally, or you disobey his rule and get into trouble with said benighted god. The god is obliging you to follow his rule (his imperative), but you judge that this obligation is contrary to true morality. If the punishment isn’t so severe, you might disregard this imposed obligation and do what you know is right; if it is severe, you might say “This is morally wrong but I am obliged by my god to do it anyway”. In the same way the law may oblige you to do what you deem wrong—just as your job may require you to carry out certain duties that morally repulse you. This is often the case with what are called contractual duties: you have signed the contract so you are obliged to carry out its commands whether they are moral or not. But you can’t have moral duties or obligations in this sense because there is no enforcer and no contract: morality isn’t an agent with the power to force compliance via the mechanism of self-interest. To suppose otherwise is to commit a category mistake. But what other sense is there? Is it that “duty” and “obligation” are ambiguous, sometimes meaning compliance with an authority and sometimes not? No, we have just got used to thinking of morality in terms of theology, where God is conveniently deemed morally perfect. 
Then what God commands and what is right never come apart: but that doesn’t imply that we can analyze right and wrong in terms of the concepts of duty and obligation. God may issue infallible imperatives, but the nature of our moral reasons is not analyzable in terms of conformity with these imperatives. But there is no other notion of an imperative; morality itself cannot issue imperatives, save perhaps metaphorically. It may be as if morality commands you to do this and not do that, but it cannot literally do any such thing. So it is a category mistake to suggest that moral rightness is conformity with a categorical imperative (a contradictory concept if Schopenhauer is right). When I say of an action that it is right I am not saying that it is obedient to a categorical imperative (or a hypothetical imperative): that is to import into morality an alien conceptual structure. This is Schopenhauer’s opening criticism of Kant, and he would appear to have a solid point. But the point extends far beyond the details of Kant’s system; it applies to the entire apparatus of duty, obligation, requirement, prescription, law, edict, and directive. Action does indeed follow from morality, but not because morality somehow commands or demands it—morality is not the kind of thing that can do such things. Proper conduct might be entailed by the Good, to put it Plato’s way, but not because the Good (an impersonal being) issues imperatives. After all, imperatives are speech acts, and moral values don’t speak. Neither do they oblige in any ordinary sense (“I was obliged by my hosts to take my shoes off before entering”), so how can they impose obligations?[2]

            How does this point apply to “ought”? Schopenhauer says: “What ought to be done is therefore necessarily conditioned by punishment and reward; consequently, to use Kant’s language, it is essentially and inevitably hypothetical, and never, as he maintains, categorical. If we think away these conditions, the conception of obligation becomes devoid of sense; hence absolute obligation is most certainly a contradictio in adjecto.” (16) Here we may find ourselves losing sympathy for Schopenhauer’s position: surely we can say that we ought to do what is right! But is the connection between right and ought really that tight? In my imaginary possible world the inhabitants might find themselves ruefully reporting, “I know it’s wrong to tell the truth in this case, but I guess I ought to do it anyway or there will be hell to pay from you-know-who”.  And how can “right” mean “ought” in view of the following fact: you can explain why you ought to do something by saying that it is right but not by saying that you ought to do it? Why ought I to give money to charity? Because it is morally right to do so—but not because I ought to (that just repeats the explanandum). There may well be a correlation between right and ought, but we shouldn’t try to analyze the former by the latter. And there is certainly a use of “ought” that is purely egoistic, as in “I ought to take my umbrella with me today”. Then too there is the legal use of “ought”: “I ought not to drive above the speed limit” etc. So it is not clear that ought is the right concept to use in giving the basis of morality; maybe Schopenhauer is right that it belongs to the old theological conception of ethics. At best it has been extended into morality and thereby acquired a moral connotation, but the root notion is non-moral—like obligation, duty, etc. Our language of morals may (must?) reflect our metaphysics of morals, and that may be tainted (soaked) in theological conceptions of right and wrong. 
Perhaps we do better to stick with the unadorned “right” and “wrong”, “good” and “bad”, “virtuous” and “evil”.[3] The utilitarian may dispense with the notion of ought in the metaphysics of morals, saying simply that it is right to maximize happiness and wrong to cause needless suffering—what we ought to do is none of his concern. At any rate, a foundational use of “ought” needs to persuade us that we are not buying into thinly disguised theological ideas. Certainly we must reckon with forms of words like, “You ought to do such and such or else”.[4]

            How does deontology look if we take Schopenhauer’s strictures to heart? It is commonly formulated using the notions of duty and obligation, but it need not be so formulated. We can say simply, “Lying is wrong, stealing is wrong, murder is wrong, generosity is good, justice is right, equality is desirable”—we need not resort to the imperative mode of moral expression. We need not speak of “prima facie duties” but of “prima facie rights and wrongs”. So deontological ethics would not perish with Kant’s Judeo-Christian imperatival picture of moral discourse (as in the Decalogue). As Schopenhauer observes, Kant just assumed this apparatus at the beginning without much in the way of motivation or questioning; but we are not compelled to follow him. Perhaps this is one of those cases in which ordinary speech is so saturated with questionable metaphysical theory that what it is natural for us to say is false to the underlying realities. We find it hard to get away from the commander-and-enforcer model of morality given (apparently) rigorous shape by Kant. Morality does not consist of super-imperatives but of indicative statements of rightness and wrongness, good and evil, etc. Plato didn’t have much use for this dictatorial apparatus and Eastern religions are not wedded to it either; it is very much a feature of Western Christianized morality founded in Judaic law. It goes along with personified pictures of the world, and with (idolatrous?) religious art, thus failing to do justice to the impersonal abstract nature of morality. Things are just good or bad; no one has to issue shrill imperatives exhorting us to do this or that or face the consequences (e.g. God’s displeasure). From this point of view, the categorical imperative is irrelevant and nonsensical, certainly not the basis of morality. Morality is inherently non-legislative.

[1] Notice how the phrases “morally permissible”, “morally forbidden”, and “morally compulsory” tacitly introduce the notion of an agent; but morality itself can’t permit or forbid or compel—only agents can do these things. An odd kind of animism pervades such moral language.

[2] We speak of being “morally obliged” and “legally obliged”, but these must be completely different senses of the term “obliged”: the latter implies legal sanction imposed by humanly created laws, but the former implies no such thing. Why do we speak in this misleading way? Perhaps we mean by “morally obliged” something like “will be criticized morally for not doing such and such”, or perhaps we are speaking metaphorically. It seems clear that we are borrowing the legal notion of obligation and applying it to the moral case—quite misleadingly. Morality has no obliging power, unlike the police force and law courts. 

[3] It would be strange (false) to say that God himself has moral duties or obligations or ought to do such and such, yet what he does is clearly right and good. So God doesn’t fit Kant’s theory of the right and good: God isn’t subject to any moral imperative, hypothetical or categorical. Why then must we be? Kant’s theologically based theory of morality fails to apply to God!

[4] The prudential “ought” has nothing to do with morality and clearly means “ought given one’s long-term interests”. The same might be said of the putative moral “ought”, which therefore doesn’t really belong with morality. Or are we to say that “ought” is radically ambiguous?


Language and the Cave

 


In Plato’s cave the inhabitants see nothing but shadows. Shadows are etiolated compared to the objects that cast them. You can glean very little from a shadow about the object that casts it. The shadow is two-dimensional, colorless, massless, and without texture: it is merely an absence of light, conveying little beyond shape and size. If you were confined to knowledge of shadows, you would know next to nothing about the world of objects. Such is the epistemological predicament of Plato’s cave dwellers. The analogy can be used to cast epistemological aspersions on such things as television, the Internet, movies, and even local culture (including art: see Plato). But we can also apply it to language itself: are words like the shadows of objects, wispy simulacra of the real thing? Words, like shadows, contain very little information about the world they are used to talk about: they are just marks or sounds that bear no real resemblance to the objects they are used to refer to. If all you knew about were words and not things, you would have precious little knowledge of the real world of objects and facts. You would have mere shadow knowledge. We can imagine beings in just this position: they live in a world of words cut off from objects (Word and No Object), which they suppose to be all of reality. They know only words but take this to be all there is. Words are bandied around; there are internal relations between words; there is a grammatical structure to language—but there is no known reference relation to a reality external to language. This story might be used to dramatize a certain type of philosophical thesis, namely that we effectively do exist in such a world. Our conception of reality, such as it is, is shot through with language, conditioned by it, limited to it. We might call this “linguistic idealism” or “linguistic determinism”: reality (our reality) is constituted by language and determined by it. 
The way we see the world is permeated by language, for good or ill. On the one hand, this enables us to deduce conclusions about reality from language; on the other hand, our view of the world inherits whatever defects belong to language. Similarly, we can deduce the shapes of objects from their shadows, but shadows can also mislead us about the nature of reality. The cave of language can be comforting or it can be deforming; in any case it is all we have to go on. For (it may be said) language determines how we think, and hence what we know, and hence (ultimately) how things are. We are familiar with such philosophical theories: post-modern structuralism (Saussure, Derrida), the linguistic turn, some of later Wittgenstein, the Sapir-Whorf hypothesis, etc. The idea is that language forms our intellectual environment, our “frame of reference”, so that all our knowledge is shaped by it; it is a system, a reality, in its own right, with its own structures, rules, and imperatives. All our investigations are really linguistic investigations, because they are inescapably tied to language. So we may as well admit we live in a linguistic cave and make the best of it. We can certainly denounce philosophical (and scientific) viewpoints that fail to recognize these elementary considerations—realism, objectivity, immediate knowledge of things, language-independent truth, etc. Such viewpoints fail to come to terms with the hegemony of language. There is no escape from the linguistic cave.

            Is this position credible? The trouble with it is that it ignores the reality of perception. It treats consciousness as if it were wholly taken up with awareness of language, but consciousness also embraces the perception of objects. This is not a linguistic process. Light gets in from outside; there is a window in the cave. If I want to know about an object, I am not limited to examining its name (its linguistic shadow); I can also look at it—feel it, smell it, taste it, hear it. I don’t have to be in thrall to language, since I can deploy my senses to gain knowledge of reality. If language tries to bewitch me, I can undo the spell by perceiving the object in question—as it might be, a state of mind. Language can’t force me to see things according to its own preconceptions. Language and perception are separate faculties. The point is totally elementary, but totally decisive. We don’t live in a linguistic cave (or cage). We could have, like those purely linguistic beings I mentioned earlier, but that is not in fact our predicament: we have eyes as well as larynxes (or whatever organ is responsible for language ability). And our eyes are not the slaves of our language, contrary to the claims of some theorists. This is surely entirely obvious and scarcely needs to be argued. But it raises a more serious question: to what extent do we live in a linguistic cave à la Plato? Do the shadows of language ever interfere with or limit our ability to know reality? Here we may adopt a more moderate and piecemeal approach: sometimes language can be misleading. Language is an autonomous system with its own rules and “logic”, and it can obscure the reality we take it to represent. It can be logically misleading (quantifiers, definite descriptions, etc.) and it can also be ethically and politically misleading in myriad ways (racial slurs, sexist terminology, speciesist locutions, etc.). 
We are constantly negotiating and reforming language in the light of non-linguistic knowledge, as well as learning from language about things we already think we understand. I thus suggest a dialectical approach to the relationship between language and reality (or better, knowledge of reality): neither has primacy, neither dictates to the other. On the one hand, we (non-linguistically) sense the world and take in its texture and structure; on the other, we describe it a certain way, classifying and articulating it. We are not completely free of language in dealing with reality, but nor are we slaves to language. Language is not a mirror of the world, a transparent flawless medium, but neither is it an all-powerful separate force, pulling us away from reality. There can be tension between language and reality—with thought caught between them—but it is not that one is completely in the driver’s seat. The faculty of language and the sensory faculties are in a dialectical relationship in the creation of human knowledge and understanding. And not only dialectical but also critical: one can correct the other, or improve it. Thus our perception-based knowledge can act critically on our linguistic practices, and our understanding of language (an immensely complex system of knowledge) can contribute to the way we see things (as in conceptual analysis and the “linguistic turn”). In ethics and politics language can be both hindrance and liberation, because linguistic practices can be both hidebound and ameliorative. It is thus wrong to be an absolute linguistic idealist, but also wrong to think that language has no effect on the way we think and feel. As is often the case, the truth lies between two extremes, which means that we have to take things on a case-by-case basis. Boring, perhaps, but whoever said that truth has to be exciting?[1]

 

[1] Iris Murdoch’s Metaphysics as a Guide to Morals (1992) provides some helpful discussion of these issues, particularly as regards the excesses of “structuralism” and kindred doctrines. I would say this is just not good cognitive science. We can accept that classical empiricism (including Kant) underestimated the formative power of language in the creation of thought while insisting that language is not the sole mode of mental representation at our disposal. A dialectical perspective does justice to both sorts of mental faculty.


Death, Disgust, and a Possum

 


The other day my attention was caught by a bad smell emanating from near the front gate of my yard. Upon closer inspection I discovered a dead animal, evidently a possum. It had clearly been there a few days in hot weather. Flies were buzzing all around it. A regiment of maggots was feeding on it. It smelt appalling. I had to remove it, holding my breath and averting my eyes. It was a paradigm disgust object. It set me wondering again about disgust as it relates to death (it’s dirty work but someone has to do it). This dead animal, we might say, was “alive with death”: it was proclaiming its death, making a spectacle of it, assaulting the senses with the fact of its death. The flies and maggots were literally alive, and they were the palpable proof of death. Often death looks like a mere absence: we speak of a “lifeless corpse” that could just as well be asleep and which might wake up at any moment. It doesn’t look dead—unlike my possum. We might know it is dead, but it might turn out not to be (life is still an epistemic possibility). We could say that the body is passively dead, whereas my dead possum was actively dead—aggressively and vehemently so. And disgust flowed from it. Death was rendered palpable, perceptible, a datum of sense. It was undeniable. Imagine if this were always so: as soon as an animal dies it undergoes a perceptible change, possibly a spectacular change—it suddenly disintegrates or changes color or becomes covered in warts or smells of sulfur. The death would be a perceptible quality not a hypothesis: it would appear real—a presence not an absence. The animal is not just no longer alive but presently dead, floridly so. Wouldn’t this change our attitude towards death? Wouldn’t death, being so out in the open, seem like a solid observable fact—indubitable and unavoidable? When I saw that possum I saw (and smelled) its death: its death flooded into my brain, my consciousness. I knew that death is real—including my own death. 
Don’t we habitually regard our own death as rather hypothetical, as not quite real, a rumor not a fact? We believe it will happen but we don’t viscerally sense it. But we could have a far more vivid sense of death’s reality—we could encounter it as an inescapable fact. The rotting corpse, alive with death, presents an impression of death. The possum could not just be “playing possum”; it was unambiguously dead. When you smell a rotting corpse you smell death—you don’t just conjecture it. The feeling of disgust brings with it these intimations, intuitions, and impressions. Death impresses itself on you as your senses recoil in disgust—not just as a thought would, but as a felt reality. It passes from the abstract world to the concrete world: the Form of Death (as Plato would say) becomes an empirical datum. We move from the Cave to the Grave, as it were: from a flimsy concept of death, a mere shadow of the real thing, we are suddenly confronted by a vivid display of it. Of course this is disturbing, because now we can’t keep death at arm’s length (I had to physically remove the body with broom and bucket). All our methods of death denial are circumvented and death hits us hard—its active malevolence becomes apparent. The disgust we feel at the rotting corpse is bound up with these psychic realities—our deep lifelong fear of dying, our horrified knowledge of our finiteness. We carry within us a psychological formation dedicated to death (the “death module”), which operates in us all the time (see Heidegger), occasionally emerging when death obtrudes itself; but it is especially active when death becomes part of the perceptible world—as in the spectacle of the rotting corpse (so sad, so revolting). No wonder disgust is such a powerful and disturbing emotion: it taps into our deepest anxieties, terrors, and disquiets. 
That poor possum reminds us of what we are as mortal biological beings—food for worms, walking death certificates, future nuisances for the living (the body has to be disposed of). I dropped it in the trashcan and went on with my life.[1]

 

[1] Other disgust objects such as disease and excrement partake of the same psychic formation: they elicit the same complex of death-directed attitudes, though less forcefully. We know we need food to live and that without it we would die, but also that all food is killed in the process, leading finally to the excremental corpse. The idea of eating is bound up with the idea of dying. In the case of disease the connection with death is even closer. The rotting corpse is the basic case, however, the shining exemplar. It radiates death, makes an art form of it almost. Notice that zombies occasion disgust in us, because they render death active and alive; by contrast, the recently deceased do not occasion the same reaction, because they are not so deep in death. It all depends on how salient death is, how apparent to the senses.


A Modal Argument from Evil

 


Some worlds are more evil than ours: we don’t live in the worst of all possible worlds. Some are a lot worse than ours: only the wicked prosper, disease is rampant and deadly, virtue leads to imprisonment, people are executed for singing in public, deep depression is the norm, and so on. One would think that if such a world had been created by an intelligent being, that being would have to be a devil or else completely incompetent at world creation. Clearly this world was not created by an omniscient, omnipotent, all-good God. The properties of the super-evil world and the properties of the traditional God are not compatible. So God does not exist in every possible world. He is not therefore a necessary being. But God is (or would be) a necessary being. Therefore God does not exist. His non-existence in one world entails his non-existence in every world, including ours. You might not think the degree of evil in our world is sufficient to disprove God’s existence, but surely you would agree that there could be a world so evil as to preclude God’s existence in it. But that undermines God altogether, on pain of declaring him a contingent being. For in some worlds atheism is true, and just where God is most needed. Surely an omnipotent God would make sure that such a world does not exist: but it does. And if God does not exist in a world because that world contains a large amount of evil, how do we know that our actual world is not such a world?[1] It certainly contains a good deal of preventable evil. The mere possibility of a world with no God undermines the whole idea of God’s existence.

 

[1] In some worlds maybe God does plausibly exist—the heavenly worlds. If a possible world is a paradise, we might suppose that it must be created by a God-like being. But our world is not such a world—or else we would already be in heaven.


The Frog Crawl

 


The kick in swimming the crawl adds little to forward momentum, maybe ten or fifteen percent of the power. Almost all the power comes from the arms. In the breaststroke the frog-like movement of the legs adds much more to propulsion, about forty percent I would estimate. The arms have less power than in the crawl (question: why?). Reflecting on these facts the other day it struck me that combining the arm movements of the crawl with the leg movements of the breaststroke would maximize power. Yet nobody ever does it, recreationally or competitively. I surmised that the reason is that this combination is physiologically impossible because of the disparate nature of the two sorts of movement. Theoretically the frog crawl would be the best stroke, but physiologically it isn’t feasible. I decided to put it to the test. It was certainly unnatural for the first few minutes, as one would expect, but it was by no means impossible. It was exhausting to be producing that much bodily movement, but it wasn’t something my body and brain couldn’t accomplish. Within ten minutes I was doing the frog crawl quite comfortably—and moving a lot more quickly. Practice confirmed theory. After half an hour trying out different variants of the new stroke it was second nature and it felt strange to go back to the old kick. I had discovered a new way to swim! I venture to suggest that this is the natural way to swim “free-style” because your legs want to generate some power and not just trail behind you in a rather pointless shuffling action. You no longer feel that your arms are dragging you through the water with no real help from your legs; instead your legs are producing solid forward motion for your arms to modify and augment. I felt astonished alone there in the pool with no one to tell my discovery to. But why is this not generally known? Is it just the power of custom and habit (Hume)? Surely it has been thought of and rejected for some reason—but what? 
Later I tried to research the question via Google, but I found nothing to indicate that anyone recognized the existence of the frog crawl—and its superiority. So I am announcing it now, wondering if in fact the idea has already been mooted. It has certainly changed the way I swim from now on. I’m not going back to the old style.
