Absolute Deontology

Kant’s position that there cannot be a case of morally permissible lying has not been met with much enthusiasm. The idea of absolute moral rules thus seems mistaken. W.D. Ross sought to remedy the problem for deontological ethics by qualifying the force of moral rules: instead of saying that we have an absolute duty to tell the truth, he suggested that we have what he called a “prima facie duty” to tell the truth—a duty that can be overridden in certain circumstances. This notion has always been obscure and the terminology less than satisfactory, though the problem it is intended to solve is real; it also seems to soften duties in a way that anyone sympathetic to deontological ethics will find unappealing. Do we never have an absolute duty to tell the truth? Are our duties always merely prima facie? I want to suggest an alternative approach to the problem, one that abandons the idea of prima facie duty.

            The alternative view appeals to the notion of different categories of lie. Consider the lying allegation intended to harm the person accused: that kind of lie is surely absolutely wrong. Lying to protect an innocent person is a good counterexample to a perfectly general prohibition on lying, but lying in order to harm someone is nothing like that—it precisely aims at injustice and suffering. I suggest that this duty is absolute: under no circumstances is it right to lie in order to get someone into trouble. Call this the lying accusation: then we can say that the duty not to make lying accusations is absolute—not merely prima facie. Kant would be right about the status of this duty. By contrast there is no absolute duty not to lie to children: it depends on the circumstances—and there are plenty of circumstances in which lying is the only way to protect children. Likewise there is no absolute duty to tell people the truth about their physical appearance or level of intelligence. So some categories of truth telling are unqualified duties while some are not; it is not that all types of lying are only prima facie wrong. This is a good result because we don’t want to say that all types of lying are only wrong at first sight—but maybe not at second sight or on deeper reflection. When it comes to the lying accusation we don’t want to tell our children that this is wrong only in certain circumstances or at first glance—it is always wrong, very wrong, necessarily so. If a mother catches her child lying about another child, in order to get that child into trouble, the mother might say, simplifying somewhat: “It’s wrong to lie”. She means to be speaking of the kind of situation at hand, not cases of benevolent lying; and her message is that it is always absolutely wrong to lie in that specific way. She doesn’t follow Ross and cautiously intone: “It is prima facie wrong to lie”. If later the child asks her about benevolent lying, she will be beyond criticism if she replies: “I was talking about lying allegations, not all possible forms of lying”. She expected the context to make her meaning clear, and no doubt it was clear; the child is being a philosophical pedant if she responds: “But you said the words ‘lying is wrong’ as if they applied to every possible kind of lie”.

            Semantically, the case is a bit like “camels have four legs”: that is perfectly true (absolutely true) but it isn’t true that every camel has four legs (some camels have lost a leg or two). Likewise, “lying is wrong” is not a universal quantification over every instance of lying; rather, certain kinds of lying are deemed universally wrong (necessarily so). It is intended as a conjunction of certain categories of lie, not every possible type of lie (e.g. lying to protect the innocent). It is wrong to make lying accusations about people and also wrong to lie in order to make yourself look better than you are and also wrong to mislead people for no reason—but not wrong to lie in order to protect an innocent person from falling into evil hands.

            It is the same for prudential duties. We can say, “you should eat in moderation” or “you should take regular exercise”, but we don’t intend to include eating in moderation after starving or exercising when you have the flu. This should not make us regard prudential imperatives as merely prima facie; rather, a more specific imperative would be absolutely binding, e.g. “don’t have a massive lunch every day” or “don’t sit around in the house all day (unless you are not well)”. It is not that prudential duties are merely prima facie duties; they are absolute duties once they are properly formulated. Ditto for duties of etiquette or rules of the road: of course it is sometimes right to violate the usual rules of etiquette or driving (as in cases of emergency), but it is equally true that some of these specific rules are universally binding, e.g. “Don’t make loud noises in church (for no good reason)”, “Don’t go the wrong way up a one-way street (in normal traffic conditions)”. And if we are talking about such rules in particular contexts the point is even clearer: “Don’t bang into that old lady”, “Say thank you to this shopkeeper”, “Don’t turn left here”, “Don’t eat another plate of spaghetti”. This absoluteness is quite compatible with there being other circumstances in which one should bang into someone or not say thank you or turn left illegally or have the extra plate. What we should not say is that lying is only prima facie wrong in all circumstances: in some circumstances—indeed in nearly all—lying is absolutely wrong. The concept of prima facie wrongness is not the right way to handle the possibility of exceptions to a universal quantification.

            It is a question whether this approach generalizes to all cases of duties, such as the duty not to steal or murder or break promises. We can all come up with examples in which it would be unduly rigid to insist that one never do anything of these kinds—to save the life of a child, to defend oneself against unwarranted attack, to prevent a catastrophe. But does that mean that these duties are merely prima facie? No, because individual categories of these duties might be universally binding: one should never steal from the poor to give to the rich or murder an innocent child or selfishly break a promise because something better came up. A Kantian attitude about these duties is correct, even if we don’t want to say that there are no conceivable circumstances in which stealing, murder, and promise breaking are morally permissible. The simple statement “Stealing, murder, and breaking promises are wrong” is just shorthand for a conjunction of these specific statements; it doesn’t need to be qualified and weakened by the use of the prima facie operator. In fact, moral duties are always absolute, admitting of no exception, once they are properly formulated. So we really don’t need to back off from Kant’s fundamental position—while disagreeing with him about certain kinds of cases.

            Here is a difficult case to end with: it is only possible to save the innocent child from the murderous storm troopers by falsely accusing someone of something and thereby getting them into trouble. You save the child’s life but only by falsely telling the storm troopers that your mother is a thief (thus landing her in jail). In this case I would say you have chosen the lesser of two evils, but the lying accusation was still an evil. But I can imagine someone sticking to the strict Kantian line here, and not unreasonably: you should not accuse your mother of being a thief in these circumstances even if it would save the life of the child. The case is in sharp contrast to merely flattering someone about his physical appearance when he is in fact in horrible physical shape, or not telling a child the awful details of how her mother died in a car accident. In these cases what you did was morally right, not merely the lesser of two evils. Lying isn’t always morally wrong even if most kinds of lying are absolutely wrong.

 

Colin McGinn

A Psychology of Philosophy

Most philosophers would agree that philosophy is a very difficult subject, in their heart of hearts if not in their practice. The problems of philosophy are difficult problems. They are not easily solved (sometimes not easily stated). The difficulty might be rated differently by different philosophers—from moderately difficult to extremely difficult to impossibly difficult. In any case there is a widespread perception of pronounced difficulty. I am concerned with the psychology that goes with that perception: what kind of mind is formed by the perception that philosophy is exceptionally difficult? Better: what kind of personality is created by the perception of philosophical difficulty? If a person gives his or her life to philosophy, while fully recognizing its exceptional difficulty, what kind of psychological formation will flow from that? If you devote your life to a subject that admits of relatively easy answers, you will likely experience satisfaction and a sense of achievement (as it might be, the flora and fauna of the Hebrides); but if you devote your life to a set of questions you believe you probably (or certainly) can’t answer, what will this do to you?

            The obvious reply is that you will not meet with success and you will suffer the pangs thereof. You want to know the answer and you strive to discover it, but you accept that you won’t succeed, probably or certainly. Suppose you have been struck with the problem of skepticism and you long to find an answer to the skeptic, but you accept that the problem is extremely difficult and that you have not discovered an answer, and probably never will. You could simply accept this fact, maybe reducing your efforts at solution, given that your chances of success are minuscule; that would be rational enough, considering. But you might also react by overvaluing your less-than-perfect efforts or by downplaying the difficulty of the problem. You might decide you were wrong about the difficulty of refuting the skeptic, or you might congratulate yourself on devising a highly ingenious or insightful response that has escaped the attention of others. This would be understandable, if not completely rational: you have succumbed to a kind of intellectual dishonesty (given that you are tacitly aware that your proposed solution doesn’t really measure up to the severity of the problem). You are engaging in intellectual bad faith. You do this because your earlier response was psychologically uncomfortable: why keep trying to solve what you believe you (probably) can’t solve? Why put so much effort into something pointless? And even if you think you might solve the problem, you still have to wrestle with the strong probability that you won’t solve it—given its extreme difficulty. It is psychologically uncomfortable to strive to do what you think you (probably) can’t do, so it is natural to revise your view of things. [1]

            What I have just described is a situation in which cognitive dissonance is apt to occur. Cognitive dissonance theory was invented by the psychologist Leon Festinger in the 1950s and has become a standard part of psychology.  [2] The outlines of the theory are as follows. People are equipped with an overall drive towards psychological consistency. This drive is not restricted to belief consistency but applies also to harmony among desires, actions, emotions, and beliefs. If there is a lack of harmony, the subject will experience mental stress or discomfort or anxiety. This will lead to attempts at dissonance reduction by a variety of stratagems in order to restore harmony. The magnitude of the stress is a function of the magnitude of the dissonance. So a person will not be happy to act in ways contrary to his desires, or make statements contrary to his beliefs, or desire what he knows he cannot possess, or feel what he knows he shouldn’t feel, or believe what he has evidence against. Cognitive dissonance leads to dissonance reduction by modifying the dissonant psychological configuration. In Festinger’s original example, a member of a cult who is confronted by evidence undermining the tenets of the cult will be apt to deny the evidence or invent some ad hoc explanation for the evidence that preserves his cherished beliefs. Or a person compelled to work in an occupation that violates her values is likely to abandon or modify those values, or to insist that the occupation really serves to further them despite appearances (as it might be, pollution is actually good in the long term). The central point is that the drive for psychological consistency leads to bad faith of one kind or another. Dissonance is intolerable, so the subject strives to minimize or deny it, often by mental contortions and self-deception. Living with cognitive dissonance is harder than conforming one’s attitudes so as to avoid it. This is the realm of motivated belief and fake emotion and fabricated desire.

            How does this apply to philosophy? Simple: the acknowledged difficulty of philosophy induces cognitive dissonance, which is then massaged in various ways to reduce the mental discomfort. Difficulty produces dissonance because the life of a philosopher is predicated on denying or underestimating it. For the philosopher is investing time and resources in a project she knows is unlikely to bear fruit. She is working on impossible problems—problems she knows (or strongly suspects) she can’t solve. Suppose she is gripped by the problem of skepticism and is working hard to provide a convincing answer to the skeptic—burning the midnight oil, neglecting her family, not having any fun—all the while believing that her efforts have close to zero chance of success (after all, no one else has come up with anything). That is not a tolerable mental state to be in, because of the dissonance between will and belief: she wants and wills what she believes is not within her reach. Various dissonance-reducing reactions are possible: she can give up working on the problem, pretend that the difficulty is not as great as has been supposed, overestimate the value of whatever ideas she can come up with, become a dogmatist. In the extreme she could always declare that philosophical problems are pseudo problems or are meaningless or reflect mental illness. That way she can deflect the dissonance, restore her mental equilibrium, and relieve the stress. She might have started her philosophical life brimming with optimism—she will get to the bottom of these problems where others have failed (through insufficient attention to ordinary language, or by relying on a primitive type of logic, or by not knowing enough science, or because of general sloppiness). Then she felt no dissonance, because her beliefs were consistent with her desires and actions (such as protracted and expensive study of philosophy). But as her philosophical life wears on and the futility of her efforts becomes more apparent, she is likely to arrive at a more pessimistic view of her philosophical prospects—she comes to believe that she will likely not solve the problems that so gripped her (and still do). Then cognitive dissonance is apt to set in: she knows that she can’t achieve what she wants desperately to achieve. She could try to learn to live with this fact, though it would no doubt modify her practical motivations, or she could adopt a variety of dissonance-reducing stratagems. The two most obvious ones are denying the difficulty and overestimating her feeble efforts (“Yes, the problem is devilishly difficult, but my theory finally lays it to rest!”).

I take it this will seem familiar. My suggestion is that cognitive dissonance lies behind some of the characteristic types of philosophical posture. Hence the psychological appeal of logical positivism, ordinary language philosophy, the latest brand of scientism, dedication to formal logic, post-modernism, Wittgensteinian quietism, neurophilosophy, descriptive phenomenology, experimental philosophy, eliminativism, and engaging in purely historical studies. These are all attempts to avoid or downplay the recognition of the difficulty of philosophical problems. Not that they might not have arguments in their favor, but the psychological attraction of such doctrines stems from their promise to free us from cognitive dissonance. It is not that any of them are prima facie all that plausible—they are commonly represented as revisionist—but we feel compelled to accept them because of the stress produced by acknowledging philosophical difficulty. Add to this purely internal source of disharmony the institutional pressures of teaching or studying at a place of higher learning: how can you justify teaching students a subject that you admit consists of insoluble (or at least unsolved) problems? What truths are the students supposed to learn as a result of such teaching? What results are you conveying to them? How can they justify the expense to their parents? Also: what justifies tenure in a field where no one ever makes any substantial progress? How can you be paid to work on problems no one thinks you can solve? There is a lot of pressure to deny that philosophy is as difficult as it seems to be and has proven to be. We just need that nice fat grant and we will finally answer the skeptic! The latest fad (as it might be, neuroimaging) will resolve those age-old problems, so we need not accept that things are as dire as they appear. We need not accept that we are striving to do what we know we can’t do. That is the fundamental problem, psychologically speaking—the cognitive dissonance at the heart of the philosophical enterprise.

Other subjects do not writhe under this kind of pressure. In them progress is made, large and small, so that the striving to achieve results is gratified, with the hope of further achievements in the future. True, they can contain great difficulties, but history has been kind to them, and there are many areas in which substantial progress has been made. The physicist never feels under pressure to deny that his questions are meaningful, or to resort to ordinary language, or to eliminate what he finds puzzling (“There are no receding galaxies!”). Nor does the biologist have to accept that she is getting nowhere—she has many discoveries to occupy her time. But in philosophy the main questions—the questions that bring us to the subject—remain maddeningly recalcitrant: the mind-body problem, skepticism, the nature of moral truth, free will, the meaning of life, space and time. Not that no progress is made; rather, the core problems are so difficult that meaningful progress can hardly be expected. Once this fact is acknowledged (assuming it to be a fact) cognitive dissonance is the natural response: if it’s so difficult, why even attempt it? To that question we need a dissonance-reducing answer. Other disciplines are not so afflicted: they are not defined by problems of this magnitude of difficulty. So their practitioners are not subject to the same mental torment as philosophers; they have no dissonance to reduce (maybe psychologists have an inkling of what we go through in some of their more baffled moments).

Imagine a subject of study S that expressly advertises itself as dealing only with the most difficult problems known to man. Some people find themselves going into S. These peculiar people don’t just explain what the problems are and then sit back and marvel at them—they try to solve them. Isn’t S a ripe subject for cognitive dissonance? No one has solved a problem in S for hundreds of years and there is widespread pessimism about solving any of them; yet people persist in trying to solve them and promoting their subject as a worthwhile investment of time and resources. There is thus a lack of harmony here between beliefs about S and life commitment to S—and there is much frustration, self-doubt, neurosis, self-deception, etc. It wouldn’t be surprising if people occasionally sprang up proclaiming that they have discovered a new method for solving their problems (say, studying Sanskrit) or insisting that the problems of S are really pseudo problems—and they would no doubt find their relieved followers. It’s tough devoting your life to problems you don’t think you stand a chance of solving. That way lies acute cognitive dissonance, with its strategies of avoidance. Better not to go into S at all, despite its intrinsic interest; people only go into it because they believe (unrealistically) that they alone can solve the problems of S (ego trumps realism). Doesn’t S sound a lot like philosophy, psychologically speaking? It needn’t be philosophy, but it would feel like it—it would reproduce philosophy’s characteristic psychological contours. The difficulty of S combined with devoting one’s life to it sets up psychic tensions that lead naturally to certain kinds of reaction—mostly involving bad faith. This is the psychological landscape occupied by the philosophical mind. In particular, we philosophers are always trying to find ways to make philosophy easier than we know it to be.

I have spoken of the individual psychology of the philosopher, which may be taken to be more or less universal given the nature of the subject, but there are also some more local sociological pressures conducive to cognitive dissonance. I mean those pressures (mentioned earlier) stemming from the institutional structure of a typical university and of the profession of philosophy, as it exists today. It is necessary to publish and compete and establish oneself as defending a certain position. You have to show that you are good at philosophy, in the sense of being capable of producing it; and this leads to excessive optimism about what can be achieved in the subject. In particular, you have to show yourself superior to others in solving philosophical problems. Thus you will develop a tendency to overestimate the quality of what you do while underestimating the quality of what other people do. Your views and theories are clearly correct while theirs are clearly incorrect. In teaching the subject you will be tempted to make it seem easier to make progress than it is, so that certain views will be favored as the demonstrably true ones, as opposed to those radically misguided alternatives. This will lead to a culture of exaggeration and overconfidence—a lack of humility in the face of difficulty. How can you stand out professionally if you meekly suggest that it is all very difficult? The cognitive dissonance created by the confrontation between the intrinsic difficulty of philosophy and the institutional structures within which it is practiced will lead to extreme ways of trying to reduce the dissonance—such as declaring your own position definitively correct and everyone else’s hopelessly confused.  [3] Thus it is that factions are formed and feuds triggered. Professionally, you have to have a thesis—a position, a doctrine. But this conflicts with the recognition that it is incredibly hard to come up with anything convincing in philosophy—there are always opponents and objections. So we have cognitive dissonance built into the structures of the institution of professional philosophy, and with it those dubious and dishonest strategies of avoidance—particularly, overestimating one’s own position and underestimating the difficulty of the problem. And isn’t this exactly what we in fact find in professional philosophy—the blowhard and the minimizer, to put it crudely? Also the cult of personality, the formation of “schools”, the withering contempt for those who refuse to see the light, the ever-changing fads and fashions, the dogmatism, the willful blindness, the haggard looks, the neurosis, the swaggering and posing—all attempts to deal with the cognitive dissonance created by philosophical difficulty as it interacts with professional existence. Just consider the familiar figure of the philosopher who thinks (or purports to think) that he has it all figured out: emotivism in ethics, materialism in metaphysics, nominalism in logic, naïve realism in epistemology—everything is bathed in sunlight with not a mystery in sight. This philosopher can see, and will brook, no objection to any of these firmly held views; all alternatives he rejects as absurd and dishonest. Philosophy contains no difficulty for this jolly optimist. Mustn’t we wonder at such a person’s brash confidence? Can he really believe it is all so simple and straightforward? Isn’t his breezy conviction the result of an underlying cognitive dissonance? 
He knows that things are not really so easy and yet this causes him such acute mental discomfort that he has decided to act as if he has it all figured out.  [4] This is intelligible enough from a psychological point of view, but it amounts to nothing more than a strategy for avoiding cognitive dissonance. At the other extreme someone might feel the difficulty with particular force and decide to give up the study of philosophy altogether. That would also resolve the dissonance and might impress us by its intellectual integrity. But most of us are stuck between these two extremes, suffering the symptoms of cognitive dissonance: it is only partially resolved in us, if at all. We have our cherished theories, so desperately cobbled together, but deep down we realize that they may be wide of the mark or just grotesque errors. To take an example more or less at random: there was a time when people convinced themselves that Davidson’s use of Tarski’s theory of truth supplied all that could be asked of a philosophical theory of meaning; and this position was held with almost religious fervor. It is hard not to see this in hindsight as a kind of bad faith prompted by the felt difficulty of the problem of meaning combined with the need to say something positive about it. The problem isn’t so hard after all–all we need to do is throw some fancy formal logic at it and it will disappear in a flood of biconditionals! Either that or we have to admit that we are trying to solve a problem we haven’t the faintest idea how to solve (or even formulate).

There is a psychology to philosophy, generated by the peculiar character of the subject, and Festinger’s theory of cognitive dissonance seems like a good theory of what that psychology is.  [5] It explains many of the phenomena we observe and fits the way philosophy feels from the inside. It is an empirical psychological theory like any other and should be judged accordingly. It won’t solve any of our philosophical problems, but it might alert us to psychic forces in us that distort our thinking and practice.

 

  [1] It is often noted that you can’t intend to do what you believe it is impossible for you to do, so no one could intend to solve a philosophical problem he believed could not be solved. But that leaves room for intending to do what you think it is quite improbable for you to do. However, that attitude is inherently unstable and disagreeable, especially as the failures and sense of futility mount. At what point do you give up? (Desire, of course, is perfectly possible in the presence of a belief that the desired state of affairs is impossible.)

  [2] Leon Festinger, A Theory of Cognitive Dissonance (Stanford University Press, 1957).

  [3] I have noticed over the years that people always seem to believe what they learned in graduate school, casually dismissing what has happened since. Pure cognitive dissonance.

  [4] It is not always a “he”, but statistically speaking…

  [5] What would a Freudian explanation of the philosopher’s psychology look like? Perhaps this: The difficulty of philosophy is experienced as a form of castration anxiety (of the intellect not the body), which is naturally repressed, and which manifests itself either as a denial of the castrating power of philosophy or as an assertion of the phallic prowess of the philosopher. Thus the philosopher rejects philosophical problems as meaningless or phony (and hence incapable of castrating him) or he elevates himself to superhuman levels of problem solving (phallic invincibility). The anxiety is thus allayed (how this fits the case of women philosophers is left for future research). Maybe there was a time at which such an explanation would be taken seriously (in fact I invented it in the shower), but I prefer the Festingerian explanation to the Freudian one, having more to do with logic than libido.


A Problem in Hume

Early in the Treatise Hume sets out to establish what he calls a “general proposition”, namely: “That all our simple ideas in their first appearance are deriv’d from simple impressions, which are correspondent to them, and which they exactly represent” (Book I, Section I, p.52).  [1] What kind of proposition is this? It is evidently a causal proposition, to the effect that ideas are caused by impressions, and not vice versa: the word “deriv’d” indicates causality. So Hume’s general proposition concerns a type of mental causation linking impressions and ideas; accordingly, it states a psychological causal law. It is not like a mathematical generalization that expresses mere “relations of ideas”, so it is not known a priori. As if to confirm this interpretation of his meaning, Hume goes on to say:  “The constant conjunction of our resembling perceptions [impressions and ideas], is a convincing proof, that the one are the causes of the other; and this priority of the impressions is an equal proof, that our impressions are the causes of our ideas, not our ideas of our impressions” (p. 53). Thus we observe the constant conjunction of impressions and ideas, as well as the temporal priority of impressions over ideas, and we infer that the two are causally connected, with impressions doing the causing. In Hume’s terminology, we believe his general proposition on the basis of “experience”—our experience of constant conjunction.

            But this means that Hume’s own critique of causal belief applies to his guiding principle. In brief: our causal beliefs are not based on insight into the real powers of cause and effect but on mere constant conjunctions that could easily have been otherwise, and which interact with our instincts to produce non-rational beliefs of an inductive nature. It is like our knowledge of the actions of colliding billiard balls: the real powers are hidden and our experience of objects is consistent with anything following anything; we are merely brought by custom and instinct to expect a particular type of effect when we experience a constant conjunction (and not otherwise). Thus induction is not an affair of reason but of our animal nature (animals too form expectations based on nothing more than constant conjunction). Skepticism regarding our inductive inferences is therefore indicated: induction has no rational foundation. For example, prior to our experience of constant conjunction ideas might be the cause of impressions, or ideas might have no cause, or the impression of red might cause the idea of blue, or impressions might cause heart palpitations. We observe no “necessary connexion” between cause and effect and associate the two only by experience of regularity—which might break down at any moment. Impressions have caused ideas so far but we have no reason to suppose that they will continue to do so—any more than we have reason to expect billiard balls to impart motion as they have hitherto. Hume’s general proposition is an inductive generalization and hence falls under his strictures regarding our causal knowledge (so called); in particular, it is believed on instinct not reason.

            Why is this a problem for Hume? Because his own philosophy is based on a principle that he himself is committed to regarding as irrational—mere custom, animal instinct, blind acceptance. He accepts a principle—a crucial principle–that he has no reason to accept. It might be that the idea of necessary connexion, say, is an exception to the generalization Hume has arrived at on the basis of his experience of constant conjunction between impressions and ideas—the equivalent of a black swan. Nothing in our experience can logically rule out such an exception, so we cannot exclude the idea based on anything we have observed. The missing shade of blue might also simply be an instance in which the generalization breaks down. There is no necessity in the general proposition Hume seeks to establish, by his own lights–at any rate, no necessity we can know about. Hume’s philosophy is therefore self-refuting. His fundamental empiricist principle—all ideas are derived from impressions—is unjustifiable given his skepticism about induction. Maybe we can’t help accepting his principle, but that is just a matter of our animal tendencies not a reflection of any foundation in reason. It is just that when we encounter an idea our mind suggests the existence of a corresponding impression because that is what we have experienced so far—we expect to find an impression. But that is not a rational expectation, merely the operation of brute instinct. Hume’s entire philosophy thus rests on a principle that he himself regards as embodying an invalid inference.

            It is remarkable that Hume uses the word “proof” as he does in the passage quoted above: he says there that the constant conjunction of impressions and ideas gives us “convincing proof” that there is a causal relation that can be relied on in new cases. Where else would Hume say that constant conjunction gives us “convincing proof” of a causal generalization? His entire position is that constant conjunction gives us no such “proof” but only inclines us by instinct to have certain psychological expectations. And it is noteworthy that in the Enquiry, the more mature work, he drops all such talk of constant conjunction, causality, and proof in relation to his basic empiricist principle, speaking merely of ideas as “derived” from impressions. But we are still entitled to ask what manner of relation this derivation is, and it is hard to see how it could be anything but causality given Hume’s general outlook. Did he come to see the basic incoherence of his philosophy and seek to paper over the problem? He certainly never directly confronts the question of whether his principle is an inductive causal generalization, and hence is subject to Humean scruples about such generalizations.

            It is clear from the way he writes that Hume does not regard his principle as a fallible inference from constant conjunctions with no force beyond what experience has so far provided. He seems to suppose that it is something like a conceptual or necessary truth: there could not be a simple idea that arose spontaneously without the help of an antecedent sensory impression—as (to use his own example) a blind man necessarily cannot have ideas of color. The trouble is that nothing in his official philosophy allows him to assert such a thing: there are only “relations of ideas” and “matters of fact”, with causal knowledge based on nothing but “experience”. His principle has to be a causal generalization, according to his own standards, and yet to admit that is to undermine its power to do the work Hume requires of it. Why shouldn’t the ideas of space, time, number, body, self, and necessity all be exceptions to a generalization based on a past constant conjunction of impressions and ideas? Sometimes ideas are copies of impressions but sometimes they may not be—there is no a priori necessity about the link. That is precisely what a rationalist like Descartes or Leibniz will insist: there are many simple ideas that don’t stem from impressions; it is simply a bad induction to suppose otherwise.

            According to Hume’s general theory of causation, we import the idea of necessary connexion from somewhere “extraneous and foreign”  [2] to the causal relation itself, i.e. from the mind’s instinctual tendency to project constant conjunctions. This point should apply as much to his general proposition about ideas and impressions as to any other causal statement: but then his philosophy rests upon the same fallacy–he has attributed to his principle a necessity that arises from within his own mind. He should regard the principle as recording nothing more than a constant conjunction that he has so far observed, so that his philosophy might collapse at any time. Maybe tomorrow ideas will not be caused by impressions but arise in the mind ab initio. Nowhere does Hume ever confront such a possibility, but it is what his general position commits him to.

 

Colin McGinn

  [1] David Hume, A Treatise of Human Nature (Penguin Books, 1969; originally published 1739).

  [2] The phrase is from Section VII, [26], p. 56 of An Enquiry Concerning Human Understanding (Oxford University Press, 2007).


A Plurality of Selves

  1. Human beings are persons or selves and they have a specific nature: they have a certain type of psychology and a certain type of biological make-up. Not all possible sentient beings share this nature. For instance, humans have personal memories, consciousness, self-reflection, rationality, and a brain with two hemispheres that is in principle detachable from the body. Arguments about personal identity take these facts for granted and contrive various thought experiments on their basis: transferred brains, divided brains, memory loss, memory upload, personality alteration, and so on. Thus we arrive at theories of personal identity for humans. One well-known argument proceeds from the possibility of brain splits to conclude that personal survival does not logically require personal identity.  [1] But what about other possible types of being that don’t share our human nature? Can’t they be persons or selves too? If so, we can’t expect to derive general “criteria of personal identity” just by considering the human case: we need to look at the full range of possible cases if our theory is to have the generality we seek (and it might turn out not to have that generality).

            Consider sentient beings that don’t have brains that can be divided or transferred: the brains of these beings don’t have two equipotential hemispheres and they are distributed throughout the organism’s body (rather like an octopus). There are thus no possible scenarios in which their brain is divided and the hemispheres placed in separate bodies, so there is no way that they can survive without being identical to some future being (at least so far as the standard fission arguments are concerned). We can’t consult our intuitions about what we would say under conditions of brain bisection and relocation, since these are not possible (such surgeries would result in certain death). For these beings there would be no survival under the imagined conditions. In fact, a theory that ties personal identity to the body would be more plausible for them than in the human case: having that brain in that body would be tightly correlated to future survival. There would be no pressure to accept psychological continuity theories if it were not possible to dissociate survival from bodily identity, as in the standard thought experiments. Lesson: be careful not to accept a general theory of personal identity based on the contingent peculiarities of the human organism. That might lead to chauvinism about personal identity, i.e. ruling out bona fide persons as not really so.

            Now consider this hypothetical case: sentient beings without personal memories. We can allow that these beings possess general factual memory; what they lack is memory of their past experiences and deeds. Their earlier life is a complete blank to them, though they live and love. They clearly persist through time, but this persistence cannot be a matter of remembering different periods of their existence: they don’t persist through time because of the power of memory. So for these beings personal identity cannot consist in memory links to earlier selves, however it may be for us. We can’t say that A is identical to B because A can experientially remember what B did. These beings may have the anatomy described in the previous case, so their identity is better explained in terms of bodily continuity, not in terms of memory links. Not that bodily continuity will work for everyone: for some possible beings the body changes over time to become a different body, as with bodily metamorphosis (including the brain). Butterfly persons would persist through time while they acquired a brand new body at puberty. So it would be wrong to generalize from the no-memory type of person to all possible types, as it would be wrong to generalize from the human case to all possible cases. The butterfly adults might have vivid memories of their pupal childhood while not sharing their body with that being; in their case a memory criterion might well seem attractive. It all depends on the being.

            Here is an even more radical case to consider: the no-consciousness self. It might be claimed that personal identity consists in the persistence of a subject of consciousness over time: and certainly for conscious beings that theory has some appeal (though it doesn’t seem very explanatory). But consider a hypothetical case in which a conscious being loses consciousness during the course of life yet retains an unconscious mind; or a species that was once conscious but now, through natural selection, has abandoned that trait and survives by means of unconscious psychological mechanisms. Such beings might have memories, beliefs, desires, personalities, and so on—they just aren’t conscious. They are, if you like, zombie selves (though with an elaborate unconscious psyche). They would look and sound like conscious persons, living their lives like such persons to outside observation (maybe a bit wooden in certain respects). So they exist through time and possess the usual attributes of persons (except one)—picture them on a remote planet with a functioning civilization. For these beings a theory based on continuity of a conscious subject would be wide of the mark—more like continuity of an unconscious subject.  [2] They have a psychology and they exist through time, but there is no consciousness in there: “unconscious self” is not an oxymoron.

            We can thus refute bodily theories, memory theories, and consciousness theories as general theories of personal identity by considering the full range of possible persons. [3] Maybe there is no general theory available, just different theories for different types of being, or maybe some other theory can be contrived; what is clear, however, is that the human case is a special case, not characteristic of all possible cases. Methodologically, then, it is unwise to proceed from this case alone; that will only lead to parochialism and special pleading. We have personal memories, consciousness, and divisible transferable brains, but that is not true of all possible selves, and perhaps not of all actual ones (animals, aliens). The case is unlike theories of persistence for material objects in that the material objects around us are characteristic of material objects in general: if material identity is explicable in terms of spatiotemporal continuity or some such for the objects on earth, then it will be explicable in this way for objects elsewhere, actual and possible. There are no non-spatiotemporal objects to deal with and accommodate. Similarly for set identity: the criterion in terms of identity of membership generalizes to all possible sets—it isn’t limited to the sets we encounter every day. The thing about selves is that they can be “multiply realized” both physically and psychologically, so we don’t want to tie the concept down to a specific type of self—as it might be, adult humans with consciousness, memory, and a divisible anatomy. That would be like defining set identity purely in terms of sets of elephants or ants. The plurality of possible selves imposes a constraint on theories of personal identity, and one that is not easy to meet.

 

  2. Let me now turn to a different question involving plural selves, namely whether I could have been a different self: that is, is there a plurality of selves in metaphysically possible worlds that could be said to be possible selves of mine? The question is tricky because I clearly could not have been a different human being: I am necessarily Colin McGinn, given that a certain human being is denoted at both places. Any human being in a possible world that is not identical to this human being is not me. Someone could look and sound like me, but if they are not the same human being they are not me. No member of an animal species can ever be identical to a different member of that species. But it doesn’t follow that I could not have been a different person (associated with the same human being). In fact, this is quite easy to imagine: we just have to suppose that I undergo very different experiences in some possible world. Suppose my experiences in world w involve being born into poverty in a war-stricken land where abuse is rampant and education non-existent: I suffer various life-altering traumas and end up with emotional problems radically unlike those I now have. My personality, my memories, and my abilities are totally different in w: am I not then a different person from what I am today? The person you become is a function of your life experiences, among other things, but these are contingent, so you could have become a different person. You could even be subjected to chemical attacks that rewire your nervous system, or suffer genetic alteration in the womb. It would be the same organism, but it wouldn’t be the same person, because psychology counts in the latter respect. If we call that person you could have become “Albert”, then we can say that you might have been Albert, in the sense that the human being you are could have been associated with (“housed”) another person, namely Albert. You qua person could not have been identical to Albert, but your organism could have been his residence instead of yours. Thus we derive the paradoxical-seeming proposition, “I could have been a different person”, which translates roughly as, “My organism could have housed a different person”. The word “I” can slip from referring to a human being to referring to the person housed by that human being, but there is a clear sense in which it is true to say, “I might not have been me”: that is, “This human being might have housed someone other than my actual self” expresses a truth. Indeed, I might have been any number of people in this sense, given the plurality of possible lives I (sic) might have led. What my name actually stands for is an interesting semantic question: is it a human being or a person (self)? It seems ambiguous between the two in actual use, which is why I can say, “Colin McGinn might not have been Colin McGinn” without sensing contradiction, where the first occurrence of the name refers to a certain human being and the second refers to the person currently occupying that human being. I am necessarily the person I am, and I am necessarily the human being I am, but that person is not necessarily identical to that human being—in fact, they are not identical at all. In one sense “I am not a (particular) human being” is true, and in another sense “I am not a (particular) person” is true; but it is equally true that I am a human being and also a person! The word “I” is flexible enough to allow for all these statements to be true under the right interpretation.

            We can say, then, that across modal space I have many counterpart selves that could each have occupied this particular organism. I have no such human being counterparts—in this respect I am a unity. But I am (associated with) a plurality of selves in the sense that possible worlds contain many such selves corresponding to me. This is not a denial that proper names are rigid designators, since each of these entities has its own name: it isn’t that my counterpart selves are all designated by “Colin McGinn”, construed as a name of a particular person in the actual world. It is quite true that Colin McGinn is necessarily Colin McGinn (under the right interpretation), even though I might have had many numerically distinct counterparts that inhabit my actual body (and were all called “Colin McGinn”). This can be verbally confusing, but the underlying logic and metaphysics are not: one human being, many selves, with names for each of these separate entities. This enables us to say such potentially confusing things as, “Colin McGinn (human being) might not have been (associated with) Colin McGinn (person)”. The name seems capable of referring to both.

 

  3. I now take up another issue in which the notion of a plurality of selves suggests itself, namely whether we actually contain more than one self. It is commonly assumed that we contain at most one self, though there have been dissenters to that conservative opinion (as we will see). Hume argued that we contain zero selves, having conducted an internal survey; but most people put the number at unity after no survey at all. It is an interesting question why we do this so readily: has anyone ever actually counted the number of selves he or she contains? Is it that you can tell just by looking that you contain a single self, as you can tell by looking that you have a single body? But you can’t look at yourself and then proceed to count the number of selves in the vicinity. Is it that the ordinary use of “I” suggests unity? But that seems a flimsy way to get at the cardinality. Is it perhaps just a lazy prejudice like assuming there is only one type of person in the world? At any rate, it is apparently a general belief on the part of (human) selves that there is only one of them per organism. If we ask for a demonstration, we are apt to be dismissed as blind to the obvious. Is this just how we appear to ourselves? Maybe, maybe not, but maybe the appearances are misleading: we need a reason to accept that we really are thus unitary. At least we should be open to evidence that such unity is illusory. People used to think there was only one sun in the universe, but more careful investigation revealed a plurality of suns; might the same thing be true of the self in our own personal universe?

            Let me list some putative reasons for dissent from the common assumption: the Freudian division into ego, id, and superego; the phenomenon of multiple personality; brain bisection experiments; modular conceptions of mind; the theatrical conception of the self; division into private and public self; a general sense of self splintering (R.D. Laing, The Divided Self). I don’t propose to discuss each of these in detail; I am more interested in the general idea of multiple selves. I certainly think it is logically possible for a single organism to house more than one psychic entity deserving the name of self; and I think there is good empirical evidence that this is normal for ordinary adult humans. I am with Erving Goffman (and William Shakespeare) in believing that a given individual presents a number of distinct selves in different social contexts, and that these are deeply entrenched. The person is something dramatically constructed—and we can construct a plurality of these things. I myself have always felt that I am made up of three distinct selves—an intellectual self, an athletic self, and a musical self—with little overlap between them; and I fancy I am not alone in having this kind of impression. Is my impression to be disputed? I also wrote a novel, The Space Trap, in which I played with the idea of a phobic self and an imaginary self in addition to the self we ordinarily recognize. Such ideas are quite common in writers trying to represent the complexity of human psychological reality. People feel they are not the simple unity that we tend to speak of; there are significant divisions and separations (hence the famous Walt Whitman remark, “I contain multitudes”). Just as people feel themselves to change dramatically over time, becoming “a different person”, so they feel that at a given time there is a plurality lurking inside. Pathological conditions like schizoid personality or multiple personality are not so far from the norm, maybe just extreme cases of it. If someone sincerely believes himself to have a divided self, what evidence can be used to refute him? What kind of counting procedure would undermine such a claim? Might there not be degrees of division with the normal case of personal separation just at the far end? Whence the dogmatic conviction that there must be only one self each? We have got used to the idea that we possess more than one mind, what with the unconscious and generalized modularity, so why should the self be treated as uniquely unitary? If I contain many minds, don’t I thereby contain many selves? If Freud were right about the unconscious, surely he would have discovered another self in us in addition to the conscious self—an autonomous agent with its own agenda. True, the conscious self that is encountered in introspection has a certain salience, but why should that determine the full extent of our selfhood? And that self might divide into a number of sub-selves upon closer examination. We are often torn, internally conflicted, and doesn’t that suggest a separation of selves? No one ever told the genes, or our life experiences, that they were to construct only a single self, so the possibility is open that they construct a plurality of selves uneasily (or easily) conjoined. We are more like a constellation of selves than a single unified self, a galaxy not a solitary star.

            If this is so, then our identity through time consists of the persistence of many selves, not one. There is not a single self that exists from one moment to the next but a plurality of selves. Some of these selves may perish while others march on; all may perish at some point to be replaced by new ones. What we call our personal identity, and picture as a single persistent capsule, is really a mixture of separate elements held tenuously together: an identity of selves in the plural not the singular. Conceivably, these selves might have different conditions of identity: for example, there may be a biological self fixed by the genes that is tied to the constitution of the organism, existing alongside a number of theatrical selves freely constructed to serve suitable social purposes and revocable at will. A theatrical self may disappear at a certain time when the context no longer demands it, while the biological self goes on regardless. Once we accept a plurality of selves we have the possibility of separate existence through time. It is really too simple to speak of “personal identity” as if we had a single well-defined thing called a “person” whose identity is at issue; the human psyche is too complex for that. Surely we can imagine a being that regards himself as such a plurality and speaks spontaneously of one of his selves going out of existence while others continue. If we insist on his answering the question whether he survived such and such an event, he might give us a puzzled look and reply, “Well, this self and that self survived, though that other one didn’t”. For this being it would be wrong by stipulation to speak only of a single self that survives or fails to. To what extent we approximate to his condition is an empirical question, and one that has a good deal of evidence in its favor.

            I believe it is true to say that we experience our body as more of a unity than it really is. It comes as a surprise to discover all those separate organs each doing its specific job—and illness can deliver a jolt to our assumption of unity. If we ask after its persistence conditions, we quickly come to see that many organs are involved, and some can survive what others may not. If we insist on asking whether the body survives such and such an event, we can see that this question is too simple, given the complexity of the body (the plurality of its organs). The person is a bit like that: in principle some parts may survive while others perish (consider Alzheimer’s). I can lose one hand while retaining the other because I have two hands; why couldn’t I have more than one self where each can survive separately? If the brain realizes human personality in more than one location, then damage to one location may destroy one instance but leave another: wouldn’t this be the loss of one self and the retention of the other? Here we would have different tokens of the same type, or similar types, but different types may also coexist with one another. It may be convenient to talk as if we are a single entity, as it is convenient to talk of the body as a single entity, but both are made up of other units. What we call the self is really a plurality of distinct self-like entities.  [4]

            There is a plurality of types of self; there is a plurality of possible selves corresponding to each human (and animal) individual; and there is a plurality of actual selves within each individual. There is not just the human type of self; there is not just a single possible self for each individual; and there is not just a single actual self for each individual.      

              

Colin McGinn

           

  [1] Derek Parfit, Reasons and Persons.

  [2] In considering these beings it might help to adopt a higher-order thought theory of consciousness.

  [3] I haven’t considered so-called psychological continuity theories in relation to hypothetical persons. This is because I am not convinced such theories have ever been properly formulated, and because they seem open to obvious counterexamples concerning sufficiency (continuity is a “cheap relation”). And couldn’t there be beings that revel in their psychological discontinuities, changing their beliefs and desires dramatically from day to day? They might regard this flexibility as essential to their identity.

  [4] Of course, the parts of the body are not themselves bodies but organs of the body, and parts of the self may also not themselves be selves; but there is reason to accept that some parts of what we call our self are also self-like. If they existed alone, we would still call them selves.


A Plea for Persuasion

 

 

Jane Austen’s sixth and final novel is entitled Persuasion. There is a reason it is so entitled—it deals with the role of persuasion in human life (as exemplified in Anne Elliot being persuaded against her better judgment not to marry Captain Wentworth). But we might see the whole sequence of her novels as occupied with the topic of persuasion in one way or another. In any case, she clearly believes that persuasion is central to human life, for good or ill. It is not hard to see why: persuasion is heavily implicated in personal relations (courtship, seduction), in politics and diplomacy, in business and finance, in law, in science, in philosophy, in scholarly discourse generally, and in any form of leadership. To the most persuasive go the spoils, we might say. Accordingly, psychology has studied the workings of persuasion, exploring the principles whereby persuasion operates (the role of authority, conformity, reciprocity, commitment, liking, etc.).  [1] But philosophy has not been much concerned with the topic: the philosophy of language has little to say about it, and epistemology has not found a place for persuasion as a source of both knowledge and error. Plato was certainly interested in it because of its place in the armory of the sophists (there is good persuasion and bad persuasion), but recent philosophy has been silent on the subject. Here I will make some remarks intended to bring persuasion into the conversation. Given its centrality to human life, it might be useful to get a bit clearer about it.

            Consider speech act theory. We are told that there are several kinds of speech act, each irreducible to the others—assertion, command, question, performative, etc. Wittgenstein took this plurality to the extreme, contending that there are “countless” ways of using language with nothing significant in common. The idea that persuasion might be the common thread has not been mooted. But note that, while one can only assert that and order to, one can both persuade that and persuade to. That is, persuasion can aim at both belief and action, while assertion aims only at belief and command only at action. The OED has two definitions for “persuade”: “cause to do something through reasoning or argument” and “cause to believe something”. So persuasion is a genus with two species, corresponding to assertion and command—inducing the other to believe or to act. Whether these can be unified is an interesting question: might belief formation be a type of action, or action a result of a specific type of belief (say, the belief that this action is best all things considered)?  [2] Maybe all persuasion is persuasion-that, with practical belief the kind aimed at by command. In any case, persuasion covers both types of speech act; so we need not accept irreducible plurality. Questioning might then take its place as persuading the other to provide information (a special case of command perhaps)—“I wonder whether you would be so kind as to tell me the time”. This seems like an attractive all-encompassing conception: speech as persuasion. If it is objected that not all talk is talking-into (or out of), because speakers are not always offering arguments, we can reply that persuasion need not always be explicit—there is also implicit persuasion. All speech acts are implicitly (or explicitly) argument-like because they offer reasons for the hearer to respond in a certain way: assertion involves inviting the hearer to reason from the speaker’s making an utterance to the likelihood of its being true, and command involves getting the hearer to recognize that the speaker is in a position to enforce what he commands (or would be displeased if ignored).  [3] The hearer is always reasoning from premises about what the speaker has said and responding accordingly. So even a simple speech act is tacitly argument-like: if I just shout, “Help!” I am trying to persuade you to come to my assistance by reasoning about why I would make that noise. In a benign sense, I am manipulating you—trying to get you to do (or think) what I want. Even when a cat meows to go out she is trying to persuade you to open the door for her. We have a strong interest in getting people to act so as to promote our desires; speech is a way of making this happen, and so persuasion is central to it. In talking we are always talking people into believing and doing (compare: all seeing is seeing-as). Thus persuasion is the general type of all speech acts.  [4]

            Conceptually, persuasion is necessarily intentional: when we persuade we do so intentionally. This means that we can never try to persuade someone of what we know he will not do or believe: we don’t set out to persuade the unpersuadable. You may try to entertain or embarrass someone by talking to him even if you know he won’t be convinced, but you won’t be trying to persuade him of what you are saying. You only try to persuade people you regard as (minimally) rational. So the practice of persuasion presupposes an assumption of rationality; it takes place against a background of respect for the other as a rational agent. When this is lacking, persuasion might be replaced by brute force—making the other do what you want him to (rightly or wrongly). Thus you don’t try to persuade toddlers to do what you want them to; you simply impose your will on them. Persuasion occurs within what Kant would call the kingdom of ends—respect for others as autonomous rational agents. Crucially, persuasion calls upon consent (unlike the brute exercise of power): you are trying to get someone to agree with what you are saying. And they may not: they may reject your arguments, refusing to shape their beliefs or actions as you suggest. The consent may be of many kinds, from sexual to political, scientific to economic. Advertising tries to persuade people to buy things, but people may not consent to spending their money as you wish them to. It takes two to persuade successfully: the would-be persuader and the targeted consenter. The persuader is trying to secure the free assent of the consenter. There are many possible ways of doing this, ranging from outright psychological manipulation to the purest rational argument; but there is no skipping the obligation to secure assent if persuasion is the name of the game. Thus persuasion is always preferable to coercion and should not be regarded as a special case of coercion. Never coerce where you can persuade.

            Persuasion may be a step up from coercion, but it is still inherently problematic: this is what so exercised Plato, as well as Jane Austen. For any good act of persuasion there are many bad acts. There is education, but also propaganda; there is logical reasoning, but also bullshit and manipulation. Moreover, it is not always easy to tell one from the other (they don’t come in different color ink). The credulity of human beings is as obvious as their educability. People can be persuaded of the most arrant nonsense if it suits them to be so persuaded.  The con man can be as convincing as the wisest sage. The trick is to be persuadable just when one ought to be persuadable, but that is no easy task. Memes and fakery lurk around every corner. The Internet is a cesspool of toxic phony persuasion. It’s enough to make you want to give up on persuading anyone of anything—abandoning the very idea of persuasion! But no, we must persist in sorting out the wheat from the chaff. I am laboring the obvious, but we must always be aware of the potential evil inherent in persuasion, always on the lookout for its pernicious forms and manifestations. Just think of human history without pernicious persuasion!

            Logically, persuasion is a four-place relation: x persuades y of p by means of m. We can allow for reflexive persuasion, as when you persuade yourself of something, but persuasion is always directed at some object. The value of p might be a proposition or an action, depending on whether the speech act is assertive or imperative. There must always be a means m that may vary while keeping p constant: you may try different m to secure the same p. This too is essential to persuasion: it is not like logical proof, but a matter of individual psychology (Euclid was more of a deducer than a persuader). Persuading is like teaching someone to dance: there are many ways to do it so long as you get them dancing (but please, no coercion!). What we must not do is persuade by lying (except in very special cases): in the general case, the recipient assumes that the means you are employing does not involve outright falsehood—that is part of the pact of persuasion. I am prepared to be persuaded by you, but only if you tell me the truth. Truthfulness and persuasion go together. So persuasion is quite a complex operation, not one available to organisms generally. Add this to the condition that persuasion is always intentional and we get the result that an agent can persuade only if she possesses reflective knowledge of the means and ends integral to a given persuasive act (this includes the meowing cat). And you can only be good at instantiating this relation if you are skilled in the arts of persuasion; indeed, you do well to learn those arts as you would any complex skill. You should take Persuasion 101 and possibly get an advanced degree in it (if you want to be a diplomat, say). Practice your persuasive skills daily (the good kind, of course).
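            As a minimal formal sketch of the logical form just described (nothing here goes beyond what the paragraph itself states), the relation can be written:

\[
\mathrm{Persuades}(x,\ y,\ p,\ m)
\]

where \(x\) is the persuader, \(y\) the person addressed, \(p\) the proposition or action at issue, and \(m\) the means employed; reflexive persuasion is then simply the special case in which \(x = y\).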

            We should not neglect the use of the concept of persuasion in “I am persuaded that p”: what kind of state of mind does this describe? This state could come about otherwise than by some other person persuading you; it could issue from the facts themselves (and we do sometimes speak of facts as persuasive, usually in relation to a theory). This locution appears to suggest something stronger, more potent, than mere belief: I don’t just believe that p; I’m prepared to act on it. Thus it edges towards the conative—it is motivational. If someone announces that she is persuaded that eating meat is wrong, we expect abstinence from meat eating (note the conceptual connection between persuasion and doing). Thus the concept appears to straddle belief and desire, i.e. it suggests motivating beliefs. This seems like the right notion to employ in moral psychology: we don’t merely believe certain moral principles; we find them persuasive. So the concept of persuasion seems to have a role in moral motivation: to be persuaded that eating meat is wrong you have to take yourself to have very good reasons for not eating meat. You don’t just think it’s wrong; you are persuaded it’s wrong. That is your persuasion, your conviction, and your commitment. When Anne Elliot was persuaded not to marry Captain Wentworth she acted on it; it wasn’t just a state of her cognitive apparatus. Jane Austen’s novel has a title that denotes both a verbal act and a state of mind: the act of persuading and the state of being of a certain persuasion. We could say that though Miss Elliot was persuaded at age 19 not to marry Captain Wentworth, it was never her persuasion that she should not marry him—which is why she did marry him seven years later. It was not her deep conviction and she regretted her earlier decision. You can be persuaded to do something without it being your persuasion.

            A cluster of concepts has captured the attention of philosophers: knowledge, belief, certainty, intention, assertion, reason, justification, testimony, and argument. I suggest we add the concept of persuasion to this list.  [5]

 

  [1] A classic text is Robert Cialdini, Influence: The Psychology of Persuasion (1984).

  [2] I discuss this in “Actions and Reasons” in Philosophical Provocations (2017).

  [3] I discuss this view in “Meaning and Argument” in Philosophical Provocations.

  [4] We have become accustomed to speaking of speech as communication, but that term is loaded, connoting the transfer of something from speaker to hearer (the OED has “share or exchange information or ideas”). But the persuasion conception suggests rather the idea of influence: speaking is causing the hearer to react in a certain way, not giving him something. The speaker is exercising a certain power over the hearer, not conveying something precious.

  [5] Here is an interesting question for the new field of persuasion studies: how do performatives persuade? Not by stating language-independent facts but by the very issuing of them. I am persuaded that you have promised to meet me precisely because I just heard you utter the sentence “I promise to meet you”. This type of persuasion is very effective because the speaker doesn’t need to rely on the cooperation of outside facts—the speech act alone suffices to make it so. Hence performatives are uniquely persuasive (possible paper title, “Persuasive Performatives”).


A New Theory of Color

 

 

I will first state the theory as simply and clearly as possible, and then I will consider what may be said in its favor. I call the theory “Double Object Dispositional Primitivism” (DODP) or just “the double object theory”.  [1] Its tenets are as follows: When you see an ordinary object the color it is seen to have is a simple monadic property of the object’s surface (not a disposition). This property is generated from within the mind and is projected onto the seen object. The mind has a disposition (power) to project color qualities onto seen objects, and it is a necessary and sufficient condition for being (say) red that the object should elicit this disposition. The object is red in virtue of the fact that it triggers the mind’s disposition to project redness onto objects. Another type of mind (that of a Martian, say) might have a disposition to project blueness onto the same objects, and then those objects would be blue for them. Color is relative. In addition to this object there is another object (intuitively, an object of physics) that is not itself red but which has a disposition to interact with the first disposition to give rise to experiences of red. This disposition is by no means identical to redness, but it is closely related to it: the object that has it triggers perceptions of the primitive property of redness. The physical object interacts with a mind to make that mind activate its disposition to see things as red; it is “red-inducing”. The object induces perceptions of red (in certain perceivers) in which the primitive property of redness is projected onto an object that is not identical with the inducing object. So there are two connected ontological levels: a perceptual object that has the primitive property of being red, and a physical object that lacks that property but which has a disposition to cause perceptions of red (in conjunction with the mind’s disposition to see certain objects as red). It is strictly false to describe this second object as red, though it is natural to do so given its actual role in producing experiences of red. In short: perceptual objects are primitively red while physical objects are dispositionally red (i.e. not really red). We see the primitive property of redness, but we never see the dispositional property associated with it. These two properties are possessed by two distinct objects (think the manifest image and the scientific image).

            The mind has a disposition to see certain objects as red and this disposition can be triggered by physical objects. When this happens an object is seen as red, but that object is not the triggering object. The mental disposition can in principle be triggered in other ways too, as when a brain in a vat sees things as red because of stimulation of the visual centers. Here no physical object operates to trigger the disposition (i.e. an object in the perceiver’s environment acting on the eyes) and yet an object is seen. Hallucinated objects can be red too. This is hard to account for under the classical dispositional theory, since that theory supposes that only (existing) physical objects with certain dispositions can be red. But to be red is not just to be a physical object that is disposed to produce red experiences by normal perception, because hallucinated objects can also be red. The important factor is the mind’s disposition to generate such experiences, not the de facto dispositions of the physical world. Generally the mental disposition is triggered by the usual environmental objects, but it can also be triggered in other ways, as with the brain in a vat scenario. Suppose I say, “That tomato is red”: this is true if I am referring to a perceptual object of a certain kind, whether real or hallucinated, but not if I am referring to the physical object associated with that object. The objects of physics have no color (they lack primitive color properties), though they do have dispositions to produce color experiences (in conjunction with suitable minds). Those objects had no color before perceivers came along and they have none now, though they do now possess a disposition they lacked earlier. They do not have secondary as well as primary qualities (or else physics would be required to mention them). The objects that have colors are different objects—perceptual objects. And colored objects can be perceived even by a brain in a vat. To use a familiar terminology, phenomenal objects are red but noumenal objects are not, despite being closely tied to phenomenal objects.  [2]

            What nice things can be said about the double object theory? First, we do justice to the phenomenal primitiveness of colors, their manifest simplicity, and to the fact that we can see them (as we could not if colors were identical to dispositions).  Second, we acknowledge the role of mental dispositions in grounding attributions of color, as well as the role of external objects in eliciting perceptions of color. We are not completely wide of the mark in calling physical objects colored—though it is a question how often we really do this, given that we are normally talking about perceptual objects not the objects of physics. The relationship between our ordinary ontology of tables, tomatoes, and tulips, on the one hand, and the objects described in physics, on the other, is obscure; and it is by no means obvious that we speak of the latter when referring to the former. In any case, according to DODP there are two objects at play here, one of which is red, and the other of which is not (though it has a disposition to cause experiences of objects of the first kind to look red). It is not one and the same entity that is both red and disposed to look red. Certainly, the color red is not the categorical basis of the disposition to appear red—that will be a matter of the physical properties of the object belonging to physics. There are three levels at work here: the physical properties of the physical object that ground its disposition to give rise to appearances; the disposition itself; and the primitive property that perceptual objects possess (and appear to possess). Only the last of these is perceptible. The crucial component is the disposition of the mind to see things a certain way: once that disposition is activated color comes into the world, projected by the mind.

            What might be said against the theory? Perhaps some will find the doubling of objects objectionable—they will prefer to attribute color to the objects of physics. In fact the spirit of the theory would be largely preserved by this move, with the primitive color properties instantiated in the same object as the disposition to cause color experiences—we can still keep these properties distinct, as well as invoke projection to explain the presence of the primitive property in the object. I have formulated the theory in the double-object way because I favor this position on independent grounds (having to do with hallucinations, intentional objects, and brains in vats). I also think it undesirable to locate colors in the world studied by physics, since physics makes no mention of these properties (their relativity to perceivers disqualifies them to begin with). I think of perceptual objects as an ontological layer over and above the objects described by physics (compare Eddington on the “two tables”). Artifacts and organisms, in particular, should not be seen as individuated and constituted by the categories proper to physics, but as a distinct ontological layer (though no doubt dependent on the physical level in some way). Color properties attach to this common sense level not to the rarefied level occupied by physics (the “absolute conception”). Still, it would be possible to apply the apparatus of DODP to a single-level ontology, the essential idea being that colors are primitive properties bestowed on the world by dispositions of the mind (coupled with the action of physical objects).

            That idea might itself provoke further dissent: for how can the mind generate these properties from within its own resources? Isn’t this mysterious and magical-seeming? I totally agree: how the mind (brain) manufactures color properties is indeed mysterious, like many things about the mind. But this is not a fatal objection to the theory, simply a fact about the mind that needs to be acknowledged, i.e. that there is much about it that we cannot explain. Other theories avoid such mysteries by advocating reductive accounts of color—as that colors are reducible to electromagnetic wavelengths or that they are logical constructions from subjective qualia conceived as inner sensations. But these attempts at reduction are implausible (for reasons I won’t go into), the primitive property theory emerging as superior—though it does indeed lead to problems of intelligibility. Where do these remarkable properties come from? Does the mind create them itself or find them elsewhere (in a Platonic world of color universals, say)? How exactly are they “projected” onto objects? The theory raises plenty of puzzles, to be sure, but it might yet be true, since the truth is sometimes mysterious.  [3] What the theory does is arrange the facts into an intelligible structure, aiming to respect phenomenology and logical coherence. Instead of working just with physical objects and their dispositions, it invokes an extra layer of non-dispositional properties and places them within a mind possessing certain projective dispositions. Perceptual objects thus have exactly the properties they appear to have, while we avoid treating colors as mind-independent. Colors have no place in physics, but they are front and center in our ordinary experience of things, just as they seem to be.

 

Colin McGinn  

  [1] I first wrote about color in The Subjective View (1983), then in “Another Look at Color” (1996), and now in this paper (2018). At each point I have modified the position that came before, while retaining the basic outlook. The successive theories have become more complicated as time has gone by.

  [2] This terminology is not strictly accurate because “noumenal” is generally taken to entail “unknowable”, but the objects of physics are not unknowable. Still, the terminology may be helpful in capturing the structure of the position.

  [3] It is true that we should not multiply mysteries beyond necessity, but necessity sometimes requires that we face up to mysteries.


A New Riddle of Induction

 

Suppose that tomorrow the sun does not rise, bread does not nourish, and swans are blue. Does that show that nature is not uniform, that the past is not projectable to the future, and that induction has broken down? Can we conclude that what we observe tomorrow does not resemble the past? Not unless we know the past—unless we know that the sun used to rise every day, that bread used to nourish, and that previous swans were white. But memory is fallible and vulnerable to skepticism. If we are wrong about the past in these respects, then when we suppose that the future diverges from the past, we are mistaken—actually the future does resemble the past (blue swans, etc.). So unless we have an answer to skepticism about the past we cannot infer from an apparent breakdown in the uniformity of nature that there is a real breakdown.  [1] Given that we have no such answer, we cannot know that the future fails to resemble the past. If bread never actually nourished in the past, then its failure to nourish tomorrow is perfectly uniform and projectable from its past properties. So it is not just that we can’t establish that nature is uniform; we also can’t establish that it is not uniform. We can’t describe a situation in which we discover that the previous laws of nature have broken down, or were not laws after all, for it is always possible that we are wrong about how things were in the past. This makes the skeptical problem of induction even harder. We can know that our predictions have been falsified, but it doesn’t follow that we can know that the future does not resemble the past, since we could be wrong about the past. Even a total failure in all our inductive predictions would not establish that the future diverges from the past. Nature might be completely uniform and yet appear to us not to be. We can’t know that nature will continue the same into the future and we can’t know that it has not continued the same.

 

  [1] There are two sources of potential error about the past: first, we might just be wrong that bread ever nourished (we have false memories); second, we might have made an inductive error about bread in the past, inferring that all past bread nourished from the limited sample of bread we have encountered (maybe the uneaten bread was poisonous). If we make the latter error, our observation tomorrow that some bread is poisonous actually accords with the way bread was in the past, so there is no breakdown of uniformity.


A Model of Language Acquisition

 

 

Psycholinguists report that the child “internalizes” the grammar of his or her native language. Beginning with an innate schema of universal grammar (UG), the child hears the speech of adults and somehow extracts the rules that govern the particular language in question. That heard language is external to the child’s mind, but it becomes internalized as the language is gradually acquired. At some point the acquired language is externalized in the form of overt speech, as the child’s inner competence gets expressed by means of a sensorimotor system. We have internalization followed by externalization. But what kind of internalization is this—in what form is the outer language internalized? A natural answer is: memory. The child remembers what he or she has heard, suitably processed and generalized, and acquires the ability to speak by using these memories. Memories are internal, so that is the form the internalization takes: outer speech is internalized in the form of memories. The child possesses an innate internal UG combined with a memorized internal PG (particular grammar)—and also a lexicon of some sort, innate or acquired. Innate schema plus memory equals linguistic competence.

            I want to enrich this picture somewhat. In addition to internal memories, I want to say that language acquisition involves inner speech: the child first learns how to speak inwardly, only subsequently expressing his or her linguistic mastery externally. So the internalization involves becoming a linguistic agent—a speaker. It is not just a matter of acquiring memories of what is heard, but also of acquiring an ability to engage in internal speech acts. Memory is presupposed in this, but it is not all that is going on internally. When outer speech develops, inner speech is hooked up to a sensorimotor system, typically hearing and oral action (but in the deaf it can be vision and manual action). The child does not go directly from hearing a language and remembering it to being an agent of external speech; she takes the intermediate step of acquiring internal speech, a type of purely mental action. So the internalization consists of more than stored memories; it is full-blown internal linguistic agency. The external speech of others is internalized in the form of internal speech in oneself.

            The psychological structure here may be compared to mental imagery. A person perceives an external object, forms a mental image corresponding to that perception, and then acts on the basis of the image (say, by drawing a picture of the imaged object from memory). This involves an extra step beyond merely perceiving an object and acting on the perception: an additional psychological layer is introduced. It is apt to describe the process of image formation as a type of internalization: an external object is internalized in the form of an image (not just a perception). The internal image acts as a kind of replica of the external object. Likewise, someone may hear a piece of music and retain the tune in memory, rehearsing it in her mind silently: this is not just storing the tune in memory, but also actively engaging in musical performance internally. We can think of this as inner musical action analogous to outward musical action like singing or playing an instrument. Learning to sing or play an instrument will typically involve developing the ability to perform inwardly—inner musicianship, we might say. You don’t just hear the violin with your ears and then play it outwardly; you also hear it inwardly and rehearse inwardly. You have internalized the (sound of the) violin. If someone lacked the ability to perform inwardly, they would presumably lack something important to learning the instrument. We might say that “musical imagination” is an important (essential?) component of musical ability. Imagination is “subject to the will” (as Wittgenstein says) and musical imagery is as willed as other forms of imagery. You can whistle with your mouth or you can whistle in your head. And there are other forms of internalization that proceed in much the same way—for example, internalizing a set of moral commands. It isn’t just that the child hears the moral commands of adults and commits them to memory, thus acquiring moral competence. He or she also incorporates these commands into an internal moral system—commonly known as conscience. Freud took the superego to consist of internalized parental commands—telling the child what to do and not do. This was taken to be essential to moral development: not just remembering what others have commanded but also commanding oneself—the “voice of conscience”. Whether Freud was right about the details doesn’t matter: what is important is that moral development involves the internalization of moral prescriptions—you tell yourself what to do (a form of inner speech). So: imagery, music, and morals incorporate this kind of strong internalization, as well as language. They are not like merely memorizing the dates of battles or the capitals of countries, because they involve inner action analogous to outer action. In particular, language acquisition goes through a stage of acquiring a highly structured set of internal abilities generating inner speech acts. Conceivably it might stop at that point, never progressing to the next stage of acquiring an ability to communicate—a language dedicated purely to thought. Language acquisition is not just a matter of stimulus-memory-response, but of stimulus-memory-inner action-response. To put it baldly, the child primarily acquires inner speech, which may or may not lead to outer speech.

            This is an empirical hypothesis. I don’t know if it is true of actual human children. It certainly could be true of logically possible children, and it fits with the fact that children do acquire both inner and outer speech. Investigators would have to examine language development to see whether there is evidence that inner speech is acquired before outer speech. That might not be so easy to determine, given that inner speech is silent and invisible. But we could observe whether the child engages in self-directed monologue or shows signs of internal contemplation. Perhaps such investigations have already been undertaken: I am merely suggesting a plausible-sounding model that might or might not receive empirical confirmation. What I do think is that such a model would fall foul of traditional behaviorist prejudices and so might not be taken as seriously as it should; and also that it fits a general conception of learning that has many merits—the idea of learning as internalization in a strong sense. I gave several examples where such internalization operates, and the case of language seems a natural addition to the list. The alternatives to the hypothesis are that inner speech develops in tandem with outer speech, but does not precede or enable outer speech; or that inner speech is the internalization of the child’s own outer speech. Obviously these are empirical questions, but the hypothesis I offer seems to me antecedently at least as plausible as the others: inner speech is the mechanism whereby outer speech develops, not merely something additional to it or the result of it. For it provides a psychologically natural way to construct linguistic competence: first master language internally without worrying about how it will be publicly expressed, and only then search for a way to link linguistic competence with the body—whether the mouth or the hands, the ears or the eyes. For example, if you are mute but not deaf, you will naturally acquire a language by internalizing what you hear, but you will not externalize it by using your mouth. The ability to engage in communicative speech goes significantly beyond merely mastering grammar and vocabulary, which can be done purely inwardly. I imagine the child hearing outer speech, rehearsing it in his head, acquiring the ability to form internal linguistic strings, playing with these strings inwardly, and only later wondering how best to express his burgeoning thoughts to others.

            This picture fits well the idea of language as primarily a vehicle of thought not communication. If language is mainly a medium of thought, its natural form of existence is as an internal symbolic system, silent and solitary; no need to recruit bodily organs that can produce externally observable signals to others. So the child first internalizes outer speech to aid it in cognitive processes—employing a language of thought—and then uses what has been so acquired to lever external communicative speech into existence. First we have symbolic thinking, then symbolic communication—the inner as the foundation of the outer. UG is already internal and intrinsically unconnected to communication; so PG can occupy the same psychological territory—an internal system dedicated initially to thought. Silent speech is the natural medium for thought, so it develops first; only subsequently do noise and gesture enter the picture to permit language to be used for speaking to others. The larynx is very much a Johnny-come-lately. The speech centers of the brain make contact with the larynx late in the game, and might not make contact with it at all. After all, if people had no use for communication, they would still need a language to express their thoughts inwardly: a language of thought has a point even if a language of communication does not. Granted that language enhances thought, silent speech is the way to go, the noisy kind being redundant if communication is not on the menu. You are going to need a language to enhance thought no matter what, so you might as well get that under your belt as soon as possible; how far you will need it for communication is a far more chancy affair and can be left as a secondary accomplishment.

            Inner speech is certainly a reality of adult linguistic life. For solitary individuals it constitutes most of linguistic life, and even for the very social it rumbles as ceaseless background chatter. It also mingles with outer speech in myriad ways. An interesting question is whether inner speech regularly precedes outer speech: do we first say it inwardly and then give it outward utterance? We (our brains) certainly plan what to say before engaging the larynx, constructing in silence a pre-formed string of words (often only milliseconds before the utterance). This is a form of inner speech, the production of symbolic strings independently of external manifestation, and it precedes external speech; so we can say that adult outer speech is subsequent to inner speech, expressing what existed antecedently. In the child language acquisition proceeds from inner to outer too, according to the hypothesis: outer spoken language externalizes a prior inner language. This is certainly contrary to the behaviorist assumptions of nearly all psychology (and philosophy) in the last hundred years, but being contrary to that tradition is surely a mark of truth in these more enlightened times.  [1] The whole point of the mind is that it cannot be observed; any theory of its achievements should respect that fact.

 

Colin McGinn              

              

  [1] The idea that the primary reality of language is its appearance in outer speech is shared by nearly all approaches to language in the last hundred years, but it is belied by the simple fact that inner speech is common and arguably basic. Language is essentially larynx-independent, sub-vocal not vocal.
