Action and Acting

Jack gets up, goes to the kitchen, opens the fridge, takes out a beer, pops the cap, and drinks it. Why did he do that? Because he wanted a beer and thought there was one in the fridge. The philosopher says that Jack’s action is explained by his having a desire for a beer and a belief that this course of action will bring about the satisfaction of that desire. The action fits the desire via an instrumental belief. The belief-desire pair constitutes the agent’s reason for acting; some say it causes the action (others deny this but still hold that the action is explained by a belief-desire pair). Isn’t this plain common sense? You want something, you figure out a way to get it, and you act based on those two factors. That all sounds very reasonable and convincing: actions are explained by the agent’s having desires and beliefs that lead to the action in question. This is what folk psychology is all about.

But consider an actor’s actions. John is sitting on a sofa on a stage with an audience in front of him. He gets up, walks across the stage, opens a fridge, takes out a beer, uncaps it, and drinks the contents. Why did he do that? Was it because he fancied a beer and figured the fridge would contain one? No: he had no desire for a beer, and did not form an instrumental belief about how to satisfy such a desire. So John appears to be a counterexample to the classic story: his action, though just like Jack’s, has no such explanation. Yet it was intentional, intelligent, and motivated. John might even hate the taste of beer; he was merely pretending to desire a beer and acting so as to satisfy that non-existent desire. Pretending to want a beer does not entail wanting a beer (maybe the opposite), so John’s action cannot be explained in the way Jack’s was. Can it be explained by any belief-desire pair? Maybe this one: he desired to give the impression that he desired a beer, and he reasoned that by acting as he did he would give that impression. This cannot be quite right, however, because then he wouldn’t be surprised if a member of the audience handed him a beer—after all, he contrived to give them the impression that he wanted one. He pretended to want a beer in a setting in which such a response is contraindicated. We needn’t go further into the precise nature of John’s psychology, noting simply that he was engaged in an act of pretense in which he desired to give a certain impression. The point is that the impression he desired to give is a false impression: he had no desire for a beer. His action is explained (according to the belief-desire model) by another desire—the desire to be perceived in a certain way. Thespian action, then, is different from ordinary action, requiring a wrinkle in the explanatory apparatus. It is governed by a special sort of desire—the desire to be seen in a certain way, as an actor portraying a character. The belief-desire theorist thus breathes a sigh of relief that his preferred model covers the case of the actor’s actions (though there may be a lingering disquiet). The actor just has a funny sort of desire.

Theatrical action is not confined to the conventional setting of the stage. People often contrive to give the impression that they have desires they don’t have (or don’t have desires they do have) for motives both innocent and nefarious, thus inviting an explanation that gets things wrong. I may want you to think that I like you and wish to spend time with you, while all along hating your guts: in such a case my actions are explained by my wishing to give you a false impression of my true feelings. I don’t spend time with you because I desire to but because I want you to think that I desire to. This kind of point has prompted some theorists of human psychology to propound a theatrical view of human behavior in a social context. We all know Shakespeare’s line about the world being a stage and us being merely players on it, and Erving Goffman did much to entrench this view of social interactions. I won’t go into the reasons for holding this view, merely observing that if it is true then a great many of our actions are like an actor’s actions. We perform roles designed to convey a certain impression—dutiful husband, kindly professor, tough guy—without actually having the desires we project. Our actions are a front we offer to the world in order to present ourselves in a certain light, and they may not correspond to our actual desires.[1] Goffman spoke of the “theatrical self”; we may equally speak of the “theatrical agent” performing “theatrical acts”—acts of pretense, simulation, deception. Moreover, the roles we play can become internalized, so that we don’t shed them even when unobserved: your entire personality can be the result of habitual role-playing reinforced by social pressures. Maybe we rarely act according to our true desires (whatever that might mean) but rather act in such a way as to project a desired impression—even to ourselves. Suppose that were so; suppose indeed that there are people who never act on their actual desires in the manner of Jack but always act on theatrical desires in the manner of John. Everything they do is impression management guided by the desire to appear a certain way, never by what they really want. For example, someone might have a longing for beer but live in a social world in which that desire is frowned upon, so they always act so as to give the impression that they hate the taste of beer (sexual desires might provide a more obvious example). The point I am making is that the standard story of human action assumes that it is not thus theatrical, but this is an empirical and contested question. In fact, ordinary action is shot through with such histrionic elements—acts of theatrical pretense. This type of action needs to be included in any general theory of the nature of human action (animals are not similarly histrionic).

Once this point is acknowledged doubts arise about the standard scheme. Is it really true that the average human desires to give certain impressions to others? Maybe the professional actor does, but what about someone who acts so as to cover up perfectly acceptable desires that other people happen to disapprove of? In the bad old days, did homosexuals really want to act like heterosexuals? Did they have a yearning to present themselves as other than they were? Did they come home after a long day of acting straight and feel happy about their day, feeling that their desires had been satisfactorily met? The truth is that they judged their actions to be in their best interests all things considered—their true desires notwithstanding. It is simply a misuse of language to say that they desired to act straight. They desired to act according to their own sexual preferences, not according to how they were expected to act. Pretending may sometimes be socially necessary, but it is not always enjoyable. The case of Jack gives a quite misleading impression of the general nature of human action, as if we always act so as to satisfy our real desires; but often we have to dissimulate, suppressing what we really desire in order to manage social interactions. We do what we think will serve us best (though with understandable lapses), not according to what we really want. This gives a very different picture of human agency from what the standard model implies. It is not so much desire that prompts our actions as social necessity (often internalized). Action is all about maintaining self-image, not the free flow of appetite. The standard model forgets that we are social beings whose actions must be tailored to fit with the demands of others.[2] That can be a strain, not a release—the denial of desire, not its free expression. We often act contrary to our desires, not from them.

It is fair to say that this perspective represents human motivation as more cognitive than appetitive. We must think about how we are perceived and act accordingly, not just go with the flow of internal desire. The actor is always thinking, calculating, reflecting. Hamlet is nothing if not a thinker: not for him the spontaneous expression of desire. The gay man must be constantly vigilant, constantly monitoring his behavior, for fear of exposure (in the bad old days). Our lives are burdened with such thoughts: we can’t act without thinking about appearances most of the time. Intelligent social judgment is required of us (perhaps this is why people often need to “let their hair down”). So the correct model is not unmediated desire spilling out into action but tightly controlled judgment about what is best socially. This makes a cognitivist view of moral motivation less exceptional—more the standard case. True, moral action is not like going for a beer when you feel like one; it is more like judging what would be best for you from a social point of view. Genuine desire can be the cause of action, but very often the cause is something more cognitive and ratiocinative. Value-directed reasoning is the normal case. Self-control is the rule, given that we are actors on a stage, not giving vent to what we happen to be feeling (the professional actor must often suppress his actual feelings on the night in order to turn in a decent performance). Action is rarely desire made visible, but more desire filtered and disguised. Even Jack, as he heads for the fridge, is wondering what his mother would think of all his drinking, resolving to put on a good show of sobriety when next they meet; perhaps he even pictures seeing her in his mind’s eye and stays where he is on the sofa (he’s already had a few). He must play the part of a responsible drinker, not someone who simply can’t resist the booze (alcoholics are notoriously fine actors). Even Jack is a skilled thespian, the rival of John (“No thanks, Mum, I’ve had two already”). Do we ever act on a desire and not wonder what people might think of us? Our social role is always part of our practical reasoning, even when acting alone. That was the insight of Shakespeare and Goffman: we are inescapably social beings assiduously managing our image. We are not solitary creatures free to act on whatever we feel like without considering the opinions of others. The standard belief-desire model is unrealistically utopian, picturing us as isolated beings free to express whatever desires we may have. In reality our actions are always socially mediated, even if only notionally. It’s always: how would this look?

There is another respect in which the standard model is unrealistic. When Jack goes to get his beer he performs a large number of actions: his action of getting a beer consists of a series of sub-actions, such as putting his hand on the fridge door. How far down this subdivision can go is an interesting question, but let’s stop at the relatively molar level. Now does Jack have sub-desires corresponding to these sub-actions—did he desire to put his hand on the fridge door? Well, this is an action that could have occurred outside of the sequence of actions Jack performed in obtaining his beer, so presumably there is a desire that corresponds to it. That is the standard model: for each action there is a desire-belief combination corresponding to that action. But did Jack really desire to put his hand on the fridge door? He might have been indifferent to it, or he might have been actively opposed to it—perhaps he is abnormally sensitive to cold. In general friend Jack doesn’t like making an effort at anything, including getting a beer inside him. It is true that he judges that it is necessary for achieving his goal that he should put his hand on the fridge, but it is a stretch to say that he desires to do this. In general it is not plausible to suggest that we desire to do every part of the means we employ to obtain a given end. Means are just undesired necessities. People don’t generally want to study; they do it because it is a necessary means to obtaining an end that they do want. But then it is not true that every action is explained by a corresponding desire. These sub-actions are explained by a cognitive state to the effect that this is a necessary part of the means to a desired end; they are not themselves desired. You might try saying that Jack’s action of clutching the fridge door is explained by his desire for a beer, but that doesn’t explain the specific character of the sub-action in question. Desire may initiate the process and explain its whole existence, but it doesn’t explain the details of the process—belief does (or something like it). But then there are actions that are not explained by a desire, since the overall desire can’t explain them. We simply don’t always desire what we intentionally do, though we may judge that the action is necessary in the circumstances. Don’t we often desire not to do what we do, though deeming it necessary given our other goals? If you ask a person digging a hole whether he wants to dig a hole, he will likely say no, but then point out that it is necessary if he is to reach the water hidden underground. He wants to get the water but not to dig the hole that exposes it. Jack may in fact think it’s a pain in the butt to go to the fridge, but how else is the poor man to get a beer? Indeed, Jack may not really want a beer at all (!) but rather thinks it is a necessary means for relieving his indigestion (my own mother used to drink Guinness in order to put on weight, not because she liked Guinness). Lots of the time we act so as to achieve distant goals without desiring to perform the actions necessary to achieve them. Shall we say that we desire to stay alive (or experience pleasure) and this explains all our actions when combined with suitable instrumental beliefs? That would be a reductio of the standard model, not a vindication of it. The attraction of the theory is that it offers to explain each action by means of a distinctive belief-desire pair, but this breaks down for complex actions.
We don’t desire to do an awful lot of what we in fact do.

The picture that emerges is that judgment plays a far greater role in human action than has often been supposed. It is not that existing desires trigger actions with a little help from beliefs; rather, judgments about what is best are the main determinants of action. This is true for actions geared to social relations as well as for actions that make up sequences of actions aimed at achieving certain ends (the ends might not be desires either but value judgments). Desires can play a role but they are generally mediated and filtered, suppressed and dissembled, not given free rein (or reign). As agents we are much more cognitive creatures than appetitive ones (though these categories are themselves rather too simple); thought is the main engine of action. We think about how our actions will be perceived by others, and we think about what we must do in order to achieve our aims—neither of which has much to do with our actual desires. Action is more embodied thought than embodied desire (in humans anyway[3]).

 

Colin

[1] Even when we are acting on our actual desires, say when eating lunch, we are conscious of the impression we make on others (though they might not be present), so there is always the influence of an accompanying higher-order desire to create a good or passable impression. You desire that your desires not be expressed repulsively or awkwardly or embarrassingly. You monitor your desire-satisfying actions for their social impact. Hell, as Sartre remarked, is other people. Or, as Freud observed, all our actions are subject to criticism from an internalized parent: we are acting a part to gain parental approval. We must always please a demanding audience.

[2] Can we imagine creatures that literally never act on their desires but always shape their behavior to fit social expectations—pure politicians, as it were? They are always insincere, dissembling, repressing their real desires. So do they never eat or relieve themselves? Suppose this is done for them so that no action on their part is necessary: then it seems that their behavior could be entirely governed by other-regarding desires to the effect that they create a good impression. It’s wall-to-wall theater from morning till night, pure pretense. This is a far cry from the standard model. These creatures have desires, perhaps very much like ours, but they never ever act on them. Their actions are never explained by their desires (save the desire to create a good impression in others—which may not be a real desire anyway but simply arise from fear). It is logically possible never to do what you want but to act nonetheless.

[3] My pet African lizard, Ramon, agrees, seeing a stark contrast between his reasons for action and those of his keeper. He just bites at his lettuce whenever he feels like it without regard for what anyone else might think of him, whereas his keeper has to consider how his actions will be perceived by others. Ramon is no actor; I perforce am. Do I wish I had his freedom of action? You bet I do.


Manners and Morals

The topic of manners, good or bad, is neglected in philosophy, receiving scant attention in moral philosophy.[1] Perhaps it is felt to be trivial compared to the weighty matters of morality. But I think the topic is not without philosophical interest and I propose to explore it programmatically. First, what is meant by “manners”? The OED provides some useful hints: “manners” is defined as “polite or well-bred social behavior”. Turning to “polite” we find “respectful or considerate of other people” (with the word deriving from a Latin word meaning “polished or smooth”). Then “respect” is rendered as “due regard for the feelings and rights of others”.[2] Clearly the notion is normative and redolent of moral notions: good manners consist of correct actions in relation to other people concerning their feelings and their rights—actions of respect and consideration. For example, it is good manners not to interrupt people when they are speaking: this is something that a well-mannered person does not do, because he should not do it. It is good manners to greet people when you meet them and to signal to them when you are leaving, not to raise your voice unnecessarily, and to consider their feelings with respect to their appearance and deportment. In some cultures bowing is considered good manners; in others smiling is regarded as polite. Not to act in such ways is regarded as reprehensible, mildly or strongly. Children are therefore taught to behave politely.

Moving beyond the dictionary, I would say that acting politely involves three main elements: it is theatrical, symbolic, and self-referential. By “theatrical” I mean that good manners are a type of performance akin to acting on a stage: this is why they are often ritualized and stylized, and people can vary in their ability to act politely. You have to put on a good performance—it is no use giving a half-hearted bow or emitting an inaudible “hello”. In previous ages good manners were often quite elaborate, requiring much training and practice, especially court manners, or how to “treat a lady”. Even now people being presented to the Queen have to execute a series of theatrical maneuvers in order to conform to protocol. Professional actors can be expected to have excellent manners. Manners often require pretense, since one may not particularly like or approve of the person towards whom good manners are expected. The polite action may not be sincere; indeed good manners are supposed to counter the effects of social hostility or coolness. Good manners are a front we present to the world akin to the theatrical self, as explored by certain writers (Shakespeare, Erving Goffman). The smooth operator is above all a talented thespian. By “symbolic” I mean that the polite act is intended to signify something, namely that the agent is a trustworthy and safe person to deal with. The hearty handshake and accompanying steady eye contact are intended to symbolize a person who is respectful and considerate, not a shifty customer who can’t be trusted with the family jewels. Again, this is why good manners tend to be stylized and codified, like a kind of language of respect and consideration. The bow performs no genuine service, but it indicates a certain kind of reliable and deferential individual: it is symbolic. It needs to be decoded, and will not be if the recipient is unfamiliar with the culture in which it occurs. Good manners are signs, signals, messages, declarations. Third, polite behavior is self-referential in the sense that it is intended to be perceived as such: the agent wants the audience of his performance to understand that he is acting politely. I don’t just intend to act in a well-mannered way but also to be seen to be so acting. Moreover, I intend that my audience should recognize this intention (shades of Grice): I want my audience, before whom I am symbolically acting, to grasp that I am intending to treat them politely. It is not necessarily so with moral action: here it is not essential that the recipient should grasp that the action was intended morally (he may not even know that he is the beneficiary of any moral action). Thus the polite person must act conspicuously politely (it’s no use bowing behind a curtain) so as to make his intention plain. Good manners thus require the ability to project good manners—to make them evident, salient. So manners require a fairly complex set of intentions as well as theatrical skill and a grasp of symbolism. We are not born knowing these things but need to have them inculcated—hence all those etiquette textbooks of yore and costly lessons in the art of behaving in “polite society”. Miss Manners earns her keep as an instructor in the Theater of Symbolic Good Impressions. Good manners are not for ignoramuses.[3]

Moral action does not have these characteristics: it isn’t essentially theatrical, symbolic, and self-referential. When one person benefits another or keeps a promise or tells another the truth this is not a theatrical performance intended to symbolize something meritorious about the agent: it is the fulfillment of a duty, an act with real consequences, an instance of practical reason. It is not a type of play-acting calculated to create a favorable impression (this is not to say that agents never do this in the guise of acting morally). It is not merely good manners to give money to charity or to treat other people fairly. An unethical person is not one who needs to improve her manners (her manner isn’t the problem). This connects with two other features of manners that distinguish them from morality. First, good manners are not appropriate for animals and small children: we don’t have to treat our pets and babies courteously. Why? Because they don’t understand the symbolic theater of manners: good manners are lost on them. By all means treat them kindly, but there is no need to worry about hurting their feelings by social snubs or snobbish behavior (or even by leaving the house without saying goodbye). Good manners require the recognition of good manners, but moral behavior does not. We don’t need to be instructed in how to treat a dog politely at a social event. Second, good manners do not extend to ourselves: I don’t need to watch my behavior in connection with myself in case I offend myself by a lapse of politeness. Good manners are essentially other-directed: they concern social behavior not solitary behavior. I don’t need to be taught the correct way to address myself.[4] Personal hygiene may be a courtesy issue in interaction with others, but it is not impolite of me to eschew deodorant on a lone trip. I don’t have to avoid being rude to myself. Again, morality is different: I do have duties to myself as one person among many, not merely to other people. Prudence may be understood as self-directed morality. When I act so as to benefit my future self I am acting rationally and morally, but it would not be rational or moral to put on a good performance of consideration and respect for myself. Good manners are an effort to give a positive impression of myself to others and to make them feel at ease, but I don’t need to convince myself that I am a solid sort of chap; I don’t need to manage my perception of myself by deft indications of decency. I can interrupt myself in mid-sentence without incurring any self-censure regarding my manners.

Now I can discuss the question of the relation between manners and morality. I suspect I am not alone in being ambivalent about the claims of proper etiquette. On the one hand, it seems like a pretty suspect sort of business: all that contrivance, self-consciousness, self-advertising, insincerity, and brand promotion. And correct etiquette is certainly no substitute for sound morality. Just think of its associations with social rank, snobbery, the caste system, sexism, etc. Hasn’t the emphasis on good manners done more harm than good? Hasn’t it had a tendency to displace real morality? Who wants to go back to the days of “Kind sir, may I have the honor of extending to you an invitation to partake of a libation?” and suchlike rigmarole? Must ladies be stood up for whenever they enter a room and be deemed incapable of opening doors? Must the rude rustic be condemned as a lesser being because of his rough country manners? The whole artifice can seem like a relic from the past that we could well do without. Isn’t a more relaxed view of manners more conducive to human happiness? And wasn’t it always more about social acceptance and self-advancement than genuine concern for others? Away with manners! Let morality suffice to govern human interactions—doing your duty, maximizing happiness, that sort of thing. No more bowing and scraping, but plenty of helping and giving. On the other hand, isn’t the core concept of good manners really an instance of sturdy morality? How could it be wrong to respect the feelings and rights of other people? Isn’t politeness a means to that end? It might be objected that it is really just putting on a show of such respect, a kind of pantomime, not actually making sure that those feelings and rights are respected and protected. How does bowing ensure that someone is not assaulted or wrongly imprisoned or slandered? But isn’t the show itself a valuable thing? Don’t we need to see that people care about us as beings with feelings and rights? Isn’t this a kind of social cement enabling us to function harmoniously together? Good manners are a kind of assertion of the importance of morality without themselves being morality. When you behave politely you are saying “I am a moral being” and people need to hear that. Of course, such statements can be deceptive, which is why manners can aid the villain, but they are nevertheless important ingredients in a social network. Good manners are pleasing precisely because they reassure us that morality is still in force (even if deceptively in some instances). When you stand up when a lady enters the room, are you not indicating by your action that you would not sit idly by if she were in mortal danger? When you say hello to someone, aren’t you letting him know that you respect him as a human being—that he is not just a piece of furniture to you? This may not be the same as actually doing something just and good, but it’s something—it’s a step in the right direction. At least you are acknowledging that you have duties towards the person in question. So manners may not be morality but they are an indicator of it—they are not an entirely separate sphere of human activity.[5] If someone shows you consideration by politely welcoming you in, they may show you consideration when things get challenging. So we shouldn’t jettison etiquette just because of its abuses and absurdities; it plays an important moral role as a symbolic recognition of the claims of morality.
You may feel slighted when someone doesn’t remember your name or ignores you at a party, despite knowing that no material harm has been done to you thereby; but this isn’t irrational oversensitivity, because such impoliteness indicates a person who is unlikely to treat you considerately in the event of a fire or a fight. It may not be true that “manners maketh the man”—only morality can do that—but it is true that manners indicateth the man. At least the solid core of manners has that function, putting aside all the silly rituals that are used to put down one sort of person in order to elevate another. Immoral etiquette does not rule out a morality-driven etiquette. Looking down on a stranger who doesn’t know our mannerly ways is no doubt deplorable—a case of really bad manners on our part—but it isn’t wrong to teach good manners as a token of good morals. It is just that manners should never become detached from morals, a kind of elaborate theatrical game designed to weed out those deemed not “clubbable”; manners should be the servant of morals, never their rival. In other words, manners are a tool to be wielded responsibly, not a hammer with which to crush people socially. The ambivalence I mentioned is not unreasonable, but it is possible to preserve what is valuable in good manners while rejecting their worst excesses. I myself am fond of the bright and graceful hello, as well as the slightly melancholy but hopeful goodbye. I also like to see to it that my guest is seated comfortably without the sun in her eyes, and I make a point of not interrupting her verbal flow. It’s not much, I know, but it serves to convey my respect for the guest’s feelings and rights. So on balance I am an enthusiast of good manners, though I am sensitive to their pitfalls, and would never prefer them to morals.[6]

 

[1] How would the standard types of normative ethics treat manners? Presumably the utilitarian would say that manners are good or bad according as they increase or decrease total utility or something of the sort. On this account they may turn out to be immoral, since poor manners (by some standard) often lead to unjust discrimination and consequent suffering. Deontological ethics would need to include a specific set of duties listing all the forms of politeness that exist. Interestingly, no such thing is ever attempted, and standard theories don’t even include manners as belonging to our moral duties. Generally, normative ethics steers clear of the ethics of politeness (though I am sure someone must have talked about it).

[2] The word “courteous” is defined as “polite, respectful, and considerate”, and we learn that it derives from a Middle English word meaning “having manners fit for a royal court”. Today the word has lost its royal connotations but survives in humbler environs such as shops and buses. The vocabulary surrounding this universal human institution seems notably thin and lacking in descriptive power (the French word “etiquette” had to be adopted rather late in the game).

[3] There is no name for the field of study that focuses on good manners, politeness, or etiquette—nothing analogous to “ethics” or “morality”. My suggestion for such a name is “politics” but pronounced like “polite-ics”. Admittedly the written form is easily confused with another field of study with that name, but we can remind ourselves that the words “polite” and “politic” have different roots: the former comes from a Latin word meaning “polish” or “smooth”, while the latter comes from a Greek word for “city” (“polis”). In any case, we do well to have a name for this neglected field of study and I think “politics” will do nicely, properly pronounced (po-light-ics). We can then form derivatives such as “politically correct”, using the recommended pronunciation.

[4] A lot of etiquette concerns the proper rules governing polite speech—not too loud, no profanity, no mumbling, speaking only when you are spoken to, etc. But inner speech is subject to no such prohibitions—the idea of impolite inner speech sounds like a category mistake.

[5] Might it be that we are far less polite than we should be—as it has been argued that we are far less moral than we should be? Is a form of skepticism possible that questions our normal politeness assumptions? Is our perception of the norms governing polite behavior radically mistaken? The idea seems preposterous, but perhaps something can be made of it. Maybe we should be far more attentive to our guests than we are.

[6] I was tickled to discover recently that Philip Stanhope, the Fourth Earl of Chesterfield, shared my view of laughter as bad manners, especially when loud and “merry”; we both, however, thoroughly approve of smiling as an instance of good manners. See his Letters to His Son on the Art of Becoming a Man of the World and a Gentleman (1774).


Rigidity Revisited

A rigid designator is one that designates the same object in every possible world. Thus “Plato” designates Plato in every world; in no world does it designate anyone else. We must hasten to add that names are only rigid with respect to a language, i.e. under a particular assignment of meaning; no name is rigid in virtue of being the sound or mark that it is. Words are conventionally attached to meanings, so that they only contingently denote whatever it is they actually denote. Clarity might be served by saying that the meaning of a name is what is properly rigid (similarly the meaning of a description is what is properly non-rigid). The meaning or sense of a name rigidly designates its reference. It doesn’t follow that the mode of presentation associated with a name is rigid, if that concept is taken qualitatively, i.e. how the reference seems to the speaker. And that would not be a plausible view given that numerically distinct objects can appear the same way. Nor do the ideas in the speaker’s mind rigidly designate (this is one reason description theories of names run into trouble). Names have a special kind of meaning that ties them to their actual bearer across possible worlds. The standard view of this is that the meaning of the name is its bearer, so that constancy of meaning guarantees constancy of bearer, by virtue of strict identity. If the meaning of “Plato” is Plato, then of course it designates the same person in every world, since the meaning just is the reference: this is like saying that Plato is Plato in every world. The statement “The meaning of ‘Plato’ is identical to the reference of ‘Plato’” is true, and identities hold necessarily. Nothing like this can be said of definite descriptions, so they fail to be rigid. We could say that the general terms forming the description rigidly designate the properties they actually designate, but not that the description rigidly designates the object that contingently satisfies it.
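
For those who like a compact statement, the contrast can be put in standard possible-worlds notation (the symbols are my gloss, not part of the original definition, and complications about worlds in which Plato does not exist are set aside). Writing $\mathrm{ref}_w(t)$ for what term $t$ designates at world $w$:

$$\forall w \;\big(\mathrm{ref}_w(\text{“Plato”}) = \text{Plato}\big) \qquad \text{(“Plato” is rigid)}$$

$$\exists w \;\big(\mathrm{ref}_w(\text{“the teacher of Aristotle”}) \neq \text{Plato}\big) \qquad \text{(the description is non-rigid)}$$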

So far, so orthodox: but now I want to raise the question of what kind of necessity is in play here. We could rephrase the concept of rigidity as follows: names have the property of necessarily denoting what they actually denote. It is part of the essence of “Plato” (its meaning) that it denotes Plato; in no world does it denote anyone else. So names have essences just as objects have essences: Plato is necessarily a man and “Plato” necessarily denotes this man. The name “Plato” has other essential properties (remember we mean its meaning) such as that it is a name or is part of language or is not identical to the name “Aristotle”: meanings have essences too. It is part of the essence of the meaning of “Plato” that it designates Plato—but it is not part of the essence of the meaning of “the teacher of Aristotle” that it designates Plato, even though he did teach Aristotle. So we can say that the same notion of necessity is used to characterize rigidity as is used to characterize the essence of objects—good old metaphysical necessity. We say that a person necessarily has the parents she actually has, and we can equally say that a name necessarily refers to what it actually refers to; while a person does not necessarily attend the school she actually attends, and does not necessarily satisfy the descriptions she actually satisfies. Semantic properties can be essential (or contingent) properties too. Languages are bearers of modality just as non-linguistic reality is. Rigidity is just another species of necessity.

Now I can raise the following heterodox question: is the necessity involved in rigidity reducible to other categories of necessity? Kripke gave us four categories of necessity: identity, kind, constitution, and origin. Is the rigidity of names a special case of one of these categories? The alternative is that it is not but is a sui generis category of necessity that we need to add to our inventory of categories of metaphysical necessity (“necessities of reference”). I am going to suggest that rigidity is reducible to the necessity of constitution plus the necessity of origin: a name’s having a certain reference essentially is the upshot of a particular type of necessity of constitution plus necessity of origin. Thus we can explain these necessities of language in terms of more general types of necessity applicable to the non-linguistic world. This may sound strange, but on reflection it is quite intuitive, once we understand how general the notions of constitution and origin are. Suppose we say that the meaning of a name is constituted by its bearer; and we compare this to saying that this table is constituted by a particular piece of wood. In the latter case it is right to say that the table is essentially so constituted—in every world in which the table exists it is made of the same piece of wood as in the actual world. Similarly, in the former case, if the meaning of the name is constituted by its actual bearer, then it is so constituted in any world, since constitution generates necessities. If x is made of y, then you can’t have x without y. Of course, you can have something that is like x that is not made of y, but not that very thing—a table that looks like x, say. Likewise, you can have a meaning that resembles the meaning of “Plato” and it not be constituted by Plato, but you can’t have that meaning without Plato. Thus two speakers may be exactly alike physically and mentally and use a name “Plato” but refer to different people by that name, because the meaning is constituted by different references in the two cases. Two meanings can seem the same but not be the same because of a difference in actual constitution—just like two tables. According to the direct reference conception of names (the “Millian” view), the meaning of a name is constituted by its bearer; but then it is necessarily so constituted, by the necessity of constitution, in which case it will be rigid. We could say metaphorically that the table “rigidly designates” the piece of wood it is actually made from, just as a name literally rigidly designates its actual bearer: the necessity of constitution is at work in both cases. And don’t object that this latter must be metaphorical because only physical objects have constitutions: clearly the concept of constitution can be applied outside of the physical realm, for example to states of mind and to mathematical entities (emotions and geometric figures, for example).[1] Identity can be applied with this kind of generality (and is often invoked to express the Millian view), and there is no metaphysical reason to restrict the idea of constitution to material objects (the Constitution is not wrongly named). Thus the referential rigidity of names falls out as a consequence of the necessity of constitution: the former follows from the latter.
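
The argument of this paragraph can be schematized (again, the notation is my own devising, not the essay’s). Let $\mathrm{Const}(m, o)$ say that meaning $m$ is constituted by object $o$, and $\mathrm{E}_w(m)$ that $m$ exists at world $w$. The necessity of constitution is the principle that actual constitution holds wherever the constituted thing exists:

$$\mathrm{Const}(m, o) \;\rightarrow\; \forall w\,\big(\mathrm{E}_w(m) \rightarrow \mathrm{Const}_w(m, o)\big)$$

On the Millian assumption that the meaning of “Plato” is constituted by Plato himself, rigidity then follows: at every world where that meaning exists, the name refers to Plato.

$$\mathrm{Const}(m_{\text{“Plato”}}, \text{Plato}) \;\Rightarrow\; \forall w\,\big(\mathrm{E}_w(m_{\text{“Plato”}}) \rightarrow \mathrm{ref}_w(\text{“Plato”}) = \text{Plato}\big)$$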

How does the necessity of origin enter the picture? First we must note the generality of that notion; it isn’t just parents and children but any generative historical relation. Clearly it applies to any organism and its ancestors: each organism necessarily has the ancestors it actually has, going back to the origin of life on Earth. But also historical events fall under the necessity of origin as well as human artifacts: WWI (that war) could not exist in a world in which its actual antecedents do not exist (though there could be a war similar to it but differently caused), and no one other than Leonardo could have painted the Mona Lisa (that very painting).[2] We can’t completely change history and leave the identity of the objects and events intact. In the case at hand, a name has a certain history originating in the initial baptism of a particular object (say, baby Plato): then a chain of linguistic events connects this origin to later uses of the name. Let us then say that the name “Plato” has origin O: accordingly, it (that name) could not exist without O. The name (its meaning) owes its identity to its actual origin: just as Plato has to come from his actual parents, so the name “Plato” (with its actual meaning) has to come from baby Plato in an act of baptism. If we substitute another baby in a possible world, we get a different name (a different meaning), despite any resemblance between the babies—just as we get a different child if we substitute different parents, despite resemblance of progeny. If so, rigidity follows from origin: the name could not refer to anyone not at the origin of the causal chain that exists in the actual world, i.e. the one culminating in baby Plato. If the origin of “Plato” is baby Plato, then it could not have had any other origin, by the necessity of origin; but then the name must designate the same person in all worlds. That name requires that origin, so there is no world in which that name exists but is anchored to a different origin (as it might be, baby Aristotle). Nothing like this is true of descriptions, of course, since they are not individuated by origin at all: they don’t refer in virtue of a causal chain leading back to an object’s baptism. Accordingly, descriptions are free to be non-rigid, as flexible as the occasion demands. But names are strictly tied to their historical antecedents in babies, baptisms, and the like. If so, rigidity follows from the necessity of origin, and is a special case of it.

There is a question, then, about which of these two necessities is basic in the modal semantics of names. We need not take a firm stand, but I am inclined to think that origin is basic: it is because names are introduced in the way they are that their meaning is constituted in the way it is. Having that origin determines what constitutes the name’s meaning: there is nothing else for the meaning to be given that names originate as they do. Tracing back to a particular object is what fixes their meaning (not a cluster of associated descriptions), and hence we say that the meaning is constituted by the object. So origin is primary, though both are equally correct as modal claims. The important point is that semantic rigidity is not some new type of necessity but is a special case of necessities already recognized. We don’t have Kripke’s four categories and the necessity of reference as an additional primitive category; the latter is an instance of the former. It is what the necessity of constitution and origin look like when manifested in language. This is good because it is not clear what else referential necessity might be, given that we seem to have covered the bases with the four categories.[3] Rigidity is a type of essence found in language, but what other types of essence are there other than the big four? They seem to exhaust the field, in which case linguistic essence needs to emerge as a form of one or more of those. Clearly the concepts of constitution and origin apply to language, and are so employed quite spontaneously by theorists, so it is in the cards that we can explain referential necessity by appeal to these concepts. Referential necessity thus arises from a combination of the necessity of constitution and the necessity of origin.[4]

 

Colin McGinn

[1] It can also be applied to phrases and sentences: a string of words is constituted by the individual words that compose it. As a consequence, we can say things like, “The sentence ‘snow is white’ is necessarily constituted by the words ‘snow’, ‘is’, and ‘white’”. The same applies to thoughts and their constitutive concepts.

[2] We can also define the notion of rigid portrayal: a painting rigidly portrays a certain individual if it portrays the same individual in every possible world. The claim that paintings are sometimes rigid portrayers is plausible: the painting must portray the same individual in any world in which it (that painting) exists—no Mona Lisa, no Mona Lisa painting (her twin will not do). This is different from a painting just happening to fit a certain individual. In this kind of case the origin theory of the necessity seems very plausible; so the Mona Lisa painting needs both Leonardo and Mona herself in order to exist in any world.

[3] It might be said that there are strictly five categories: in addition to constitution by a particular object we need to recognize the type of the constituting object. Thus we can say that this table is necessarily made of wood not just this piece of wood (as is the piece itself). The analogue for names would be the fact that the meaning of the name is necessarily of the human being type: the name “Plato” must refer to a human being and not (say) to a goat, in addition to necessarily referring to Plato (who is himself necessarily a human being). But this isn’t to accept that there are irreducibly semantic types of de re necessity. The name itself might be necessarily composed of certain sounds, which are necessarily sounds. We just have iterations of the same metaphysical necessities we had before we got to the de re necessities of language.

[4] If we choose to say that predicates rigidly designate their corresponding properties, we can give the same type of explanation of this semantic fact, namely that the properties denoted form the constitution and origin of (the meaning of) their denoting predicates. The property of being red constitutes the meaning of “red” and is the origin of that predicate (in its actual meaning): hence “red” rigidly designates the property red. The meaning of “red” could not be constituted by any other property or originate in any other property.


Problems of Philosophy

Russell called his “shilling shocker” The Problems of Philosophy, and there is a reason for that title: philosophy consists of a set of problems. The same is not true of other subjects: physics, chemistry, geology, biology, psychology, history, economics, English literature, etc. In these subjects a certain sector of reality is selected and investigated, with the aim of discovering the entities, processes, and laws of that sector. It is true that problems arise during these investigations, which might impede the progress of investigation, but none of these subjects consists of problems—and the problems that do arise are not like philosophical problems (see below). Philosophy is not concerned with any specific sector of reality, ranging over all of reality, but only with the philosophical problems that arise in any domain (hence philosophy of physics, philosophy of psychology, etc.). It goes from ethics to metaphysics, language to space and time, art to science, politics to logic. Philosophy is concerned with a certain type of problem, not a specific aspect of reality. And no other subject is like that: it is what distinguishes philosophy from every other area of human inquiry. We might indeed use it to define philosophy: philosophy is that subject which deals with a certain set of problems, no matter what those problems may concern—problems of a certain distinctive character. It is not defined by subject matter, as other disciplines are, but by the kind of intellectual activity it invites—problem-solving activity. This is why an introductory textbook in physics or psychology or geology will not have a title of the form The Problems of X but rather something along the lines of The Discoveries of X. It is also why so many topics in philosophy are described as “the problem of such and such”: the problem of free will, the problem of knowledge, the problem of consciousness, the problem of the self, the problem of induction, the mind-body problem, the problem of perception, and so on. Each of these areas poses problems—questions we find it hard to answer, questions that trouble us, questions that won’t go away. They pose problems for us: we ourselves have a problem because of the existence of these problems. For the problems challenge rational thought: they put pressure on rational thought, and so they put pressure on us as rational beings. The problems of philosophy are our problems, not just problems attaching to a certain subject matter—problems of the self, we might say. So the philosopher is in a peculiar position: he or she has no specific domain of expertise to investigate, no proprietary subject matter, but rather a roster of disparate problems that tax and trouble him or her. The philosopher is more of a trouble-shooter than a fact-collector: when queried about what she does for a living the philosopher will reply, “I solve problems” (inwardly adding, “or try to”). This is what makes philosophers different from other inquirers: we are in the problem business while they are in the discovery business. Even when our topic of interest isn’t conventionally called “the problem of X” (e.g. “the problem of meaning”) we recognize that problems constitute our daily diet—they are what we feed on. Philosophy is made of problems.

What kind of problems? Here the hand waving is apt to commence; the otherwise articulate philosopher begins to mumble. He may say airily that he is concerned with “deep problems” or problems about “ultimate reality”; or state peremptorily that he is interested in “conceptual problems”. For it is difficult to give a precise and uncontroversial characterization of the kind of problem that constitutes the subject of philosophy. These problems are not like problems stemming from remoteness in space and time, as in astronomy and evolutionary biology; nor do they concern infinity or the interior of black holes. The problems have to do with thought itself: we find that we can’t think straight about something—we can’t make sense of it. It baffles us. Thus problems about space and time, the nature of the mind, the status of ethical value, what meaning is, what necessity is, how knowledge is possible, whether we can perceive material objects, and so on. I would call these logical problems, in a wide sense of “logic”: they concern the possibility of making rational sense of something.[1] I won’t go into the details of this, merely observing that the philosopher is in the business of rendering things coherent, intelligible, and clear, not confounding, confusing, and obscure. In the case of the problem of knowledge, for example, the philosopher wishes to reconcile the demands imposed by the concept of knowledge with the mind-independence of the world. So we can say that the set of problems that define philosophy are logical problems, not empirical or “factual” problems. How is free will compatible with determinism? This is a logical problem of showing that one thing is logically consistent with another (or accepting the inconsistency). Not surprisingly, then, philosophy and logic are close cousins—the philosopher is always a logician of sorts. Again, no other subject is like this—a battle against logical problems. This is perhaps why philosophy is regarded with suspicion by other academic types: it really is a different kind of enterprise from every other subject. Instead of trying to discover truths about a certain part of reality, which may or may not be actively problematic, it tries to solve the logical problems raised by any part of reality. In so doing it attempts to solve our problems in thinking about reality (so it has a therapeutic purpose). Philosophy is self-medicating in a way that physics (etc.) is not. It tries to soothe a personal unease.

How do these logical problems arise? Are they unavoidable? Could there be a type of philosophy that didn’t take this form? Two possibilities suggest themselves: either they arise from our modes of thinking about reality, or they arise from reality itself. If they arise from our modes of thinking, they should be in principle correctable; or if not correctable for us, then other intelligent beings need not be subject to them to start with. But this seems hard to accept, given their obstinacy and longevity. Then they must be objectively based: but how can the world contain such problems? Is free will itself not sure whether it is possible or not? Is there simply no fact of the matter about the nature of space and time, or consciousness, or ethical value? One feels that reality cannot itself be problematic: the universe is not a set of problems! It simply is. The problems must stem from us, from our inadequate ways of thinking, from our concepts, not from the ways things objectively are. Yet that suggests that we could escape the problems by adjusting our minds—which seems implausible. The problems seem unavoidable for any mind, but they can’t be inherent in reality qua problems, as atoms and forces are inherent in reality: for how can reality be made of problems? Here we have a philosophical problem about the basis of philosophical problems (a meta-philosophical problem). As to the question of whether philosophy could transform itself into something more like a regular discipline, abandoning its obsession with logical problems, the answer seems fairly clear: it could not. It is necessarily problem-centered. We could certainly decide to investigate concepts as such without concerning ourselves with solving the traditional problems, but that would not be philosophy—it would be a branch of psychology. The idea of philosophy without its problems is not the idea of philosophy. The essence of philosophy is the set of problems that define it. This is not to say that these problems could not be solved (we live in hope!), but if they were, that would be the end of philosophy. Some successor discipline may emerge in that happy dawn, but it wouldn’t be philosophy as we know and love it (or hate it). Philosophy without its problems is no philosophy at all. Indeed, we can think of the concept of philosophy as simply a generalization of the existence of separate logical problems: people found themselves perplexed by certain problems in disparate areas of human thought and decided to lump them all together under the heading “philosophy”. The problems came first, the discipline second. Again, this is not true of other disciplines: they are not compilations of disparate problems but unified fields of investigation (rather like fields, in fact). A philosopher has no “field” in the sense of a unified area of study; a philosopher is rather a problem-wrangler roaming widely. So it is quite wrong to describe a philosopher as someone whose field of study is concepts: that misses the problem-oriented nature of the subject. Philosophy could certainly be abandoned as an area of study—simply no longer taught and thought about—but that would not strip it of its problem-centered character. The problems of philosophy are philosophy. Even if all the problems are solved one day and the solutions tabulated in a definitive textbook, philosophy will still be about problems: how the problems that define the subject were eventually answered.
The student will still need to understand the problems, feel their force, and not merely absorb the facts uncovered by the area of investigation called “philosophy”. The perplexities are part of the nature of the subject not something extrinsic to it.

Perhaps we can imagine a race of beings for which physics and chemistry consist of problems. They can’t understand how material objects are possible, or how chemical reactions are logically consistent, or what a law of nature is, or how motion can occur; they find the whole subject conceptually confusing. They make little progress with it, despite strenuous efforts. Disputes are endemic among people calling themselves “physicists” or “chemists”. Their attitude towards the physical and chemical world is like our attitude towards consciousness, free will, knowledge, etc. What subject are they engaged in? I would say philosophy, or one form of it: they have logical problems about a certain sector of reality. Their discipline tries to solve these problems, which are philosophical in character so far as they are concerned. If they finally sort these problems out and begin accumulating positive knowledge in the manner of human physicists and chemists, then philosophy for them will have come to an end. Maybe philosophy will one day come to an end for human beings (though I doubt it) once its logical problems are straightened out, perhaps by means unheard of today. Meanwhile it will continue to be defined by its problems.[2]

 

[1] In calling the problems “logical” I intend no narrowing of ambition on the part of the philosopher. I mean merely that they concern rational thought as opposed to empirical discovery—that they have an a priori character, to use the traditional terminology. No simple philosophical label is entirely suitable, though I think “logical” is perfectly justified once freed from limited conceptions of the logical (such as those found in standard logic textbooks). The OED’s “the quality of being justifiable by reason” strikes me as pretty much on target. It would not be wrong to say that philosophical problems characteristically put reason in question. The same is not true of other disciplines (except in so far as they raise philosophical questions).

[2] In teaching introduction to philosophy it is good to start with a classic chestnut such as the problem of our knowledge of the external world. This gets the student accustomed to the idea that philosophy is concerned with a certain kind of question of the form “How would you solve that?” This is not like teaching statistics, say, in which you might start by explaining the concept of the mean, or the concept of a normal distribution, and then proceed from there. You might also simply announce at the beginning that philosophy is concerned with problems—difficulties, conundrums, obstacles to rational thought. It attempts to overcome these problems, once they have been detected. It is not a recitation of discoveries, like botany or archeology.

On Mind-Brain Relations

Various relations between mental events and brain events have been (and could be) posited: correlation, causation, simultaneity, supervenience, spatial coincidence, composition, part-whole relations, and identity. It is fair to report that these relations are first found outside of the mind-brain relation and then applied to that relation; we don’t come up with them by considering the mind-brain relation itself. That is, they stem from the physical world not from the psychophysical world: we transfer them from their original home in the physical world to the special case of mind and brain. Hence analogies are often drawn between the psychophysical case and physical cases—for instance, “pain is C-fiber firing” is like “heat is molecular motion”. This is already suspicious, since we are not deriving the relations from direct inspection of the mind-brain nexus: we are not examining that nexus and concluding that a particular relation is the right one to characterize it. Rather, we are extrapolating the relation from elsewhere and postulating that it applies, not observing that it does. Thus there is a big difference between our knowledge in the two cases: we can experimentally establish that heat is molecular motion, for example, but not that pain is C-fiber firing. Or, to take a more transparent case, we can observe that Superman is Clark Kent simply by seeing Clark Kent change into his Superman clothes and fly off; but we can’t do anything comparable with the claim that mental events are identical to brain events—we can’t witness the transformation. Instead we postulate an identity; we don’t discover it. It is the same with all other ordinary identity statements: we empirically discover that these identities hold. So it is not a mere theory that a is identical to b; it is an established fact, known by empirical means. By contrast, the identity theory of mind and brain is a speculative theory, a bold conjecture, not something we have empirically established to be true. And similarly for other theories of the relation between mind and brain, such as that mental events are composed of physical events or are spatially coincident with them. True, we can empirically establish correlations, but the step to identity or composition is always a move away from observation—a philosophical theory rather than a piece of empirical science. We apply relations drawn from elsewhere, but we don’t carry over the methods that are generally used to assert their existence. Thus the theories remain controversial (but no one seriously disputes that Hesperus is identical to Phosphorus or that water consists of H2O). The justification for asserting the theories tends to be abstract and general—isn’t it more parsimonious to assume identity, and good to avoid the absurdities of dualism? It is never stated that we have simply discovered by observation that pain is the same as C-fiber firing—by looking at it from different angles or by using a microscope or by tracing it over time. We discovered that butterflies and caterpillars are the same organism by observing the chrysalis stage; we didn’t just posit the identity on the grounds of Occam’s razor or fear of butterfly-caterpillar dualism.

It would obviously be better to arrive at a theory of the mind-brain relation by examining the case directly. After all, it may be that physically based relations of the familiar kinds do not apply in this case—maybe there is a special kind of relation that connects the mental with the neural. So shouldn’t we concentrate our attention on the specifics of the mind-brain nexus and work from there? The trouble with this is that nothing suggests itself. We don’t get even a hint of what the relation might be by introspecting our pains and observing our brains; we find only correlations not a theory of the connection between the correlated entities. Maybe they are identical, but nothing in what we observe suggests as much; they don’t even seem similar. At least we can see that Hesperus and Phosphorus are both planets, and that Superman and Clark Kent are both men, but we can’t see that pain and C-fibers are both anything (except maybe events); there is not even a hint of the possibility of identity (or composition, etc.). If the relation is that of identity, this remains hidden from us, not something that reveals itself to diligent observation. Why? Why does the mind hide its true relation to the brain, and hide it so well? If it is true that pains are composed of strands of C-fiber, then why is it that nothing in our experience suggests that? Why can’t neuroscience prove it? Physics proved that heat is molecular motion, chemistry proved that water is H2O, biology proved that the heartbeat is the pumping of a muscle, and astronomy proved that Hesperus is Phosphorus; but neuroscience is incapable of proving that pain is identical to C-fiber firing. Is that perhaps because it isn’t? Does it bear some other relation to the correlated brain events, some relation we don’t know about, or can’t even imagine?

It is possible that we are thinking about this all wrong. We shouldn’t be hunting for relations drawn from outside our area of interest—the mind-brain connection—and then postulating that such relations capture that connection. We should instead focus on the case at hand and try to forge a theory of the psychophysical nexus that respects its special character. Don’t think, look! The problem is that nothing comes to mind: we look, but we don’t find. The psychophysical nexus just stares blankly back at us, elusive and enigmatic. Here the pain, there the brain: but where the linkage? There must be some sort of intimate relation, since the two are not just accidentally joined, but for the life of us we can’t figure out what it is. Let me introduce some neologisms: let’s say that the brain state “mentalizes” and the pain state “physicalizes”—that is, they each do something that leads to the other. This doesn’t tell us how they do these things, only that they do. Then the question is what these peculiar relations involve: what theory of them is correct? Pain is such that it physicalizes itself as C-fiber firing, and C-fiber firing is such that it mentalizes itself as pain: now the question is how that happens. A formidable question indeed, and one formulated by using a pair of murky neologisms; and yet it at least points us in the right direction—what is it to mentalize and physicalize? We know what it is for Phosphorus to “Hesperusize”—to follow the same path through space and time as Hesperus does—but what is it for C-fibers to mentalize (specifically painize)? That is the question; and we will not seek to answer it by borrowing concepts drawn from somewhere else. We need concepts tailored specifically to the case at hand. Whether we can find or devise such concepts is another matter. Mysterians remain doubtful: we simply have no viable way to infer the relation from the relata. There might be identity for all we know, but the usual paradigms of empirically discovered identities provide no guidance in this alien territory, being mere impositions from outside. All we can claim is that the relation must be close, intimate, and transparent (not brute). It must be such that it takes the enigma out of the connection, and makes it something other than a mere conjecture, backed by nothing but Occam’s razor and dualism-phobia. It must be like the empirical discoveries that underlie ordinary assertions of identity (“I saw Clark Kent put on his Superman clothes and then fly off”). As things stand, however, we have no clue about how to do any of this, but merely struggle with concepts borrowed from areas less intractable, as in “You know what identity is from the case of Hesperus and Phosphorus; well, mind and brain is just like that”. But it is not just like that because we can’t apply the methods used to establish the former identity to establish the latter (putative) identity. What I tend to believe is that the psychophysical nexus is nothing like the standard paradigms, being not just in a different league but in a different galaxy; and that it is completely misguided to employ the usual types of relations in an effort to understand it. It is not so much that the identity theory, say, is false as that it provides no illumination at all, because the standard cases of identity are so far removed from the case at hand. We need a completely new way of thinking if we are to get anywhere in grasping the true nature of the mind-brain connection; and it is a real question whether this new way is available to us.
What is certain is that we will never achieve it if we lazily rely on concepts designed for a quite different purpose. It is as if we are trying to understand electromagnetic phenomena solely on the basis of traditional mechanics instead of recognizing that something completely different is afoot, calling for a new conceptual apparatus.[1] The very idea of using concepts like identity and composition, explained by way of the standard paradigms, is hopelessly wide of the mark, signaling desperation rather than insight. We should forget all such paradigms and start afresh, always being aware that there is no guarantee of success. But failure is better than complacent illusion.[2]

 

[1] Actually this analogy understates the case, but you get my point. We don’t just need a paradigm shift but a complete conceptual reboot, a new mind almost.

[2] I can put the point very simply: it is no use trying to construct a theory of the psychophysical nexus by comparing it to cases in which nothing mental is involved. This is merely the triumph of wishful thinking over honest toil.

Scientific Knowledge

No doubt scientific knowledge is impressive and enchanting: science has learned so much of interest about the world, with many practical applications. The human brain is lucky to be able to obtain and contain such knowledge. It looks like the best knowledge on planet Earth; if there were a competition for Best In Know, it would be declared the winner. Cognitively, it is our pride and joy. And yet it has come in for criticism, especially by philosophers of science, not all of it motivated by epistemic envy. It postulates unobservable entities, which by definition can’t be detected by the senses; it uses inductive reasoning, which is not (allegedly) a valid form of inference; it has a disturbing tendency to get refuted as time goes by; it is often hard to understand, which renders it undemocratic; it takes years to learn, which makes it expensive and elitist; it is unnatural, like ballet or speaking a foreign language; and it is vulnerable to political influence and corruption. Epistemologically, it is not as fine, upstanding, and humanly accessible as one could wish, despite its undeniable interest and utility. Some have even supposed that scientific knowledge is strictly impossible: Popper maintained that we can never know a scientific proposition to be true, only that it has not so far been falsified. Our attitude to a scientific theory can only be that it has hitherto withstood attempts to prove it false, not that it is actually true. Induction is fallacious, according to Popper, so we can only justify the belief that so far we have not found a counterinstance (Popper tends to be popular with practicing scientists). Others have used the speculative nature of science to insinuate that scientists are not always rationally motivated. Paradigms hold them in thrall, status matters, and scientific revolutions are suspiciously like political revolutions. Still others have declared science to be largely fictional on account of its dealings with the unobservable—all such things being on a par with fictional entities. Science is not all it is cracked up to be, according to these critics.

What has not generally been pointed out is that scientific knowledge compares unfavorably with other forms of human knowledge. Here we could mention knowledge of language, psychological knowledge, and knowledge of one’s own history and local geography.[1] We learn all aspects of our native language easily and equally (no difficulty or elitism), producing a smooth and shared linguistic competence, encompassing semantics, syntax, and phonetics, with no reliance on elaborate experiments or expensive equipment, and not subject to refutation by later research. Popper would be proud of it. We are natural knowers as far as language is concerned. Likewise, we learn our psychological ABC with ease and success, enabling us to understand and predict human behavior, with no danger of later refutation (no beliefs and desires after all!). We even have the advantage of direct acquaintance with the subject matter of this type of knowledge in the form of introspection. There is no laborious training, no nerve-wracking examinations, no inability to get it right, etc. We take to it like a fish to water (fish are very knowledgeable about water). And in the case of history and geography we have solid knowledge of the facts in question: memory tells us what we did when, and perception informs us of the local terrain. I remember what I did yesterday and I know my way home. True, this kind of common sense knowledge is fallible, but it is not the faltering and conjectural affair that science is: it didn’t take centuries to get started, isn’t rife with controversy, and won’t get refuted tomorrow. Everyone has it, it works beautifully, and it is clear what is being said. It is nothing like quantum theory, or relativity theory, or even Darwinian evolution; it took no Newton or Einstein or Darwin to discover it, genius not being required. Thus there are areas of human knowledge that outclass scientific knowledge by objective criteria of epistemic soundness. So it isn’t that humans are generally bad at knowledge and science is the best they can manage in the circumstances; rather, science is the odd man out, being markedly inferior to other forms of human knowledge. This is not to knock science or disrespect it; it is merely to point out that among our other cognitive achievements it is not exactly stellar. We can easily imagine beings that are much better at scientific knowledge than we are, acquiring it with the ease and naturalness that we bring to language—born scientists. They might possess an innate science faculty that generates knowledge of science as our language faculty generates knowledge of language. Just as we learn a specific dialect without even thinking, they develop a specific scientific expertise without any effort or special training—grasping the far reaches of physics by the age of five and molecular biology by seven. We, on the other hand, are just not naturally equipped to master science, which is why it took so long for humans to get even a rudimentary hang of it. There had to be a concerted Scientific Revolution to get science off the ground (after a promising start centuries before), but there was never a Linguistic Revolution in which humans finally got round to speaking grammatically. We weren’t linguistic illiterates till the seventeenth century, needing the leadership of Great Thinkers before we learned how to speak properly. To put it bluntly, we are bad at science but good at language—we are to science what chimps are to language, i.e. not cut out for it.
Not that science isn’t worthwhile or is impossible to achieve, but from an epistemological point of view it isn’t exactly the cat’s whiskers. Frankly, we suck at science. By all means do it, but recognize that you are in alien territory, hobbling along, ill equipped for the journey.[2]

Imagine if common sense knowledge were in the state that quantum theory is in. We don’t even know what quantum theory means, what in the world corresponds to it, whether it even makes sense. Imagine if that were our condition in folk psychology: we don’t even know whether our postulated entities exist independently of our observations, and whether mental states are particulate or wavelike, and we can only know someone’s desire if we can’t know his belief and vice versa. Maybe our folk psychology is predictively close to perfect, but we don’t know what could make it true, and it is full of paradox and perplexity. Moreover, it was only developed a century ago, so that for most of our history we had no folk psychology.[3] What then? Presumably social life would have been impossible, human behavior totally baffling, and life generally meaningless (we wouldn’t even know what happiness is). Maybe our ignorance would have led to species extinction. At least our ignorance of the true nature of the microscopic world has no such dire consequence, since it is not crucial to survival; but if it were, we would be in big trouble. If knowing the correct interpretation of the equations of quantum theory were crucial to survival, we would have perished long ago. So biologically our scientific “knowledge” in this area is lamentably inadequate compared to our ordinary knowledge of human psychology. We really suck at quantum theory, but luckily it doesn’t matter from a biological point of view. Still, this shouldn’t blind us to the limits of our scientific knowledge. And it is not so different elsewhere in science: there are many areas of ignorance, much controversy, numerous dead ends, and lots of hesitancy. It is not so in the other areas of human knowledge I have mentioned: I know how to speak English extremely well, I have a good grasp of human psychology, and I am intimately acquainted with my past and my surroundings. I am a genius about these things compared to my struggles with science (and let’s not even talk about philosophy!). I am epistemologically rich in some areas but a pauper in this area, despite all my strivings and aspirations. My brain just isn’t cut out for it, though I salute its valiant efforts (I wouldn’t want my brain to get an inferiority complex).

Moral knowledge is interesting in this connection. You will find people unfavorably comparing moral knowledge to scientific knowledge, even supposing that appellation unsuitable for describing our moral understanding. But isn’t the opposite the case? In ethics we are not inferring entities that are too small or distant to observe, we are not hostage to inductive reasoning, and we are not struggling to overcome our natural cognitive weaknesses; we are operating with a supple and comprehensive system for evaluating conduct. Ethics is not something we invented a few hundred years ago when the time was finally ripe, having languished without it for millennia; it is a natural human accomplishment requiring only experience and a little instruction to grasp.[4] There is no need to understand calculus, for instance, or even Euclidean geometry. Moral knowledge is actually solidly based, universal, not subject to overnight refutation (I am talking about general principles not specific applications), and relatively easy to acquire. It even admits of certainty in some respects (e.g. happiness is good, misery is bad). It is quite intricate, but spontaneously acquired. It is nothing like quantum theory. You don’t need a high IQ to get the hang of it. So moral knowledge is not the ugly duckling of epistemology, outshone by the paragon Science; actually it is quietly impressive from an epistemological point of view (a bit Jane Austen-ish). Knowledge of what one ought to do is certainly a lot more robust than knowledge of whether Schrödinger’s cat is alive or dead (or the propositions of relativity theory, I would say). It is comparable to knowledge of language, as has been pointed out (Rawls, Chomsky). We know morality as we know our mother tongue.

Don’t get me wrong: I love science; I seek scientific knowledge; and I even have some of it. But from an impartial perspective it is not the glittering epistemic paradigm it is sometimes supposed to be in our scientific age. If we compare it to our motor abilities, it is somewhere between ballet dancing and mountaineering: humans can do it, some better than others, but it isn’t part of our natural endowment, what we can do in our sleep. Baboons swing in trees better than we do science, only seldom coming crashing down. Science, for humans, is an admirable attempt to do the impossible, or at least the biologically contraindicated. It isn’t what we were born to do.[5]

 

[1] Another example would be our knowledge of faces: we have an extensive and remarkably reliable knowledge of people’s faces, enabling us to recognize people at a glance. It is not a matter of theory or calculation but is automatic and instinctual. Face recognition is probably an innately given module enabling us to possess vast stores of useful knowledge. It is superior to scientific knowledge in many ways.

[2] None of this should be a surprise for a biologist: scientific knowledge is hardly a prerequisite for evolutionary success, which is why no other animal bothers with it. We are able to do science only because it is an accidental side effect of abilities designed for other tasks. This is why it is unnatural toil that only some humans engage in not a universal human ability programmed by the genes.

[3] Medicine is a good point of comparison: it is still at a rudimentary stage (we hope!) and was a disaster until quite recently. If our knowledge of language or folk psychology were like our knowledge of medicine, we would be in pretty bad shape. We do have medical knowledge, but it is hardly a shining exemplar of knowledge, though undeniably useful. Our ignorance of what causes cancer, for example, is actually quite shocking, given the effort that has gone into it. Medical knowledge compares poorly to other areas of human knowledge, which require no huge injection of funds.

[4] Imagine if ethics conformed to Popper’s view of science: we don’t know that cruelty is wrong only that the proposition that it is has not yet been refuted. That would undermine our ethical confidence horribly—we can only act as if this moral proposition has so far resisted our efforts to falsify it! Can we not even believe it? This degree of agnosticism is not compatible with a robust moral outlook.

[5] It is noteworthy that animals get by without scientific knowledge and seem none the worse for it. Yet they have plenty of other knowledge, some exceeding the human ability to know. They might regard our scientific knowledge as a waste of time, and epistemologically shoddy to boot. Perhaps God is tickled at our troubles, having mischievously given us a thirst for scientific knowledge combined with ineptness at acquiring it. Oh, how he chortles at our quantum quandaries!

What is Nature?

What falls under the concept of nature and what does not? What does the concept include and what does it exclude? The OED defines “nature” as follows: “the phenomena of the physical world collectively, including plants, animals, and the landscape, as opposed to humans or human creations”. The Cambridge Dictionary gives us: “all the animals, plants, rocks, etc. in the world and all the features, forces, and processes that happen or exist independently of people, such as weather, the sea, mountains, the production of young animals or plants, and growth”. Construed as analyses of the concept, or even as descriptions of the common use of the word “nature”, these attempts at definition leave much to be desired. First and most glaring, they exclude human beings from nature: human nature is not taken to be part of nature. This is totally arbitrary and flagrantly pre-Darwinian: didn’t we descend from apes, and aren’t apes part of nature? Even if you think humans contain a divine spark—an immortal soul—you surely accept that some aspects of human nature belong to nature (respiration, digestion). What would Martians think if they visited earth—that those funny-looking featherless bipeds are not part of nature? What then are they a part of? Second, the OED explicitly, and the Cambridge Dictionary implicitly, exclude minds from nature: it is the phenomena of the physical world that are said to constitute nature.[1] So minds are not deemed part of nature, even though the organisms that have them are. How is that defensible? Minds evolved, have a genetic basis, and function to aid survival—all the marks of life on earth: they are surely as much a part of nature as bodies. Third, the creations of humans are declared not to be part of nature either. What does this include? Dwellings, weapons, spoken language, culture, and roads would seem to be creations of humans—are they not parts of nature? Aren’t animal nests, hives, burrows, bowers, webs, and tools part of nature? But if so, why are human artifacts declared external to nature (not to mention footprints and prepared food)? Finally, there is no mention of things traditionally supposed outside of nature, particularly the supernatural. Presumably this is intended by implication, since God, angels, and ghosts are not usually thought of as “phenomena of the physical world”, but the point bears emphasis: the concept of nature is supposed to contrast with what is beyond nature—what transcends it, flouts its laws. Heaven is not a department of nature and God himself is not an inhabitant of nature; part of the meaning of “nature” is that these items are not elements of the entity denoted. Nature is what is not supernatural—what is of the earth, sublunary, tangible, non-miraculous, and perishable.

So can we do better? In fact the concept is difficult to define explicitly, ubiquitous as it is. One might even be tempted to wax family resemblance about it. But I think two points are relatively clear: (a) nature is not supernatural and (b) nature is not fictional. As to (b), fictional worlds don’t belong to the realm of nature: for it is at least a necessary condition of being in nature that the thing in question exists. Horses are in nature but unicorns are out, Shakespeare is in but Hamlet is out. You can’t be a part of nature unless you are real. Of course fiction itself can be part of nature—written texts, oral traditions, inner stories, dreams—but not the things fiction talks about. Putting these two points together, then, we can say that nature is what is real and not supernatural: intuitively, it is what exists here, in this world with us, alongside animals, plants, and rocks. It is not otherworldly or purely imaginary. But this still leaves a lot of latitude and unclarity. Are laws of nature part of nature (e.g. the law of entropy)? What if there is a soul in man and a vital spirit in animals? What if atoms don’t really exist? To these questions I think we should answer as follows. Laws of nature are part of nature, since they are inseparable from it, simply being very general. Even if humans and animals contain a part that is not of nature, they contain parts that are, so they do belong to nature (as well as to something outside of it perhaps). If indeed atoms are fictions, then they do not qualify as inhabitants of nature, since what is fictional is not part of nature. This third point should be emphasized: fictionalism about a class of entities is incompatible with counting them as parts of nature. According to Berkeley, material objects are not a part of nature, since matter is a philosopher’s fiction (though not tables, chairs, etc.). According to positivism, the unobservable entities of physics are “logical fictions” that don’t really exist, so they are not elements of nature. Nature might be composed solely of mental entities with nothing “physical” at all; nature is not by definition coterminous with the physical (whatever exactly that word means). Maybe nature consists entirely of consciousness in the manner of panpsychism. This is a matter of what your metaphysics happens to be not of the very meaning of “nature”. In Berkeley’s system nature consists of ideas in the minds of finite spirits and in the mind of the infinite spirit, with matter deemed fictional. For a materialist nature consists of matter as described by physics, while anything not of this kind lies outside of nature, possibly in an immaterial realm. The concept of nature is strictly neutral between these possibilities. That is why I defined it as what is non-fictional and non-supernatural.

The question that particularly interests me once we have these definitional issues out of the way is this: do logic and mathematics (and also ethics) lie within nature or outside of nature? I have never seen this question discussed, but I think most philosophers would be inclined not to include these domains within nature: for they are too abstract and ethereal (“non-empirical”) to belong with animals, plants, rocks, and landscapes (or even human organisms). Ethics, in particular, is not part of nature, being steeped in things called norms—you can’t derive an “ought” from an “is”, and obligations are not natural entities like hearts, livers, and atoms. Now this decision might be grounded in fictionalism: if you believe that logic, mathematics, and ethics are all about fictional entities, then you won’t be inclined to include them in an inventory of the contents of nature. Nature abhors the non-existent. But that is not the majority view—so the question is where these areas fall according to other views. If we adopt subjectivism or psychologism about logic, mathematics, and ethics, then we assimilate them to the psychological—and then they belong to nature along with other psychological realities. Mathematics becomes a human creation, an artifact of sorts, and so falls within nature alongside other human artifacts, material and mental; and similarly for logic and ethics. The tough case is realism in these three areas: does Platonism or moral realism exclude mathematics, logic, and ethics from nature? I find myself inclined to dispute this—I tend to suppose that numbers and values are part of nature. I already think that nature includes human creations, including art, science, politics, and philosophy—these are all parts of nature, as that notion is properly understood—but I also think that other realities belong there too. They don’t belong with the supernatural (if such there be), despite their distinctive character; they belong with the rest of nature. They are part of what exists without any supernatural backing or miraculous infusion. Nature is what is real and not supernatural—and this description applies to logic, mathematics, and ethics (understood realistically). So the concept of nature has nothing intrinsically to do with the physical (again, whatever that means), nor indeed with the psychological—it includes even what has traditionally been regarded as “non-empirical” (a priori). Norms and numbers are thus as much part of nature as wings and mountains.[2] Where else would you locate them? Not in the fictional world (if you reject fictionalism) and not in the divine world (if you believe in such a thing), so nature seems the natural place to locate them. Why not locate them there—isn’t it just a prejudice to keep them outside of what we call nature? After all, they are closely intertwined with things already admitted to nature—the world of physics, the process of reasoning, and human action—so why insist on extruding them from the world of nature? Why try to make another world for them to live in? If this requires an expansion of the usual assumed extension of the concept, then so be it—we need to expand well beyond the dictionary definitions anyway. Hasn’t human thought already expanded the concept of nature well beyond its initial range by extending it to human and animal minds, so why stop at the logical, mathematical, and ethical? Let them in, you will feel better for it. 
For the notion of nature has acquired a strongly honorific connotation: it is good to be part of nature—a member of the naturalist’s club—and vaguely disreputable to linger at the gates unable to gain entrance. We need to be more inclusive with the concept of nature, less snooty and hidebound. So I suggest welcoming logic, mathematics, and ethics into the fold—they too can be proud members of the Nature Club (with all the perks attaching). We needn’t refashion them in order to make them eligible; they can come as they are, in all their glorious singularity. You can be as Platonist as you like and still be greeted as a fully paid up member. We can’t let you into the Nature Club if you are a figment or a deity—we have to keep up standards—but logic, mathematics, and ethics are neither, so they can be happily admitted. The expanding circle includes them without strain or solecism.[3] Mother nature has a wide embrace. Perhaps indeed with the passage of time these new members might be taken as exemplars of their class, not only members in good standing but respected and senior representatives of the Nature Club. They might be listed first in the roster of honorable members. Wouldn’t it be splendid if ethics were to become President of the Society of Nature? In the book of nature ethical norms might stand out for their authenticity, their natural claim to the title. If you want to know what nature is, you need look no further than ethics—though other items no doubt belong to nature too (e.g. atoms and squirrels).

There are other terms that vie with “nature” for its inclusive exclusiveness, such as “the universe”, “the cosmos”, “Creation”, “the world”, “reality”. To belong to the extension of these terms is a mark of distinction, distancing you from the merely fictional and (dubiously) divine. But we can hope to bring logic, mathematics, and ethics under such umbrella terms along with “nature”, thereby securing them ontological respectability. Norms and numbers are thus constituents of the universe, elements of Creation, inhabitants of the cosmos, creatures of the actual world, as real as anything—yet they are what they are and not another thing. They shouldn’t be left out in the wilderness, shunned even by the fictional and supernatural; they should be accepted as bona fide parts of nature. Let’s not multiply worlds unnecessarily. If it turns out that there is no supernatural world, then there will only be the natural world left (fictional worlds not being real), and that world is capacious enough to include those hitherto excluded members.[4]

 


[1] How the editors of the OED would define “physical” in this context is left unclear, and the difficulties are notorious. Is gravity physical (Newton declared it “occult”)? What about parental behavior in animals? Perhaps they merely mean “non-psychological” (not that the concept of the psychological is free of difficulty).

[2] What applies to numbers applies equally (if not more so) to geometric forms: they too belong to nature, as do space and time. On the other hand, in addition to fictional worlds, nonsense worlds also fail to belong to nature: mome raths and borogoves are not parts of nature, even if non-fictionally meant. Are merely possible worlds part of nature? That’s a tough one, which I leave for homework.

[3] There seems to be a natural (though regrettable) human tendency to restrict honorific concepts more narrowly than is reasonable—witness the concepts person, right, true, physical, reason, rational, and others. The concept of nature belongs to this list: people have a tendency to restrict it to certain preferred examples or exaggerate certain alleged paradigms (mountains, rivers, pretty birds). When people say they are “nature lovers” this is primarily what they have in mind, so that mathematics, logic, and ethics don’t get a look in. Truly enlightened nature lovers, however, adopt a more inclusive stance.

[4] Is philosophical ethics part of nature? Is moral realism as a theory part of nature? Is Platonism as a doctrine part of nature? The answer to all three questions is yes, since they are aspects or expressions of human nature (language and belief being part of human nature). Secularism leads naturally to the hegemony of nature. The less real the supernatural seems to you the less likely you will be to compare exceptional cases to it; thus nature swallows up the real in proportion as it replaces the supernatural. In the days when the supernatural seemed everywhere it was easy to assign mathematics, logic, and ethics to a place at least adjacent to the supernatural realm; but once that world was eclipsed these areas needed a new home—and nature seems the natural place to put them. Reality thus merges with nature in a secular age.

General Reactivity Theory

Consider the simple reflex: the blink reflex or the patellar reflex, for example. There is a stimulus and a response: the stimulus is an impinging physical event and the response is a movement of the body. The stimulus elicits the response without any psychological intermediary; the reflex arc exists outside of consciousness and will, automatically, inexorably. It is a case of straight physical causation. But it is not quite as simple as it may appear, since more structure must be postulated than mere stimulus and response (plus linking pathways). First, the stimulus must be detected and recognized for what it is, if not by the person then at least by the person’s nervous system—there needs to be a stimulus receptor. Second, the response is not produced purely by the physical properties of the stimulus but by a suitable response generator—a mechanism for triggering an appropriate movement of the body. It would be no use if a tap on the knee caused the eyelid to close or if an incoming missile heading towards the eye caused the knee to shoot up! The generator must deliver a response that corresponds to the stimulus; nothing about the physics of the stimulus alone determines how the body will react. So the twofold stimulus-response structure is really a fourfold stimulus-receptor-generator-response structure. And note that the extra two ingredients are more internal to the organism than the stimulus and response as such, and they are more information-involving. Furthermore, it would be wrong to characterize the stimulus-response nexus as merely a cause-effect nexus, as if there was nothing special about stimuli eliciting responses compared to physical causes bringing about physical effects. Lightning hitting a tree and scorching it is not an instance of stimulus and response, as falling to earth is not a response to the stimulus of gravity (the motion of the planets is not a response to the stimuli supplied by the sun). The stimulus-response relation is a special type of causal relation, if it is a causal relation at all. The obvious point is that it is purposive—a matter of design, adaptive and teleological. The response reflects the needs of the organism and contributes to its survival. We call an event a stimulus because it affects organisms in certain ways, not because of its physical parameters. And what counts as a stimulus for one type of organism may not count as a stimulus for another type, depending upon its receptivity and responsiveness (e.g. sounds that are too high for humans to hear or light that lies outside the humanly visible spectrum). The concepts of stimulus and response are proprietary to living systems and involve teleological notions. A stimulus is not just any old cause and a response is not just any old effect. The OED puts it nicely: a stimulus is defined as “a thing that evokes a specific functional reaction in an organ or tissue” (deriving from the Latin “goad, spur, incentive”). In short, it is a biological notion. The word “respond” is also defined by the OED in loaded terms: “to say or do something as a reply or as a reaction”. Correspondingly, a stimulus is said to elicit a response not merely to bring it about, as a response is a reaction to a stimulus not merely a consequence of it. These are all biologically loaded notions by no means equivalent to physical concepts. Living organisms are the proper subjects of these notions, and they purport to describe the specific nature of such entities.
Even the simplest reflex is conceptually rich in the way outlined.[1]
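The fourfold structure can be made vivid with a toy model. What follows is a minimal programmatic sketch; the language, the names, and the mappings are all hypothetical illustrations of the architecture just described, not claims about actual physiology.

```python
# A toy model (all names and mappings hypothetical) of the fourfold
# stimulus-receptor-generator-response structure. The receptor detects and
# classifies the stimulus; the generator selects the response appropriate
# to that classification. Nothing in the raw physics of the impingement
# fixes the response: that is the work of the internal machinery.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Stimulus:
    kind: str         # e.g. "patellar_tap", "object_near_eye"
    magnitude: float  # physical intensity of the impingement

def receptor(s: Stimulus) -> Optional[str]:
    """Detect and classify; events the system is not receptive to are not stimuli for it."""
    recognized = {"patellar_tap", "object_near_eye"}
    return s.kind if s.kind in recognized else None

def generator(detected: Optional[str]) -> Optional[str]:
    """Map the classified stimulus to the biologically appropriate response."""
    mapping = {
        "patellar_tap": "knee_extension",  # not an eyelid closure!
        "object_near_eye": "blink",        # not a knee jerk!
    }
    return mapping.get(detected) if detected else None

def reflex(s: Stimulus) -> Optional[str]:
    return generator(receptor(s))

print(reflex(Stimulus("object_near_eye", 0.8)))   # blink
print(reflex(Stimulus("lightning_strike", 9.9)))  # None: a cause, but not a stimulus
```

The point of the sketch is simply that the response is fixed by the internal receptor-generator machinery, not by the physical magnitude of the impingement.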

The question I am concerned with is whether this network of notions has a wider application in describing the operations of mind. To put it with maximum bluntness: is the mind a stimulus-response system? I shall suggest that it is, so the simple reflex can act as a model for the general character of the mind. I intend this to sound outrageous, given the uses to which the notions of stimulus and response have been put, but on reflection we may have thrown the S-R baby out with the behaviorist bathwater. This may still be a useful and accurate way to talk, even though it has been multiply abused. The first point to note is that it has nothing essentially to do with any attraction to materialism or behaviorism: stimuli and responses may be psychological in nature, irreducibly so. Nor need they be observable or measurable or public or experimentally usable. For instance, we may reasonably say that pain is a response to a harmful stimulus, even if the pain is unobservable and non-physical—even if it is entirely immaterial. The point is that the sensation is automatically elicited by the stimulus—the two things stand in the S-R relationship. Likewise, a sensation can act as a stimulus eliciting a response, as when a pain elicits a cry or an itch elicits scratching. In fact, it is entirely appropriate to describe perception in general as a reflexive stimulus-response system: the impinging stimulus, say irradiation of the retina, elicits a sensory response, say seeing a red object. This is a “functional reaction” to an incoming stimulus—and the physical impingement acts as a stimulus for the organism in question. The perceptual response is an adaptive reaction to the organism’s environment, entirely analogous to the blink reflex or the perspiration reflex or the flinching reflex or the disgust reflex or the salivation reflex. Seeing an object is a stimulus-response linkage. And let it be noted that the extra layers of receptor and generator are present here too: the senses need receptors to register the stimulus and a mechanism to generate the percept that results. There is nothing behaviorist about this in the classic sense. It is unapologetically mentalist.

The interesting question is how far the S-R model can be extended, and here some controversy can be expected. Let’s consider belief, emotion, and intentional action. Belief can be viewed as a response to the stimulus afforded by perception: you see a red object and respond by forming the belief that there is a red object there. We need not suppose that forming a belief is an action—it is not—but we can suppose that it is a reaction to a percept; not all reactions are volitional. The cognitive system is set up in such a way that beliefs are triggered by perceptions: beliefs are “functional reactions” to the stimuli afforded by the perceptual apparatus. The seeing elicits the believing. In the case of beliefs that arise by inference we can say that the conclusion belief is a response to the premise beliefs: beliefs can function as stimuli that evoke other beliefs as response. Again, this is adaptive and functional—as is very evident for animals solving problems by reasoning. The premise beliefs don’t just cause the conclusion belief; they act as stimuli that elicit that belief—that is, they are part of a functional biological system. In some cases the inference pattern may be instinctual, in others learned, but it is an S-R arrangement in either case. In the case of emotions, the response is triggered by an external stimulus, say a threat; and the response may be rapid, automatic, and unavoidable (again think of animals). The emotion is an evoked response, also functional. Fight or flight responses are mediated by emotion, and the emotion is as much a response as the behavior that goes with it. We ask, “What was your reaction?” when hearing about some untoward experience of a friend, and expect to be told what emotions were evoked. This is just stimulus and response, though of a more complex and mediated nature than simpler cases. In the case of intentional action we can introduce need and desire as stimuli: the organism is prodded to act (goaded or spurred) by its internal appetitive states, say hunger or amorous desire. The desires act as stimuli to the volitional (motor) system and they serve to elicit appropriate actions.[2] These stimuli can vary in intensity as perceptual stimuli can; the response evoked is thereby modified and enhanced. Logically, the case is just like other S-R linkages—biologically functional causal patterns. And again it will be necessary to postulate receptors and generators as well as the stimuli and responses themselves—all the machinery of response elicitation.
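The extension can also be rendered schematically. Here is a toy sketch, with wholly hypothetical names, of the chaining idea: the response of one S-R link serves as the stimulus for the next, with desire acting as an internal stimulus to the volitional system.

```python
# A toy sketch (hypothetical names throughout) of S-R chaining: each link's
# response becomes the stimulus for the next link, as in impingement ->
# percept -> belief -> action, with desire goading the volitional system.

def perceive(impingement: str) -> str:
    # perception as a functional response to a physical stimulus
    return f"percept({impingement})"

def believe(percept: str) -> str:
    # belief as a response elicited by the perceptual stimulus
    return f"belief({percept})"

def will_action(desire: str, belief: str) -> str:
    # desire stimulates the volitional system, given the belief
    return f"action(prompted by {desire} in light of {belief})"

# The response of one link is the stimulus of the next.
state = "red_object_irradiating_retina"
for link in (perceive, believe):
    state = link(state)

print(will_action("desire(hunger)", state))
# action(prompted by desire(hunger) in light of belief(percept(red_object_irradiating_retina)))
```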

There is thus a recurring pattern in the functional architecture of the mind: stimulus-response relations elaborately organized, varying from simple to complex. There are chains of such patterns, as a response becomes a stimulus to a further response, which in turn becomes a stimulus.[3] We must purge ourselves of old associations of these notions deriving from an antiquated behaviorism. No doubt the early behaviorists were influenced by nineteenth-century biology, in which the idea of biological responsiveness played an important role—as in early studies of the nerve impulse. Neurons were discovered to work by means of stimulus and response, as one neuron abuts another and evokes action potentials in it. Tissue in general was described as “irritable”—reactive, alive, lively. The behaviorists then took this useful way of thinking and converted it into a positivistic picture of public bodily events. But the conceptual apparatus itself is quite independent of this move, merely recording the biological fact of one thing eliciting another in a functional manner. The whole organism is composed of reactive organs that respond in certain ways when stimulated in certain ways, the brain being no exception. It is then a short step to regarding the mind, itself a biological organ, as likewise an array of S-R linkages. This enables us to rescue the mind from pre-Darwinian obscurity and religious obscurantism (the soul etc.) and locate it within the biological organism.[4] We thus obtain a healthy biologism about the mind not a doctrinaire behaviorism (unless we choose to liberate behaviorism from its materialist and positivist dogmas by opting for internal behaviorism). It is true that the notions of stimulus and response must be extended considerably from the case of the simple reflex—in particular, in relation to the automatic character of such reflexes—but there is no logical bar to accepting that some S-R connections may be less fixed and invariable than others. There can be probabilistic response elicitation, even resistance to some types of potential stimuli (e.g. strong but unwelcome desires). We can allow that theory formation in the sciences counts as an advanced form of response elicitation by the stimuli offered by the evidence, odd as it may sound to talk this way. The S-R schema does not stop at the higher cognitive functions when suitably generalized. In addition, it should not be supposed that the mind admits of no other useful mode of description: we can certainly acknowledge that there are mental competences, mental faculties, and mental qualities. It is just that mental operations have a stimulus-response structure: mental transitions are always governed by S-R logic. Even the humble patellar reflex needs its underlying machinery—competence, if you like—and linguistic stimulus-and-response undeniably relies on an underlying structural competence. The same is true of organs of the body: each needs its specific architecture and cellular substrate in order to permit the stimulus-response connections in which it engages. But when we describe an organism as a situated living thing, acting and reacting in the world, transitioning from one state to another, we need the conceptual apparatus of stimulus and response. All I have done here is suggest applying it more widely than is customary. For it provides a nice unifying framework for thinking about the mind, shorn of all connection with behaviorism, conditioning learning theory, anti-nativist empiricism, and anti-cognitive bias.
Cognitive science turns out to be S-R psychology after all, when properly understood.[5]

 


[1] The stimulus-response concept is not the same as the input-output concept. The latter concept is more general, applying to non-living systems as well as living ones, and it lacks the teleological connotations of the former concept. It derives more from computer technology than traditional biology.

[2] There is nothing contrary to freedom in this fact, given a good analysis of freedom, but I won’t go into the question of free will now.

[3] In psychology it is customary to distinguish the proximal and the distal stimulus, e.g. the light proximally impinging on the retina and the distant object sending out that light. The same kind of distinction can be applied to the full range of S-R relations: the proximal stimulus to a belief might be a conscious perceptual state while the distal stimulus is a pattern of light on the retina. Also we can define the same kind of distinction for responses: the proximal response for a percept might be a belief while the distal response is an utterance expressing that belief. The same distinction can be applied to desires and emotions, where there can be closer and more remote stimuli and responses.

[4] I don’t mean to suggest that all mystery is removed thereby, only that the mind is set in its proper place as a natural attribute of organisms. We get a conceptual continuity between the various aspects of organisms.

[5] This enables us to inject a welcome dose of biology into cognitive science, which tends to view the mind as inherently divorced from the processes of life, like a computing machine. It is not as if theorists have adopted a computer model for the activities of the body. Cognitive science has in effect erected a new form of dualism, which the S-R schema helps us transcend.
