On Cancelling

Here is a thought experiment for you. Suppose your top ten philosophers had all been cancelled: removed from pedagogical employment, prevented from publishing, and generally shunned. This possibility could cover the philosophers of the last hundred years or of all time. Suppose that their thoughts had therefore never seen the light of day. Suppose too that no one else had ever had them. No Plato, no Socrates (who was rather drastically cancelled), no Aristotle, Descartes, Locke, Leibniz, Berkeley, Hume, and Kant (you can make your own list). Philosophy would have had a very different history. These luminaries might or might not have been guilty of this or that (e.g., blasphemous speech); the thought experiment is more piquant if we suppose they were not but that the spirit of the times required it. Do you think that would be a bad thing? Do you think an effort should have been made to mitigate the effects of their cancellation? Suppose they had all gone on to become rich successful men living happy lives, but unable to contribute to philosophy (they clearly had brains). Meanwhile other second-rate individuals formed the philosophical tradition, perhaps those most responsible for doing the cancelling. Nobody worries much about this, however, since the cancelled philosophers never had the chance to do their work and hence never became known as the great thinkers they could have been. Do you think this would be a tragedy for philosophy or just a negligible historical hiccup?

I have existed in a state of professional cancellation for over ten years now. Before that I had normal access to teaching positions, publishers, conferences, professional contacts, and so on. Not anymore. I contributed to philosophy continuously for nearly forty years. If I had been cancelled earlier, that would not have been possible—my teaching, publications, and professional activities would have been cut off even earlier than they actually were. Let’s suppose it happened before 1989 when I published “Can We Solve the Mind-Body Problem?”—that article would never have existed. My other books, articles, reviews, and lectures would never have existed. Do you think that would have been a bad thing or not? Do you think that if it had not happened, I would have continued to produce what I used to produce? Of course I would have. So, many things I would have done I have not been able to do—books written, talks given, students taught. All that has been expunged from intellectual history. That is the cost of cancellation (and it’s not just me). Do you think that is all fine and dandy? Have you been complicit in it? (I hope not.)

As it happens, I have not been completely silenced by cancellation, though certainly much muffled. Because I have this blog. If I did not, none of the ideas contained herein would have reached the minds of interested parties. It is sheer luck that I have this place to make my results public. There has been a concerted campaign to keep me from exercising my normal rights to communicate my ideas to others in the usual ways. I could have gone completely silent. I could have decided to admit defeat and simply given up thinking about philosophy and writing it. Then nothing of my thoughts over the last ten years would ever have entered the historical record. Those thoughts would never have existed in all probability. And don’t think I have never considered it—what is in it for me in the time and effort it takes to write these pieces? So, why do I write them? I could be having fun, playing tennis, kite surfing, throwing knives, making music, travelling, reading novels, living the life of Riley. I write them for you—for other people, for posterity, for the good of the human race. That’s why I do it. Fortunately, I am strong-willed and resilient enough to face down the cancellation, to disdain it, to rise above it. This is my gift to the world, my moral duty as a thinker. I believe these papers have intellectual value. I believe people benefit intellectually from reading them. I believe it would be a tragedy if I never bothered to produce them. That may sound immodest to some (“narcissistic”), but it seems to me the simple truth. Before cancellation I was a very successful and esteemed philosopher; that has not changed. If anything, I am a better philosopher than I used to be. I have not allowed my present cancelled status to deflect me from my intellectual calling. True, I write with bitterness in my heart, with anger and disgust, but I still do it. I couldn’t live with myself if I didn’t.
I have never felt so altruistic in my life, so philanthropic, so generous (it’s not a particularly good feeling). Not a penny do I make from these writings, these uncounted hours of labor; not a promotion, not a pay increase, not even professional recognition. I do it because it is the best part of me, my God-given talent. I should be thanked for it, but of course stony silence is all I get from the American philosophy profession. Friends appreciate my efforts and so do many readers from across the globe who are not complicit in the cancellation (unlike those who joyfully revel in it). I have not allowed the cancellers and their enablers to rob the world of the products of my labor, as I could so easily have done. I have continued to contribute to philosophy, despite the attempts to prevent it, successful as they have undoubtedly been. This is no mere thought experiment.[1]

[1] I could name many names, recount many incidents, indict many conspirators; but I will refrain from doing so. I think the bare facts speak for themselves. I do wonder if the responsible individuals even think about what they have done and are still doing; or is that too difficult? Notice the silence.

On Writing

When I was a professor of philosophy working in a philosophy department, I used to jot down notes on ideas I had that seemed promising. I was too busy to write up the ideas properly, so I made the notes as reminders for later work. The exigencies of teaching had priority (a great benefit of AI will be mechanizing essay grading). Sometimes months or even years went by before I could get back to these nascent ideas. Upon retirement (if we can call it that) I had time to revisit some of these old notes and convert them into papers. It wasn’t always easy, memory being what it is. But when I had new ideas, which I often did, I had the unbelievable luxury of being able to write them up straight away. At first, I simply let the resulting essays accumulate on my hard drive, expecting to publish them in some form. But they quickly began to proliferate wildly and publication became more of an issue (other factors were also involved), so I decided to put them on my blog. That way they could get out there instead of just lying around in my house. Since I was writing at the rate of about two papers a week, the passage of time led to the production of a great deal of material. I haven’t counted what I have written in the last ten years but I think it is on the order of a thousand papers. That adds up to about ten substantial volumes—far too many for a university press. I now just think of my blog as where I publish. A bonus is that I get to write the way I want to write, not the way editors want me to write (tediously at best). I’m also glad that my work goes out to the whole world, not just to a limited number of English-speaking countries. According to my website (which I don’t run), the countries that visit my blog include the Philippines, Mexico, India, Pakistan, Norway, Sweden, Italy, and Germany—as well as America, Britain, Canada, and Australia (it varies from week to week). It’s like the Top Ten.
I never intended to publish my work this way, but that is how it has turned out. What would I have done without the internet? I don’t even think of the journals anymore, or the university presses. It is a welcome freedom. I certainly like what I write more than I used to, because I am freed from academic conventions that impede good writing.

Correlational Semantics

I will describe some possible uses of correlational semantics. I don’t say I subscribe to these uses; I offer them as a gift to those with anti-realist or fictionalist yearnings in certain areas. It may help ease some discomfort caused by such yearnings. Let’s begin with a relatively simple case: feature-placing sentences like “It’s raining”. The utterance of such a sentence can be true yet contain no reference to the place at which it is raining—for example, London. The sentence is not semantically comparable to “London is rainy”. The word “it” is not a referential singular term denoting London, even though the truth of the utterance depends on the fact that rain is falling in London. There is a correlation between uttering “It’s raining” while in London and the statement “Rain is falling in London”, but nothing in the former denotes London. The former is true in virtue of the latter but it isn’t about what the latter is about in the sense that it contains a term that refers to what “London” refers to: the two sentences don’t mean the same thing even at the level of reference. The word “it” here is a kind of dummy subject expression, not a genuine referential term. Correlation is not the same as denotation, even though truth may depend on correlation. We may say that our sample sentence alludes to its correlate (it presupposes a particular place, typically known to the speaker), but it contains no term that denotes that place. We know that it rains at places (where else might it rain?) and we assume that that is what is going on in the present instance, but epistemics is not semantics. We might even say that our sentence connotes a place where it is raining while not denoting any such place.[1] Places belong in its semantic periphery, so to speak. Now consider fictional names like “Hamlet” as it may occur in a sentence like “Hamlet is a prince”.
That sentence seems true (Hamlet isn’t a pauper or a porcupine): we can insert it into the formula “s is true if and only if p”. Yet Hamlet does not exist, so (we might say) the sentence ought not to be true. What is it that makes it true? The obvious answer is Shakespeare’s intentions: he decided to create a character named “Hamlet” who was a prince. The name “Hamlet” doesn’t denote Shakespeare or his intentions, but anything true of Hamlet is due to Shakespeare’s intentions. There is semantic correlation but not semantic denotation. And anyone familiar with fictional names understands this kind of dependence-without-denotation; it is part of our linguistic competence. Thus, statements containing fictional names are true in virtue of the author’s intentions, which are correlated with the name; but they don’t refer to such intentions—quite possibly they refer to nothing. The correlation explains the truth of the statement, but there is no denotation relation connecting the two. Truth and denotation come apart; the former doesn’t require the latter. It is therefore possible to maintain both that the statement is true and that it has no reference, because reference comes from elsewhere, in the shape of a correlated statement that does refer. The connoted (not denoted) statement does the truth-conferring work, leaving the original statement to luxuriate in its non-referential indolence. We can combine fiction with truth by availing ourselves of correlational semantics. Otherwise, we are saddled with truth without reference, or no truth at all. Hamlet doesn’t exist, but statements about him can still be true, because “Hamlet” is correlated with Shakespeare’s intentions, which do exist. Thus, anti-realism does not imply lack of truth (or weird kinds of truth). The anti-realist does not have to deny the obvious.

Now that the form and point of correlational semantics is made clear, we can extend it to other areas. Some philosophers (named “positivists”) have maintained that theoretical entities are mere fictions: but then how can statements about them be true? Correlational semantics supplies an answer: these statements are correlated with other statements about existent non-theoretical entities such as experimental results, sense-data, retinal stimulations, or what have you. These entities confer truth even when the statement correlated with them refers to nothing real. Correlation steps in where denotation fails. Statements about electrons, say, can be true even if there are no electrons, because they are made true by non-electrons—non-existence is no bar to truth. We simply detach truth from reference and existence; and, indeed, there is nothing in the concept of truth itself to compel a rigid connection, because truth simply requires correspondence (or the possibility of disquotation) not reference and existence. No sentence is true but reality makes it so, but the sentence need not denote this reality. We can have a correspondence or redundancy theory of truth without building in reference in the sentence declared true; for we can appeal to correlated sentences or states of affairs. The concept of truth itself doesn’t even require a referential structure (grammar). Truth is more general than reference, more capacious. Similarly, it can be maintained that folk psychology refers to nothing real and yet its propositions are true, since they are made true by propositions about the brain.[2] It is true that I believe that snow is white even though there are no such things as beliefs, because that statement is made true by the condition of my brain as it controls my behavior. Thus, one can consistently be a mental eliminativist and also ascribe truth to mental propositions—rather like the anti-realist about fictional characters. 
For there is something other than mental reality that can confer truth on such propositions; and surely it is true that I believe that snow is white (even though that phrase refers to nothing, according to the eliminativist). Correlational semantics allows you to have it both ways (if both ways appeal to you). Correlation not reference; connotation not denotation.

I suspect the philosophers most well disposed towards correlational semantics will be ethical expressivists. Their official view is that ethical sentences do not report facts, are not true, and cannot be said to be known; there are no values “in the world”. Or better put: since there are no values in the world, ethical utterances cannot be true, even though they appear to be true. Wouldn’t it be better to accept that they are true but that they don’t denote values-in-the-world? That is what correlational semantics allows: expressivism combined with truth. Suppose an experience of pleasure occurs and someone remarks “That’s good”: the expressivist says that no evaluative property is thereby ascribed to the pleasure, since there are none such; instead, the utterance is like an outburst of emotion or an order to act in a certain way. But there is an alternative story: the utterance is true in virtue of the existence of the pleasure but no property is ascribed by the word “good”. The word isn’t even a predicate (in some versions of the doctrine). The sentence is not true in virtue of a denoted evaluative property but in virtue of the correlated pleasure property—correlation not denotation. The world does contain pleasure and it makes such propositions true, but evaluative discourse does not refer to this property. The evaluative force comes from the attitude expressed not from the state of affairs that makes the sentence true. Thus, truth is compatible with expressivism concerning value; we are not forced to reject ethical truth just because ethical propositions don’t refer to ethical properties. Put differently, the truth-makers belong to the supervenience base not the supervening values (they are add-ons derived from human psychology). Again, I am not saying I accept this position, only that it exists in logical space and has attractions for a convinced expressivist, particularly one who refrains from withholding truth-values from ethical statements. 
Something objective is involved in making ethical statements true, but it isn’t the denotation of ethical words; rather, it is what those words connote in the way of correlation. We know these correlations, so we accept the truth of what is said; but this doesn’t entail that the correlated facts are denoted by moral terms—they are not. Accordingly, we get to be moral anti-realists and accept objective ethical truth-makers. Everyone is happy (well, not everyone). Some words don’t denote anything real, but that doesn’t stop them appearing in true statements, thanks to suitable correlated realities. Only referentially empty sentences with no correlates will turn out not to be true or be incapable of truth, such as ungrammatical or meaningless types of sentences. The sentence jumble “It thing number gone” is not true and has no true correlate; nor does “All mimsy were the borogoves”; nor “Colorless green ideas sleep furiously”. But it is possible for a sentence to contain empty terms (no denotation) and the sentence be perfectly true; it has, we might say, parasitic truth. Correlational semantics is designed to accommodate these cases. We must give up the dogma of denotational truth.[3]

[1] See my “On Denoting and Connoting”.

[2] See my “Semantical Considerations on Mental Language” and “Ontology of Mind”.

[3] There are other reasons we might want to give up the dogma of denotational truth: a devotion to hard redundancy theories, adherence to coherence or pragmatic theories, skepticism about the notion of reference (itself having several sources). The point I would emphasize is that nothing in the concept of truth analytically implies that true sentences or propositions must have a referential semantics: truth does not require referential relations between sentence parts and whatever in the world makes the sentence true. In principle, a sentence could have no such structure and still be true. Some linguists and others have toyed with the idea of a pre-referential level of meaning in the child’s understanding of language (“RED!”, “COW!”, “MAMA!”); such a level would not preclude application of the concept of truth.

Are We Animals?

I am interested in the concept animal, its analysis and role in our thinking and acting. I am also interested in the use of the word “animal”, its denotation, connotation, conversational implicatures, psychology, and sociology. These interests have a bearing on the ethical treatment of animals and on the nature of human intelligence. For most of human history it would have been denied that we are animals (“beasts”), mainly for religious reasons, but Darwin initiated a movement that denies this denial. For all intents and purposes, we are rightly classified as animals, though we do not always talk that way (ordinary language has not kept up with biology). The reasons for this are threefold: we are one species among others; we evolved from animals; we are similar to animals physiologically. How could these things be true and we not be animals, though no doubt exceptional animals? We are not plants and we are not gods; we are animals like other animals. That is our similarity-class. That is our taxonomic category. I take it this would now be generally accepted, if not warmly welcomed. But it would be fair to say that it has not completely sunk in; the culture has not fully absorbed it. Consider the following sentences: “I am an animal”, “You are an animal”, “She is an animal”, “Queen Elizabeth II was an animal”, “Jesus Christ was an animal”. These may all be literally true, but their connotations and implicatures prohibit their utterance in normal circumstances—they may be regarded as insulting, impolite, blasphemous. We don’t like the sound of them. They suggest dirty habits, aggressiveness, hairiness, lack of intelligence. They sound degrading. Why? I think it is because we have three main characteristics that set us apart from other animals: we live in houses, we wear clothes, and we speak. We are not unclothed, living in the wild, and bereft of speech.
To this the obvious reply is that we are animals distinguished by the possession of these traits—we are exceptional animals, but still animals. Whales are also exceptional animals, but still animals. If they think of themselves in relation to other species (and I don’t doubt that they do), they probably regard themselves as a cut above: not like other species, not just animals. They don’t like to be classified as belonging to the same group as those creatures (“beasts”). They don’t care for the association—just as we don’t. We don’t like the label. But both species have to admit that their kinship with other creatures justifies using the same term to cover them. We are animals reluctant to be called “animals”.

The OED provides an instructive, if not entirely satisfactory, definition of “animal” that is unusually long: “a living organism which is typically distinguished from a plant by feeding on organic matter, having specialized sense organs and nervous system, and being able to move about and to respond rapidly to stimuli”, adding the codicil “a mammal, as opposed to a bird, reptile, fish, or insect”. The word “typically” is inserted to prevent counterexamples involving slow animals and fast plants, or the possibility of plants with eyes or ears, or insect-eating plants. Also, in their zeal to distinguish animals from plants the authors fail to provide a sufficient condition that distinguishes animals from gods or other supernatural beings (surely gods can eat, have sense organs, and can respond to stimuli). The codicil is interesting but puzzling: are the authors supposing that only mammals are animals? That is not zoologically orthodox and I would say plainly false, but it is not merely bizarre, because we don’t tend to use the word “animal” in application to these zoological groups. Why is this? I think there are two sorts of reasons: reptiles, fish, and insects are cold-blooded; and birds resemble us in important respects, at least so far as folk zoology is concerned. Being cold-blooded sets some animals apart from other warm-blooded animals, so that we need a subdivision in the total class of animals; we thus avoid calling the cold-blooded animals by that name, while not denying that they are animals. In the case of birds, we recognize three features of them that bring them close to us: they build nests and live in them; they sing; and they have attractive plumage, rather like clothes. It is therefore felt to be demeaning to call them animals, as it is felt to be similarly demeaning to us. We are both animals with a difference, superior to other animals, supposedly (we are very fond of birds).

It is the human body that invites the appellation “animal”—its similarity to animal bodies generally. Our anatomy and physiology resemble those of animals already so called. Clearly, our bodies derive from earlier animal bodies; we might well be prepared to accept that we have an “animal body” (but a non-animal soul), even before Darwin came along. Hence the body is deemed a source of degradation, shame, mortality. It is not the human mind that encourages calling us animals; it isn’t flamboyantly animal in nature. If we didn’t have animal bodies, but supernatural or robot bodies, we would not describe ourselves as animals like other animals. Is it correct to say that we have animal minds? That is not such an easy question: for our minds are not indelibly imprinted with animal characteristics. On the one hand, we have minds far superior to any animal mind in certain respects: art, science, technology, music, literature, courtly love. On the other hand, our minds are in part clearly shaped by our animal body: hunger, thirst, fear, pain, lust. What an animal wants and feels is a function of its body type. Perhaps our psychological kinship with other animals might tip us off to an affinity and suggest continuity with them, but it is not so salient as the body. It would not fly in the face of the facts to say that we don’t have an animal mind, though we do have an animal body; at most our mind is partly an animal mind (though our body is completely an animal body). This would justify the protest that I am not (completely) an animal, because my mind transcends anything in the animal world (my body, however, is stuck in that world). The correct form of statement would then be “The Queen’s body is wholly animal but her mind is only partly animal”. Does that sound a bit less discourteous?

How does all this bear on the two questions I mentioned at the beginning? First, it is difficult to defend speciesism once we humans are declared animals too; there is then no sharp moral line between us and the animals we mistreat. It doesn’t sound terribly convincing to say that we have a right to abuse other animals but not the animals we are. Why should we be treated with kid gloves if we can abuse and exploit our animal kin? “All animals are created equal” should be the maxim of the day. Mere species difference shouldn’t trump animal continuity. Second, full recognition that we are animals too, derived from other animals, should undermine claims of unlimited intellectual capacity: for animals are not generally omniscient. True, our minds are superior to other animal minds (in certain respects) but they are still the minds of an animal. We should therefore expect cognitive limits. The general lesson I would urge is that the word “animal” must cease to have negative connotations, so that no unease is produced by calling queens (and kings) animals. We should be able to say “Your royal animal highness” and not be accused of offenses against the monarchy.[1]

[1] A little anecdote may shed light on the origins of this paper. The other day I was feeding my pet tortoise and I noticed its tongue as it ate. It was small and pink, remarkably like a human tongue. I reflected: I am an animal too, just like you. I don’t think this is an easy thought to have, given the chasm we tend to set up. I wonder if any animal really thinks of itself as an animal. It doesn’t seem like something to be proud of (unlike species identity: does any animal feel itself to belong to an inferior species?). I myself am quite happy to call my tortoise my biological brother.

Ontology of Mind

We have no clear ontology of the mental. We have no good way of talking generally about the mind. This has always been a source of awkwardness and embarrassment. We philosophers talk routinely of mental states, attributes, properties, traits, events, processes, and entities; but we volunteer very little in the way of justification or explication of these terms of art.[1] Some there are who deliberately eschew such metaphysical-sounding locutions, preferring instead to tread the safer terrain of language—they stick to talk of mental predicates or terms or ascriptions. The suspicion lurks that we have borrowed these ontological terms from elsewhere (physics, chemistry, biology) and extended them to the mind without asking too many questions. How else are we to describe the components of the mind (and notice that bit of borrowing)? This suspicion is amply justified, as a trip to the dictionary will confirm (amazing how philosophers ignore the dictionary, as if they have something to fear from it—and they do). The word used most commonly by the most cautious philosophers is “attribute”: we are said to have various “mental attributes” such as believing that it’s raining or desiring a piece of cake or feeling a pain in the foot. But what is meant by “attribute”? The OED supplies “a quality or feature regarded as characteristic or inherent”. Thus: height, weight, skin color, intelligence, patience, sense of humor, and so on. The quality must be somewhat enduring, defining, characteristic; it can’t be transitory or extrinsic or untypical. So, the following are not attributes of a person: geographical location, occupation, history, birthplace. Such things are not characteristic of a person or inherent in him. Of course, they are true of the person, but their truth does not arise from attributes of the person. Still less can it be said that what someone believes, or desires at a particular time, or feels in his foot, is an attribute of that person.
It would be bizarre to say “One of John’s attributes, in addition to his great height and intelligence, is that he currently believes it’s raining”. Even the hardened philosophical theorist might blanch at that piece of verbal slippage (or garbage). Matters don’t improve if we switch to “property”: the OED pithily gives us, as number 4 in its list of definitions (after stuff about land and buildings), “characteristic of something”. If we look up “characteristic”, we get “a feature or quality typical of a person, place, or thing”. But, of course, an individual’s belief that it’s raining is not typical of that individual, except in certain imaginable cases. Animal species have characteristics (typical traits), but it is a misuse of language to say that what someone believes or desires or feels is a characteristic in that sense. What is called a mental state is not a characteristic of the individual in that state, though it may be quite true that he or she is in that state. The truth of a mental statement does not require that the verb signifies an attribute or property or characteristic of the person spoken about. In this respect mental statements are like existential statements: these too can be true without supposing that existence is an attribute or property or characteristic of the thing in question (the same goes for “true”). Not every fact is a “subject-attribute” fact. The philosopher is evidently stretching the ordinary meaning of “attribute” in an effort to find a word that covers mental…mental what? That is the problem I am identifying.

Much the same difficulty attaches to using “event” or “process” in application to the mind: are we engaging in illicit conceptual overreach? For “event” the OED gives us “a thing that takes place—a public or social occasion”. Is a thought at a given time really “a thing that takes place”—let alone something public and social? It is a thought all right, and it occurs in time, but is it really an event—is there anything eventful about it? Things change in the mind, it seems safe to say, but that doesn’t imply that the mind houses things called mental events (analogous to weddings and funerals). The word “process” is defined as “a series of actions or steps towards achieving a particular end”: that clearly does not apply to what happens in the mind when (say) you recall something or see something. We are suffering from linguistic and conceptual creep. It isn’t that we already talk that way about the mind—which is why philosophers (and psychologists) feel the need to justify or excuse such verbal innovations. The plain fact is that they are transferring words from their original home into alien and inhospitable territory (a typical philosopher’s vice, as Wittgenstein pointed out). They do this because they have no alternative, but they conceal from themselves the distance they are traveling and take refuge in metaphor. In this movement of thought they are abetted by standard first-order logic in which the subject-predicate form is sanctified and glorified (as in the recurrent formula “Fx”). We thus see everything through the lens of an object possessing an attribute, using certain paradigms as anchoring models.[2]

What is the philosophical significance of this verbal and conceptual waywardness? Does it show that the mind lacks an ontology altogether—that it lacks being? No: it shows only that we cannot formulate, or at least have not managed to formulate, the correct ontological categories. We are like people who cleave to the notion that existence is a first-order property, despite their recognition that this notion seems fishy or plainly false, because they lack the conceptual resources with which to formulate the correct theory that existence is a second-order property of a propositional function (as we may suppose for the sake of argument). Our ontological categories have been forged on other territory, and we are trying to make them fit this new domain, which they fail to do, save metaphorically. In this intellectual environment it is predictable that certain myths will flourish—attempts to impose the familiar on the unfamiliar. We strive (vainly) to think of the mind in such a way that our prior ontological categories fit the phenomena. Thus, we have the theater myth according to which the mind contains simulacra of real people and things on which the introspective eye gazes; these are conceived as entities that possess attributes. Then there is the museum myth: the contents of the mind are like exhibits in a museum, gleaming under glass. These myths are generally regarded as such, but we also have myths that masquerade as fact (and may indeed be based on fact): brain myths, behavior myths, pictorial myths, language myths, qualia myths. In each case we have a doctrine that preserves a semblance of the domains in which the subject-attribute model works best—the brain, the body, pictures, linguistic items, atoms of pure consciousness (vaguely modeled on dabs of paint perhaps, or snatches of music). Beliefs, desires, and sensations are thus represented as properties, characteristics, or attributes.
But these myths fail to support the ontological burden placed on them: they don’t justify the category terms used in their wake. They don’t provide a clear sense for the loose talk of attributes, properties, characteristics, events, and processes. This means that we lack a conceptual apparatus suitable for talking about the mind in general terms; we just have our specific commonsense talk of beliefs and desires and sensations. We can’t classify these as “attributes”, “properties”, etc. We therefore seem stranded in conceptual limbo, unable to produce what we need. This makes it difficult to formulate philosophical issues about the mind in an accurate and illuminating way.[3]

[1] I have always felt a guilty intellectual pang when availing myself of these common philosophical habits. But how else was I to talk?

[2] Intimations of Ryle and Wittgenstein (and no doubt others) would not be amiss here: the idea that the mind must be understood via the same (onto)logical structure as we understand physical objects (of ordinary sorts) is a well-known target of theirs. It is notable how little they have to say, however, of a positive nature about the true character of the mental if the subject-attribute model is discarded. I am more inclined to sense cognitive limitation (mystery) here. Everything is not open to view.

[3] I have not discussed the question of the subject of supposed mental attributes. Clearly, if there are no mental attributes of the kind usually alleged, then there is no need for a subject of those attributes. To what extent has the hunt for a mental subject been motivated by the conception of such attributes? If that conception is cast aside, then there is not this reason for positing a mental subject, though there may be other reasons.


Semantical Considerations on Mental Language


I am going to expound a view of the mind (and hence the mind-body problem) that I am disinclined to accept. Still, the view deserves careful articulation and may contain elements of truth; it should be added to the menu of options. And it is agreeably radical. It begins with considerations on the workings of mental language and takes off from a recognizable tradition. The tradition I have in mind contends that various bits of language have a misleading form: they seem one way but are really another way. Logical (semantic) form does not mirror grammatical (syntactic) form; and we tend to be mesmerized by the latter and oblivious to the former. We have to fight our tendency to take surface form too seriously. Thus: quantifier words are not singular terms but second-order functions (Frege); the word “exists” is not an ordinary predicate but concerns a propositional function (Russell); definite descriptions are not referring expressions but should be analyzed by means of quantifiers (Russell); the word “true” does not denote a property but acts as a redundant device to avoid repetition; the word “good” does not denote a simple quality but is used to express emotion (Ayer) or to make a prescription (Hare); the words “I promise” are not used to make a statement but to perform an action (Austin); words for colors don’t stand for intrinsic qualities of objects but for propensities to elicit experiences (many people); arithmetical sentences look factual but are really fictional (mathematical fictionalists); mental words seem to denote inner states but are really ways of summarizing overt behavior (Ryle); words in general seem to us to be names but they are really of many different kinds (Wittgenstein). The broad thrust of these positions is that language is not homogeneous and what seems like the name of an attribute may not be. 
At the extreme such a view insists that language is never denotational and properties are a myth; all there is to language is use, inference, grammar, linguistic practice. The view I am going to discuss claims that mental language falls into this category: it doesn’t consist of symbols standing for properties or attributes aptly called “mental” (simple, sui generis, distinct and apart) but rather has a different kind of semantics altogether (what this is we will come to). In particular, it borrows from the redundancy theory of truth and the non-cognitivist view of ethics. It treats mental language as strictly dispensable and non-denoting (“non-factual”). It is not about anything real, though it has practical value and is not entirely fictional. The brain comes into it.

It will be best if I just plunge in and come right out with it. Mental language is strictly redundant because the brain contains all that is necessary to record the facts in question: once you have stated all the facts about the brain, you have said all there is to say about the mind. We must immediately add that facts about the brain are not limited to currently known facts, or even conjectured or imagined facts. The basic idea is that there is no further substance over and above the brain whose distinctive properties constitute the mind; there is just the brain and its properties. There are no additional mental facts. Once you know all about the brain you know all about the mind. This knowledge may or may not include concepts we currently apply to the mind. Similarly, there are no facts about truth over and above ordinary facts: there is no more to a proposition being true than what is contained in the proposition—you don’t add anything to a proposition by saying it is true. In principle, the word “true” is eliminable; we use it now because of certain limitations on our knowledge (as in “Whatever the pope says is true”). We don’t as things stand know much about the brain, especially what is going on in it at any given moment, so we resort to mental language to fill the gap; but really, there is nothing happening in a person’s mind other than what is going on in their brain. How could there be? The brain is all there is mind-wise; there is no semi-detached mental substance. Accordingly, our usual talk of mind is strictly redundant, though practically necessary; it is really an indirect way of alluding to the brain. Instead of saying someone has a certain brain state right now, we say they have a belief or desire; but there is nothing going on except the brain state, which is hidden from us. 
Our mental words don’t denote or describe this brain state, though they may be said to allude to it, so they don’t introduce any real property that people possess—as the word “true” denotes no real property of things. We may be under the illusion that these words denote real properties, but careful analysis reveals that they do not. There is no additional fact for them to express.

So, how should we interpret speech acts containing mental words?  We might venture an expressivist semantics for mental language analogous to an expressivist view of truth language: mental talk expresses our attitudes towards people and animals without attributing any properties to them, as our truth talk merely expresses our attitudes of commendation or fellow feeling. Less drastically, we might follow Ryle in thinking of mental discourse as the issuing of “inference tickets” conceived as permissions to draw various inferences about the individual so described. In the terms of more recent philosophy, we might speak of “inferential semantics” as opposed to “truth-conditional semantics”: these words aren’t about anything (objects, properties) but they enable us to make predictions and offer explanations. I will propose something more novel and geared to the particular case we are considering: what I will call “correlational semantics” as opposed to “denotational semantics”. Correlated with a given mental state (so called) we have a brain state, which is also correlated with a use of a mental word: if the word applies, then there is a corresponding brain state correlated with it. The word does not denote the brain state but its existence is required for the truth of an application of the word. A correlational semantics assigns this brain state to the word, not as its denotation but as an associated entity (we might include it in the word’s connotation).[1] Thus, there is a firm reality invoked in the semantics, unlike in pure expressivist theories, but it isn’t supposed to be part of what the word means. For example, the word “pain” applies to an individual just if that individual has a certain kind of correlated brain state (as it might be, C-fiber firing). The semantics does not assign the property of pain to the word “pain”, for there is no such property (like the putative truth property). 
We use the word because we don’t know enough about the individual’s brain to make a more informed statement, and we have practical aims to fulfill, but there is no real property that we thereby denote. There is no fact over and above the brain state, but we talk as if there is for purposes of convenience and practicality (like with truth). Mental words express pseudo-properties (if you like that way of talking).
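The proposed “correlational semantics” can be sketched formally (my notation, not the essay’s; B-pain is a hypothetical correlated brain-state type, with C-fiber firing as the traditional stand-in):

```latex
% Denotational clause (rejected here for mental predicates):
%   ``pain'' is true of x  iff  x instantiates the property pain.

% Correlational clause (sketch): let B_{\mathrm{pain}} be the brain-state
% type correlated with correct applications of ``pain'':
\text{``pain'' is true of } x \iff x \text{ is in a state of type } B_{\mathrm{pain}}
```

On this sketch the brain state functions as a truth-maker associated with the word—part of its connotation, in the essay’s terms—without being assigned as the word’s denotation or as any part of what it means.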

You might argue that pain and truth differ in a crucial respect, namely that we have an impression of a mental property in the pain case but we have no impression of a truth property. We just have the predicate “true” (and the concept) but we have more than the predicate “pain” (and the concept)—we have the feeling of the property. No doubt there is something right in this, but how much does it prove? Does it prove that there is a distinct property of pain, analogous to a physical property? Not obviously, and the type of theorist I am envisaging will not give in so easily—what if the feeling in question is illusory? I doubt that Russell would give up his theory of existence just because someone asserted (however correctly) that he had an impression of existence as a first-order quality, or that Ayer would throw in the towel when someone objected that he had an impression of goodness as a primitive objective quality. In any case, I am not trying to defend the approach I am describing against all objections; I am just trying to spell out what a coherent view of this type would look like. No doubt such a view, radical as it is, would face the usual philosophical argy-bargy. The intuitive idea powering the correlational-redundancy theory is simply that mental language may not be correctly modeled on other types of language, especially physical language; it may have a type of semantics all its own, contrary to appearances. Surely the brain plays a pivotal role in fixing the mind, and this ought to show up in the semantics. Compare: surely the properties of possible worlds (assuming they exist) play a role in fixing modal facts, and this should show up in modal semantics—hence possible worlds semantics. The realities should shape the semantics, and the brain is as real as it gets. In the end, the mind reduces to the brain (possibly under novel concepts) and we want to reflect this in our theory of mental language. 
Thus, we get a kind of semi-fictionalist redundancy theory of the mind joined to a correlational semantics of our current mental discourse. The same kind of theoretical structure can be applied to ethics: moral words don’t denote real properties (according to non-cognitivism) yet they have an expressive use and can be given a correlational semantics of linked non-moral properties (the descriptive properties on which they supervene). True, this kind of structure is unfamiliar in the semantical tradition (while borrowing from it), and is moderately complex, but it does have some reasonable motivations and precedents. Isn’t it likely that the grammatical structures of our language, themselves limited and regimented and uniform, conceal a good deal of semantic variety that takes some effort to excavate? Mental language, in particular, is constructed from linguistic materials originally employed for other purposes (chiefly physical description), and there is no presumption that the ontology and epistemology of the mind will be subsumable under this format. The hiddenness of the brain, along with its immense complexity, must surely shape the way we talk about the mind, as much by its conspicuous absence as its presence. It would be different if the brain’s workings were open to view and easily discerned—what would our mental discourse look like then? Mental language has evolved as a makeshift compromise, largely practical in function, not as ideal science (it leaves us enormously ignorant of other minds, and of our own). Semantics reflects epistemology. Probably our mental language, and its semantics, will change with increasing knowledge of the brain. Eventually, a respectably denotational semantics will come to apply—or so it will be said.
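The modal precedent can be made vivid with the standard possible-worlds truth clause (textbook Kripke semantics, given here only as a comparison case):

```latex
% Necessity at a world w is fixed by truth at all worlds accessible from w:
\Box\varphi \text{ is true at } w
\iff
\text{for every world } w' \text{ such that } wRw',\ \varphi \text{ is true at } w'
```

The little word “must” gets its semantics from an unobvious domain of entities (worlds); analogously, on the view under discussion, mental words would get theirs from an unobvious domain of correlated brain states.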

A feature common to all the cases we have considered is that language has a tendency to suggest simplicity where complexity obtains. The simple subject-predicate sentence suggests the simple object-property model, with the property assimilated to familiar perceptual properties of things. Everything gets compared to perceived color and shape. But it turns out that things are always more complicated—even color and shape. The world is complex and multifarious, deep and hidden. Existence is an abstract construction from propositional functions (and it seemed so simple!). Definite descriptions are really quantified propositions with a uniqueness clause tucked in, not simple referring expressions. The truth predicate is a strange disappearing device for avoiding repetition, not the name of a property. Color is some kind of hard-to-pin-down propensity or disposition to cause experience, not a categorical property of objects. Goodness is not a simple unanalyzable quality, but a complicated practice of emotional reaction (allegedly). The little word “must” denotes a huge collection of complex entities called possible worlds. Words like “belief” and “desire” turn out to express complicated arrangements of brain parts and behavior, not simple qualities of consciousness. Inductively, we should not be surprised when the simple object-property structure turns out to be inadequate to the facts. Practicality favors brevity and simplicity, but philosophical understanding may need more capacious schemes. The brain needs to be brought in somehow, but not in the simple way proposed by classic property-identity theories. It took a while to find a semantics for modal language; no doubt the same is true for mental language. It is striking how little progress has been made in this direction. We might need a completely new way of doing semantics in order to represent mental language adequately.[2]
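For the record, the Russellian analysis alluded to here, in its standard formulation: “The F is G” is a quantified proposition with a uniqueness clause tucked in, not a subject-predicate sentence about a referent:

```latex
\text{``The } F \text{ is } G\text{''}
\;\equiv\;
\exists x\,\bigl(\,
  \underbrace{Fx}_{\text{existence}}
  \;\land\;
  \underbrace{\forall y\,(Fy \rightarrow y = x)}_{\text{uniqueness}}
  \;\land\;
  \underbrace{Gx}_{\text{predication}}
\,\bigr)
```

The simple grammatical subject “the F” disappears on analysis into quantifiers and connectives—exactly the pattern of hidden complexity the paragraph above describes.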

 

[1] See my “On Denoting and Connoting” on the proper use of the word “connote”.

[2] It could turn out that what we call mental language is semantically heterogeneous within its own domain. Maybe sensation words and propositional attitude words function differently, not to mention words for emotions or character traits. The unconscious might differ semantically from the conscious, being closer to the brain conceptually. Folk psychology is more likely to be ripe for elimination than scientific psychology (or vice versa). Psychological semantics is in its infancy. Whether this would help with the mind-body problem remains to be seen.


Philosophy at the Dentist


Yesterday I was having a new crown fitted at the dentist (king at last!). It is not a pleasant procedure, though I’ve had worse. Apart from the discomfort, it is boring. I decided to try an experiment: to see whether I could think about philosophy while my mouth was being drilled and scraped. Surprisingly, I found it possible. I thought about the paper I had been writing that morning (“Predicating and Necessity”). This proved helpful in enduring what my mouth was going through. I mentioned it to the surgeon, who said it must be very useful to have that ability; I agreed. Oddly enough, I don’t find that other topics of thought have the same effect: somehow when I think about philosophy my brain goes into a special state of removal from the world around me. Is this because I have done it so often over a whole lifetime that the tracks are laid down in my brain? I wonder if other people have the same experience. If so, it could be beneficial as a form of therapy or meditation or simply dentist toleration.[1]

[1] I also had an interesting discussion with my hygienist about why only the soles of the feet and the roof of the mouth are associated with tickling of a peculiarly intense kind—really not pleasant at all. I can think of no evolutionary or other explanation of this phenomenon. Do all people have it? Why just those areas? What about animals? It seems like a profound mystery of human physiology.


Predicating and Necessity


(Bear in mind Kripke’s Naming and Necessity when reading the following.) When a speaker uses a predicate (“man”, “cat”, “rose”, “square”, “red”, “runs”, “clever” etc.) he or she refers to a property or attribute: but how is this done? One answer is by means of ostension: the speaker points to an instance of the property, being momentarily acquainted with it (as Russell would say). Another answer is that the speaker has in mind a description of the property or attribute: “large-brained speaking biped”, “furry feline with sharp claws”, “nice smelling pretty flower”, “four-sided figure with equal angles”, “the color of British mail boxes”, and so on.[1] The first theory could be called the direct reference theory; the second theory could be called the description theory. The meaning of the predicate is given by the property it directly refers to, or by the descriptive concepts the speaker uses to identify that property.[2] There is descriptive mediation, or there is not. The proposition expressed either contains the reference directly and intrinsically, or it contains only the concepts used by the speaker to latch onto that reference. The description theory of predicate meaning looks plausible and powerful—it is hard to see how it could fail to be the case (we can use predicates and not be in a position to point to instances of the properties they denote). Yet the theory seems demonstrably false; and the arguments against it are quite obvious. First, speakers can make mistakes about the properties they refer to—for example, British mail boxes might not be red (this is an illusion they give off). Second, speakers may not have sufficient information to identify the property uniquely—many types of flowers can be described as pretty and nice smelling. Third, it is not generally analytic to couple a predicate with a description of its denotation—it may just be a contingent empirical fact. 
Fourth, a predicate “rigidly expresses” its associated property but a speaker’s descriptive beliefs about it are generally not thus rigid—in some possible worlds, cats don’t have sharp claws, though another species may. Fifth, at some point we will reach descriptions that have no definition in terms of other descriptions, so that the predicates used in the descriptions will not be explicable in terms of the description theory. Sixth, syntactically speaking, simple predicates are not complex in the way envisaged by the description theory—they are not pieces of shorthand, not disguised descriptions. Seventh, nothing like this actually passes through the mind of the average speaker: he or she just doesn’t bring to bear this sort of descriptive knowledge. Eighth, the whole model of descriptively mediated reference smacks of overgeneralization: definite descriptions work by singling something out by means of uniquely identifying description, but not every device of language works like a definite description—take proper names or demonstratives or pronouns or sentence connectives. Thus, the description theory of predicating runs into decisive refutation, despite its apparent attractions.

I have just run through Kripke’s arguments (and some others) against description theories of names for the case of predicates (common nouns, adjectives, verbs). I said nothing about ordinary proper names. So, those arguments have nothing specifically to do with names; they apply also to predicates. It isn’t that names constitute a special problem for a description theory of reference; predicates do too. The substance of Naming and Necessity could have been presented under the title Predicating and Necessity. Kripke himself extends the doctrines he puts forward to the case of “common names”, so he implicitly acknowledges (indeed asserts) that his critique applies also to this category of expression. But he could have proceeded by first mounting his critique for the case of predicates and then extending it to proper names. He could have begun with natural kind terms and moved on to the case of names of individuals. And, if the description theory doesn’t work for descriptive predicates, it is hardly going to work for non-descriptive names. Neither category of expression is to be understood as an abbreviation of a descriptive definition. On the positive front, there is nothing to prevent Kripke from offering his theory of initial baptism and chain of communication in the first instance for predicates, then extending it to proper names. Long ago the English language baptized squareness “square”, then the word was passed along from person to person in a reference-preserving chain—with no descriptive reference-fixing knowledge necessary. First, a causal theory of predicates; then, a causal theory of names. There is nothing distinctively name-oriented about this conception. The focus on names is entirely adventitious in these debates. In fact, names are a relatively marginal feature of natural languages; predicates are where the real work gets done.
For some reason, proper names became the focus of discussion, going back to Mill and Russell (not so much Frege), and Kripke is simply following in this tradition; but it gives a skewed impression of what the real issues are. If Kripke had not extended his discussion to the case of common nouns, he would clearly have distorted the import of his arguments—as if they concerned only the very limited part of language comprised of proper names (of people and places). In fact, they apply to a wide region of language—not only common nouns but also predicates in general. The title should have been Naming, Predicating, and Necessity (or better, Names, Predicates, and Necessity, since Kripke’s book is not so much about naming as an action as about names as a semantic category).[3]

Finally, names and necessity: does the former have anything particularly to do with the latter? It does not. This is by now old hat: names have no more to do with necessity (epistemic or metaphysical or analytic) than other classes of expression—definite descriptions, indexical words, predicates, connectives, quantifiers. In particular, definite descriptions can be as rigid as names (“the successor of 3”). De re necessity has nothing to do with names as such, being quite independent of language. We can express the necessity of water being H2O either by using “Water is H2O”, where the terms are used as names (singular terms), or by using “Anything that is water is H2O”, where the terms are used predicatively. We can either use “Heat is molecular motion” or “If something is hot, it has high molecular motion”. Both formulations express the fact that a certain de re necessity obtains. When Kripke begins his lectures by saying that he hopes people see some connection between the two topics of the title, he is being misleading (and has misled many readers). I suppose there is “some connection” (everything is connected to everything else somehow), but there is no special proprietary connection between names and necessity—whether the necessity is analytic or synthetic, de dicto or de re. It is not even true that names form necessarily true identity statements but descriptions never do, since descriptions can be rigid (as well as occur with wide scope). When I say “Nothing can be red and green at the same time” there is no name in sight, only general color predicates, yet my statement is as necessary as can be; so yes, there is “some connection” between predicates and necessity! As there is “some connection” between connectives and necessity (look at the theorems of propositional logic).
If we read Naming and Necessity as claiming that names are uniquely not open to a description theory and uniquely connected to necessity, then we read a lot of error into that classic text; but I don’t think anything important hangs on those claims, mistaken as they are. It can all be rephrased to avoid such mistakes. Surely Kripke would agree.[4]

[1] A variant of this approach, Russellian in spirit, would opt for descriptions that refer only to sense-data, as in “the animal species that causes such-and-such sense-data” or “the property that seems thus-and-so”. There is something empiricist about the traditional enthusiasm for description theories.

[2] The famous deeds version of the description theory of names of people finds a parallel in the “conspicuous instance” description theory of predicates, as in “the shape of the earth” for “spherical” or “the kind of animal that my pet Tabby belongs to” for “cat”. This is the kind of knowledge that ordinary speakers may be expected to possess.

[3] We could also choose to treat names as predicates: the name “John Smith” is parsed as “a John Smith” or “he John Smiths”. Then we would have assimilated names to predicates, treating predicates as basic in logical form.

[4] It is puzzling to me why Kripke falls into these incautious formulations, or fails to warn against natural but incorrect interpretations of his words and procedure. It is difficult to believe he didn’t see the points I am making, which are hardly earth-shattering. Nor was he much of a respecter of intellectual tradition. Let me also add that his classic discussion of the identity theory of mind and brain does not depend on the assumption that “pain” is a name as opposed to a predicate.
