Semantical Considerations on Mental Language

I am going to expound a view of the mind (and hence the mind-body problem) that I am disinclined to accept. Still, the view deserves careful articulation and may contain elements of truth; it should be added to the menu of options. And it is agreeably radical. It begins with considerations on the workings of mental language and takes off from a recognizable tradition. The tradition I have in mind contends that various bits of language have a misleading form: they seem one way but are really another way. Logical (semantic) form does not mirror grammatical (syntactic) form; and we tend to be mesmerized by the latter and oblivious to the former. We have to fight our tendency to take surface form too seriously. Thus: quantifier words are not singular terms but second-order functions (Frege); the word “exists” is not an ordinary predicate but concerns a propositional function (Russell); definite descriptions are not referring expressions but should be analyzed by means of quantifiers (Russell); the word “true” does not denote a property but acts as a redundant device to avoid repetition (Ramsey); the word “good” does not denote a simple quality but is used to express emotion (Ayer) or to make a prescription (Hare); the words “I promise” are not used to make a statement but to perform an action (Austin); words for colors don’t stand for intrinsic qualities of objects but for propensities to elicit experiences (many people); arithmetical sentences look factual but are really fictional (mathematical fictionalists); mental words seem to denote inner states but are really ways of summarizing overt behavior (Ryle); words in general seem to us to be names but they are really of many different kinds (Wittgenstein). The broad thrust of these positions is that language is not homogeneous and what seems like the name of an attribute may not be. 
At the extreme such a view insists that language is never denotational and properties are a myth; all there is to language is use, inference, grammar, linguistic practice. The view I am going to discuss claims that mental language falls into this category: it doesn’t consist of symbols standing for properties or attributes aptly called “mental” (simple, sui generis, distinct and apart) but rather has a different kind of semantics altogether (what this is we will come to). In particular, it borrows from the redundancy theory of truth and the non-cognitivist view of ethics. It treats mental language as strictly dispensable and non-denoting (“non-factual”). It is not about anything real, though it has practical value and is not entirely fictional. The brain comes into it.

It will be best if I just plunge in and come right out with it. Mental language is strictly redundant because the brain contains all that is necessary to record the facts in question: once you have stated all the facts about the brain, you have said all there is to say about the mind. We must immediately add that facts about the brain are not limited to currently known facts, or even conjectured or imagined facts. The basic idea is that there is no further substance over and above the brain whose distinctive properties constitute the mind; there is just the brain and its properties. There are no additional mental facts. Once you know all about the brain you know all about the mind. This knowledge may or may not include concepts we currently apply to the mind. Similarly, there are no facts about truth over and above ordinary facts: there is no more to a proposition being true than what is contained in the proposition—you don’t add anything to a proposition by saying it is true. In principle, the word “true” is eliminable; we use it now because of certain limitations on our knowledge (as in “Whatever the pope says is true”). We don’t as things stand know much about the brain, especially what is going on in it at any given moment, so we resort to mental language to fill the gap; but really, there is nothing happening in a person’s mind other than what is going on in their brain. How could there be? The brain is all there is mind-wise; there is no semi-detached mental substance. Accordingly, our usual talk of mind is strictly redundant, though practically necessary; it is really an indirect way of alluding to the brain. Instead of saying someone has a certain brain state right now, we say they have a belief or desire; but there is nothing going on except the brain state, which is hidden from us. 
Our mental words don’t denote or describe this brain state, though they may be said to allude to it, so they don’t introduce any real property that people possess—as the word “true” denotes no real property of things. We may be under the illusion that these words denote real properties, but careful analysis reveals that they do not. There is no additional fact for them to express.

So, how should we interpret speech acts containing mental words?  We might venture an expressivist semantics for mental language analogous to an expressivist view of truth language: mental talk expresses our attitudes towards people and animals without attributing any properties to them, as our truth talk merely expresses our attitudes of commendation or fellow feeling. Less drastically, we might follow Ryle in thinking of mental discourse as the issuing of “inference tickets” conceived as permissions to draw various inferences about the individual so described. In the terms of more recent philosophy, we might speak of “inferential semantics” as opposed to “truth-conditional semantics”: these words aren’t about anything (objects, properties) but they enable us to make predictions and offer explanations. I will propose something more novel and geared to the particular case we are considering: what I will call “correlational semantics” as opposed to “denotational semantics”. Correlated with a given mental state (so called) we have a brain state, which is also correlated with a use of a mental word: if the word applies, then there is a corresponding brain state correlated with it. The word does not denote the brain state but its existence is required for the truth of an application of the word. A correlational semantics assigns this brain state to the word, not as its denotation but as an associated entity (we might include it in the word’s connotation).[1] Thus, there is a firm reality invoked in the semantics, unlike in pure expressivist theories, but it isn’t supposed to be part of what the word means. For example, the word “pain” applies to an individual just if that individual has a certain kind of correlated brain state (as it might be, C-fiber firing). The semantics does not assign the property of pain to the word “pain”, for there is no such property (like the putative truth property). 
We use the word because we don’t know enough about the individual’s brain to make a more informed statement, and we have practical aims to fulfill, but there is no real property that we thereby denote. There is no fact over and above the brain state, but we talk as if there is for purposes of convenience and practicality (like with truth). Mental words express pseudo-properties (if you like that way of talking).

You might argue that pain and truth differ in a crucial respect, namely that we have an impression of a mental property in the pain case but we have no impression of a truth property. We just have the predicate “true” (and the concept) but we have more than the predicate “pain” (and the concept)—we have the feeling of the property. No doubt there is something right in this, but how much does it prove? Does it prove that there is a distinct property of pain, analogous to a physical property? Not obviously, and the type of theorist I am envisaging will not give in so easily—what if the feeling in question is illusory? I doubt that Russell would give up his theory of existence just because someone asserted (however correctly) that he had an impression of existence as a first-order quality, or that Ayer would throw in the towel when someone objected that he had an impression of goodness as a primitive objective quality. In any case, I am not trying to defend the approach I am describing against all objections; I am just trying to spell out what a coherent view of this type would look like. No doubt such a view, radical as it is, would face the usual philosophical argy-bargy. The intuitive idea powering the correlational-redundancy theory is simply that mental language may not be correctly modeled on other types of language, especially physical language; it may have a type of semantics all its own, contrary to appearances. Surely the brain plays a pivotal role in fixing the mind, and this ought to show up in the semantics. Compare: surely the properties of possible worlds (assuming they exist) play a role in fixing modal facts, and this should show up in modal semantics—hence possible worlds semantics. The realities should shape the semantics, and the brain is as real as it gets. In the end, the mind reduces to the brain (possibly under novel concepts) and we want to reflect this in our theory of mental language. 
Thus, we get a kind of semi-fictionalist redundancy theory of the mind joined to a correlational semantics of our current mental discourse. The same kind of theoretical structure can be applied to ethics: moral words don’t denote real properties (according to non-cognitivism) yet they have an expressive use and can be treated to a correlational semantics of linked non-moral properties (the descriptive properties on which they supervene). True, this kind of structure is unfamiliar in the semantical tradition (while borrowing from it), and is moderately complex, but it does have some reasonable motivations and precedents. Isn’t it highly likely that the grammatical structures of our language, themselves limited and regimented and uniform, might conceal a good deal of semantic variety that takes some effort to excavate? Mental language, in particular, is constructed from linguistic materials originally employed for other purposes (chiefly physical description), and there is no presumption that the ontology and epistemology of the mind will be subsumable under this format. The hiddenness of the brain, along with its immense complexity, must surely shape the way we talk about the mind, as much by its conspicuous absence as its presence. It would be different if the brain’s workings were open to view and easily discerned—what would our mental discourse look like then? Mental language has evolved as a makeshift compromise, largely practical in function, not as ideal science (it leaves us enormously ignorant of other minds, and of our own). Semantics reflects epistemology. Probably our mental language, and its semantics, will change with increasing knowledge of the brain. Eventually, a respectably denotational semantics will come to apply—or so it will be said.

A feature common to all the cases we have considered is that language has a tendency to suggest simplicity where complexity obtains. The simple subject-predicate sentence suggests the simple object-property model, with the property assimilated to familiar perceptual properties of things. Everything gets compared to perceived color and shape. But it turns out that things are always more complicated—even color and shape. The world is complex and multifarious, deep and hidden. Existence is an abstract construction from propositional functions (and it seemed so simple!). Definite descriptions are really quantified propositions with a uniqueness clause tucked in, not simple referring expressions. The truth predicate is a strange disappearing device for avoiding repetition, not the name of a property. Color is some kind of hard-to-pin-down propensity or disposition to cause experience, not a categorical property of objects. Goodness is not a simple unanalyzable quality, but a complicated practice of emotional reaction (allegedly). The little word “must” denotes a huge collection of complex entities called possible worlds. Words like “belief” and “desire” turn out to express complicated arrangements of brain parts and behavior, not simple qualities of consciousness. Inductively, we should not be surprised when the simple object-property structure turns out to be inadequate to the facts. Practicality favors brevity and simplicity, but philosophical understanding may need more capacious schemes. The brain needs to be brought in somehow, but not in the simple way proposed by classic property-identity theories. It took a while to find a semantics for modal language; no doubt the same is true for mental language. It is striking how little progress has been made in this direction. We might need a completely new way of doing semantics in order to represent mental language adequately.[2]


[1] See my “On Denoting and Connoting” on the proper use of the word “connote”.

[2] It could turn out that what we call mental language is semantically heterogeneous within its own domain. Maybe sensation words and propositional attitude words function differently, not to mention words for emotions or character traits. The unconscious might differ semantically from the conscious, being closer to the brain conceptually. Folk psychology is more likely to be ripe for elimination than scientific psychology (or vice versa). Psychological semantics is in its infancy. Whether this would help with the mind-body problem remains to be seen.

  1. Free Logic says:

    Isn’t the view you are “disinclined to accept” called eliminative materialism? The Churchlands were advancing it about 40 years ago.

  2. Oliver S. says:

    If I understand you correctly, your “correlational semantics” is a physicalist-realist truthmaking semantics, according to which all nonphysical, particularly psychological “higher-level” truths are made true by (nothing but) physical entities; and your “denotational semantics” is an antirealist reference semantics, according to which all nonphysical, particularly psychological “higher-level” statements are about or refer to nonreal/fictional (and thus nonexistent) nonphysical items.

    Therefore, given the dissociation of truthmaking and reference, what makes true nonphysical statements true isn’t part of what they are about; and what they are about isn’t part of what makes them true (since nonexistent things cannot function as truthmakers). Nonphysical truths are about nonexistent nonphysical items, but they are grounded in existent physical items that make them true.

    • Colin McGinn says:

      That states it pretty accurately, with one caveat: I don’t suppose that the brain state must be “physical” (a term I generally deplore). I think the brain can be described in many ways and some of these may lie outside what we think of as physics or current physiology.

