Is Solipsism Logically Possible?

It has been commonly assumed that solipsism is logically or metaphysically possible. I could exist without anything else existing. There are possible worlds in which I exist and nothing else does. I can imagine myself completely alone. Seductive as such thoughts may appear, I think they are mistaken; they arise from a confusion of metaphysical and epistemic possibility.

            Suppose someone claims that this table in front of me could exist in splendid isolation, the sole occupant of an ontologically impoverished world—no chairs, planets, people, birds, etc. Well, that seems true—those absences are logically possible. But what about the piece of wood the table is made of? This table is made of that piece of wood in every possible world in which it exists, so the table cannot exist without the piece of wood. But that piece of wood came from a particular tree—it could not have come from any other tree. So this table can only exist in a world that also contains the tree in question, since it was a part of that tree. The table and the tree are distinct existences, so the table cannot exist without something else existing—the tree that donated the part that composes it. The table is necessarily composed of that piece of wood and that piece of wood necessarily derives from a particular tree: there are necessities linking the table with another object, viz. the tree. Thus “solipsism” with respect to this table is not logically possible.

            Now consider a person, say me. I could not exist without my parents existing, since no person could be this individual and not be born to my parents. This is the necessity of origin as applied to persons. In any world in which I exist my parents exist; more precisely, in any world in which I exist a particular sperm and egg exist (and they can exist only because of the human organisms that produced them). So my existence implies the existence of my parents. Therefore solipsism is not logically possible. But the existential ramifications go further: my parents cannot exist in a world in which their parents don’t exist. And so on back down the ancestral line, till we get to the origin of life: no later organism can exist without the procreative organisms in its ancestral line. Every organism has an origin, and that origin is essential to its identity. But it goes even further, because the very first organism must have had its own inorganic origin, presumably in a clump of molecules, and that origin is essential to it—it could not exist without that clump existing. And that clump of molecules also had an origin, possibly in element-forming stars; so it couldn’t exist without the physical entities that gave rise to it. And those physical entities go back to the big bang, originating in some sort of super-hot plasma. So I (this person) could not exist unless the whole chain existed, up to and including certain components of the big bang. Colin McGinn could not exist without millions and millions of other things existing, granted the necessity of origin. I am linked by hard necessity to an enormous sequence of distinct particulars. I couldn’t be me without them.
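The chain of necessities just traced can be put schematically; here $\Box$ is metaphysical necessity and $E(x)$ abbreviates “$x$ exists”. The notation is mine, offered only as a rough sketch of the argument’s form:

```latex
\Box\big(E(\text{me}) \rightarrow E(\text{my parents})\big),\quad
\Box\big(E(\text{my parents}) \rightarrow E(\text{their parents})\big),\;\ldots,\quad
\Box\big(E(o_{1}) \rightarrow E(\text{big-bang components})\big)
```

Since necessitated conditionals chain together, it follows that $\Box\big(E(\text{me}) \rightarrow E(x)\big)$ for every link $x$ in the sequence, and hence that there is no possible world containing me alone.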

            Of course, there could be someone just like me that exists in the absence of my specific generative sequence—though he too will necessarily carry his own generative sequence. Perhaps in some remote possible world this counterpart of mine arises not by procreation but by instantaneous generation—say, by lightning rearranging the molecules in a swamp. But even then that individual would not be able to exist without his particular origins—his collection of swampy molecules and that magical bolt of lightning. Solipsism will not be logically possible even for him. In any case, the question is irrelevant to whether I could exist without my generative sequence: my counterparts are not identical to me. All we are claiming is that solipsism is logically impossible so far as I am concerned—this specific human being. It is my existence that logically (metaphysically) requires the existence of other things—lots of other things. I (Colin McGinn) could never exist in another possible world and peer out over it to find nothing but myself (at least throughout history–I might exist without any other organism existing at the same time as me, my parents both being dead). The same applies to any person with the kind of origin I have, i.e. all human beings.

Why do we feel resistance to these crushingly banal points? I think it is in part because we confuse a metaphysical question with an epistemological question; and we cannot answer the epistemological question by appealing to our answer to the metaphysical question. The epistemological question is whether I can now prove that solipsism is false: can I establish that I am not alone in the universe? In particular, can I establish that my parents really exist (or existed)? Maybe they are just figments of my imagination; maybe I was conceived by lightning and swamp. I cannot be certain that I was not. I cannot even be certain that I have a body. I can establish that I think and exist, but I cannot get beyond that in the quest for certainty. So the existence of my parents is not an epistemic necessity. If I could prove that I am a member of a particular biological species, then maybe I could prove that I must have arisen by sexual reproduction from other members of that species: but the skeptic is not going to let that by–she will demand that I demonstrate that I am a particular kind of organism arising by sexual reproduction. And I will not be able to meet that challenge, since there are conceivable alternatives to it (the hand of God, swamp and lightning, the dream hypothesis). Maybe I just imagine that I am a biological entity with parents and an evolutionary history. So we cannot disprove solipsism in the epistemological sense: for all I know, there is nothing in the universe apart from me.

But this is perfectly compatible with the thesis that it is not in fact logically possible for me to exist without other entities existing along with me: for if I am a biological entity born by procreation, then my existence logically implies the existence of many other things. It is just that I cannot prove to the skeptic’s satisfaction (or my own) that that is what I am. I might come to the conclusion that I had no parents after all, but that will not make it the case that there are metaphysically possible worlds in which I had no parents—this is a matter of the facts about me, not my beliefs about the facts. Thus solipsism is an epistemic possibility but not a metaphysical possibility. It is just like the table being both necessarily made of wood (metaphysical) and also being possibly not made of wood (epistemic). Given that I arose from biological parents, I necessarily did; but it is an epistemic possibility that I did not so arise—I could be mistaken about this.
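The distinction being drawn can be compressed into a single line, writing $\Diamond_{e}$ for epistemic possibility, $\Diamond_{m}$ and $\Box_{m}$ for metaphysical possibility and necessity, and $S$ for the solipsist hypothesis (my notation, offered only as a gloss):

```latex
\Diamond_{e}\,S \;\wedge\; \neg\Diamond_{m}\,S
\qquad\text{just as}\qquad
\Box_{m}\,\mathit{Wooden}(t) \;\wedge\; \Diamond_{e}\,\neg\mathit{Wooden}(t)
```

There is no contradiction here, because the two modal operators answer to different questions: what the facts could have been, and what I can rule out.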

            It would be nice to disprove solipsism, but it isn’t insignificant to show that it is not in fact logically possible, given the actual nature of persons. Persons are the kind of thing that implies the existence of other things (granted that we are right in our commonsense view of what a person is). In this they resemble many ordinary biological and physical entities, which also have non-contingent origins. We may feel ourselves to be removed from the world that surrounds us, as if we are self-standing individuals, ontologically autonomous—as if our essential nature could subsist alone in the world. But that is a mistake—we are more dependent on other things than we are prone to suppose. We are more enmeshed in what lies outside of us than we imagine. We suffer from illusions of transcendence and autonomy. We are not free-floating egos that owe no allegiance to anything else; we are essentially relational beings, our identity bound up in our history. We cannot be metaphysically detached from our origins, proximate and remote.

            The same point applies to our mental states: they too cannot be separated from other things. Could this pain exist in complete isolation? That may seem like a logical possibility, but on reflection it is not: first, this pain’s identity depends on its bearer—it could not be this pain unless it had that bearer; and second, the identity of the bearer depends on the kind of history it has. So this pain could not exist without the generative sequence that gave rise to its bearer, a particular living organism; and that depends upon billions of years of history, going back to the big bang (and before). There is no possible world in which this pain exists and certain remote physical occurrences don’t exist. There are necessary links connecting present mental states with remote physical occurrences—from the joining of a particular sperm and egg, to the origin of mammals, to the production of chemical elements. My pains can’t exist in a world without me (you can’t have my pains), but I can’t exist in a world without my parents, and my parents can’t exist in a world without their remote primate ancestors, and these ancestors too had their own necessary origins. The pains that now occur on planet Earth (those pains) could not exist in a possible world without an elaborate biological and physical history that coincides with their actual history.

            It is an interesting fact that we recognize these necessities. On the one hand, we have quite strongly Cartesian intuitions about the person and the mind, which is why dualism and solipsism appeal to us—these seem like logical possibilities. But on the other hand, we are willing to accept that the person and mind are tied to other entities with bonds of necessity—as with the necessity of personal origin. We recognize that the identity of a person cannot be radically detached from all extrinsic and bodily things—parents, sperms, and eggs. These are anti-Cartesian intuitions insofar as they dispute the self-subsistence of the self.  [1] We are thus both Cartesian and anti-Cartesian in our modal instincts about persons. It is as if we know quite well that the self cannot be a self-subsistent non-material substance without logical ties to anything beyond itself, even though in certain moods we fall prey to such thoughts. We know that our essence implies the existence of other things—as demonstrated by the necessity of origin—and therefore solipsism is not in fact logically possible. We are modally ambivalent about self and mind, but not confused.

 

Colin McGinn

  [1] Kripke mentions the anti-Cartesian consequences of the necessity of origin at the very end of Naming and Necessity (footnote 77, p. 155). What is surprising is that neither he nor anyone else seems to have noticed the consequences for solipsism (including myself, and I published an article on the necessity of origin in 1976). But it is really just a fairly obvious deduction from the necessity of origin (originally proposed by Sprigge in 1962, as Kripke notes).


A Puzzle About Concepts

When concepts enter into thoughts they occur attributively: that is, they are attributed to some designated particular or particulars. They are constituents of thoughts, and thoughts are true or false according to whether the concepts attributed hold of the things to which they are attributed. They occur in a propositional context, not in isolation. One might think that they can occur in no other way, but we also have the locution “the concept C”: here we appear to be designating the concept C not attributing it. It occurs in thought in a non-attributive manner, attached to no particular. The distinction is analogous to the distinction between the use and mention of words. Words are usually used in utterances of sentences, but we can also mention them, typically by employing quotation. We can say, “Consider the word ‘red’” as well as “The car is red”. Just as we have a use-mention distinction for words, so we have an attribution-designation distinction for concepts. That is not surprising, given the close connection between words and concepts. And it allows us to ask which came first: is the distinction for words derivative on the distinction for concepts or vice versa? If there is a language of thought, we will want to explain the distinction for concepts in terms of the distinction for internal words: attributing concepts is using words internally and designating concepts is mentioning them internally (by some analogue of quotation).

            The puzzle I want to raise (but not solve) concerns the form in which concepts exist before entering our conscious thoughts. Let us allow that concepts are stored pre-consciously before they are employed in conscious thought: how then do they exist in the preconscious–in the mode of attribution or designation? Are they being used or mentioned? Neither alternative looks appealing, but nothing else suggests itself: hence the puzzle. They cannot be occurring attributively because that would imply that they (all of them) are constantly being attributed to things unconsciously. Surely we are not always thinking unconscious thoughts as a way to keep our concepts in existence. But it is also implausible to suppose that they are being mentally mentioned: we are not always taking them as objects of unconscious thought—putting them in mental quotation marks. We don’t store concepts in the preconscious in virtue of referring to them there. They exist pre-consciously but not by way of a mental act of quotation. Nor are they being constantly attributed. They are not present in either the mode of mention or the mode of use–yet they are present. Similarly, our vocabulary, stored in preconscious memory, does not exist either in the mode of use or mention: we are not constantly using the words in our vocabulary to make unconscious utterances, but neither do we store them in the form of quotation. They exist in some neutral mode, neither used nor mentioned.

            Thus concepts and words do not exist in the preconscious in either of the two ways they exist in thought or speech (use and mention). This third way has no counterpart in thought and speech, but it is presupposed by thought and speech. We extract concepts and words from the preconscious and either mention them or use them, but they are not already being mentioned or used in their preconscious state. They are… And here we draw a blank: we don’t have a perspicuous description of their mode of existence—and it is hard to form a conception of what this mode might be.  [1] We can say that concepts and words are stored in the preconscious (though what this amounts to is a longstanding problem in psychology), but this doesn’t tell us anything about how they are stored. My point is that the standard notions of use and mention (attribution and designation) don’t apply. Hence there is a puzzle about concepts (and words). We have no understanding of how concepts exist in the mind.

 

  [1] It is tempting to suppose that they exist as items on a list, but a list consists either of a sequence of used words or mentioned words. If I write a list of all the students in my class by inscribing their names, I am using their names to refer to them; but if I write a list of their names, I am mentioning the names (not the students). In the former case each inscription constitutes an act of reference to a student, though not in the latter case. Are we to suppose that words and concepts in their preconscious form are items on a list that refer to external things, or do they exist merely as mentioned symbols? Neither option looks attractive. (The semantics of lists could use further investigation.) And if words and concepts are stored in the form of lists, what is the principle of ordering for the list? In what sense does each item hold a place in a list?


Invisible Hands

Here is a simple experiment you can do at home. Hold your hand a foot in front of your face with the thumb turned towards you, so that you are seeing your hand in profile.  Have a look at it: it will look like a normal solid object in front of you. Now shift your gaze to fixate on an object about ten feet behind your hand, keeping your hand in the same position. You will find that your hand doubles in front of you (you may have to shift your hand slightly to get this effect): you see two hands not one. If you now close one eye you will find that the second hand disappears, only to reappear if you open both eyes. The reason for this effect is binocular disparity: each eye receives separate images of the hand, which are usually synthesized by the brain, but in this setup the two images are not synthesized—focusing on the distant object inhibits binocular synthesis. Why this is so is hard to explain—we don’t normally experience a doubling of nearer objects as we fixate on more distant objects—but it is a robust phenomenon. It works not only with hands, of course—books will produce the same doubling effect.

            From this simple experiment we can draw two conclusions, by no means trivial. First, there exists a perceptual unconscious: whenever we see anything there are two images in the mind derived from the two eyes. Unlike the image that results from binocular synthesis, these images are unconscious: you are seeing those two hands unconsciously whenever you see your one hand consciously. Every (binocular) visual perception involves processing a pair of visual representations corresponding to the image on each retina. The experiment you just performed merely brings what is normally unconscious to consciousness. Second, it is possible to do psychology based purely on introspective evidence, and indeed it is hard to see how the doubling of the image could be detected without the use of introspection. Of course, the effect can be checked inter-subjectively, but for each subject the crucial information is derived from introspection. There is nothing methodologically wrong with that.

            There is another effect that can be obtained from the experimental setup described. If you position your hand correctly, while fixated on a distant object, you will find that your hand, or a part of it, disappears from view: it becomes transparent or simply invisible. This is a disconcerting phenomenon, as your hand appears to melt away, losing solidity and opacity. It is not easy to describe exactly what happens: it is not as if your hand is suddenly removed from your visual field; rather, it is seen as (partly) invisible. But you cannot see an invisible object! The hand is registering visually but it has been rendered transparent. Clearly the hand is still in front of your eyes and is having its usual impact on your retinas, sending visual signals to the brain; but the result is not a seen hand but an unseen hand, or a seen un-hand. You stop seeing what you are “seeing”. Again, this strange perceptual state results from binocular disparity: your two eyes are getting a full view of the distant object, because the two different images capture the full reality of the stimulus, without a gap—but there is an interposed hand there that is also seen. Presumably a visual system could just accept that the hand is not blocking your view of the distant object, which it clearly is not, and accordingly produce an image of both near hand and distant unblocked object. But our visual system does not do that—instead it does away with the image of the hand. It decides not to see the hand. Why? Because it operates under the assumption that if a solid opaque object is between your eyes and some distant object, occluding that object, then it must be the case that some portion of the distant object is not seen. If there is an occluding proximal object, then some of the distal object must be occluded. 
But if that distal object is not in fact occluded, because of binocular disparity, then the only conclusion the brain will accept is that there is no occluding proximal object—thus it “disappears” the occluding object. It tells you there is nothing solid and opaque there in order to explain why it is that you see the uninterrupted whole of the distant object.

The brain thereby comes to a false conclusion—a visual illusion—but it does so for intelligible reasons. It would rather believe that there is an invisible or transparent hand in front of you than that a visible hand can be interposed between you and a distant object the view of which is not blocked by that hand. It is perfectly possible to see around the hand from the angles afforded by the two eyes, so that the two retinal images of the distant object can be synthesized into a single continuous visual image; but the brain prefers to believe that the reason the distant object can be seen in its entirety is that the interposed object has been rendered invisible. It is as if the brain does not understand its own binocular disparity system! If you look at the distant object with one eye while holding your hand in front of you, your hand remains stubbornly visible and opaque, with no tendency to melt or clarify. The brain accepts the fact that its view has been blocked. But when two eyes are involved we have blocking without a break in the distant object—and this the brain finds unacceptable. Each eye fills in what the other lacks because of the interposed hand, thus giving an uninterrupted view of the distant object; so the brain simply removes the hand from the visual field. It resolves the paradox of the blocked-but-seen object by rendering the blocking object invisible.

Of course, your brain knows quite well that solid objects don’t just disappear because your eyes focus on different things, but it is prepared to draw that conclusion in the circumstances described. And it is under no illusion that your hand has literally disappeared or turned transparent, since it is right there in front of your eyes sending in the usual signals; the brain only renders it invisible because it is well aware that it is there. One can only wonder at the frantic unconscious reasoning that the brain goes through in order to decide to make the hand invisible despite its obvious visibility.

            It would be possible in theory to attach a blocking object to the head in such a way that the brain always renders it invisible: it would have to be positioned in just the right place between the eyes so that binocular disparity could do the requisite filling in. In this setup the brain would constantly countermand the evidence of its own eyes and treat the object as if it wasn’t there, or it might be taken to exist in a transparent ghostly form. The object would instantly leap into visibility if the subject were to close one eye, but if monocular vision were impossible for the subject the interposed object may never be detected—though it is detected by the subject’s brain, only to be “disappeared”. The object would be unconsciously seen but consciously unseen. The brain sees it but it sends out instructions to the conscious subject not to see it—the subject may never suspect what his brain knows very well.

            The brain constructs a unitary visual world from the disparate data supplied by the retinas of each eye. Normally this works smoothly enough and the perceiver doesn’t notice anything anomalous—the dual basis has no phenomenological counterpart. But in odd cases, even quite simple ones to arrange, the scaffolding of vision becomes apparent; then we experience perceptual anomalies. The “invisible hand” illusion is one of these, and it shows the complex nature of the underlying unconscious visual processes. The brain not only makes the world visible to us; it can also make it invisible, as the occasion demands. It can deny the evidence of the senses.

 

Colin McGinn        

 

 


Indexical Semantics in the Language of Thought

Accepting that there is an innate and universal language of thought, we can inquire into its formal characteristics. It will have two components: a syntactic component and a lexical component. These components will be found in every human being’s cognitive-linguistic repertoire (barring pathology), like any other innate human trait. There is no problem about this with respect to the syntactic component: there is no reason to doubt that each person uses an identically structured internal language. Nor is there any obvious problem about large tracts of the lexical component: people share a large number of basic concepts because they live in a common world of space and time, colors and shapes, other minds, plants and animals, logical and mathematical truth, ethical demands, etc. That is, the universality implied by the idea of an innate species-wide internal language is not contradicted by the facts of human psychology. However, there is a segment of the lexical component that does appear to present a problem for this picture—the words that refer to specific local objects, artifacts, and natural kinds in the individual’s environment. It is not plausible to suppose that people in foreign lands have names for the places, people, artifacts, and animal species found in this land. Generally, it can hardly be that words for local entities are genetically encoded in our species and enter the thoughts of every human on the planet. Yet we do use such parochial concepts to think about the world. So either the language of thought is not fully innate and universal or it innately covers a lot more than it is plausible to suppose that it does. How do we get out of this problem?

            The problem can be put like this: how do we find an interpretation for all such locally bound lexical items that is consistent with the absolute universality of the language of thought? What kind of semantics would allow us to declare that the “referential component” is universal to humans? It can’t be a semantics that simply assigns a unique entity to each such term, on pain of assigning the same entities to terms no matter the location of the individual in question—people from the jungles of the Amazon don’t have a name for London! Clearly we need a semantics that provides a uniform inner linguistic structure that combines with a contribution from the local environment. One way to do this would be to suppose that the innate language contains interpretation-free terms as well as interpretation-bound terms; the free terms pick up reference from the way the individual is contingently embedded in the world. The genes supply these initially meaningless terms, which are common to everybody, in order to allow for the future possibility of local reference, relying upon the embedding of the individual to provide them with an interpretation. Thus a single symbol S in the language of thought can come to refer in one land to London and in another land to a certain Amazonian village, having no intrinsic fixed meaning at the outset. We could call this the “interpretation-free component”—the part of the lexicon that requires a suitable embedding before it acquires any meaning.

            But there is another approach, akin to this one but without the assumption of initial meaninglessness, namely that the innate language of thought is heavily indexical. The form of this type of theory allows us to say that the lexical component is universal and semantically interpreted, while accepting that not everyone shares the same range of references. What we have is a universal language that gets tied down to particular entities by virtue of the context in which that language finds itself located. Semantically it’s like the word “I”: everyone has the same indexical word but context determines to whom it refers. Names are then introduced on the back of indexical expressions, as in, “Let ‘London’ denote this city”, where the name “London” is not part of the genetically given language of thought but the demonstrative “this city” is. The Amazonians and we share the underlying indexical apparatus but not the local terms that are subsequently tied to it. This solves the problem of reconciling linguistic universality with referential locality: the language is universal but its referential interpretation is local. The words of the language mean the same thing for everyone everywhere, but context links these words to different entities (which can subsequently be given names). Thus there is no interpretation-free (meaningless) component to the innate language, yet words of this language can receive different referential interpretations in different environments. That is how the genes solved the problem of parochial reference in a common language: they invented indexical semantics. Some sort of mutational and selective history led to a semantic structure that can deliver variation from uniformity, thus preserving the commonality of the language while combining it with referential diversity. The apparatus is common to all humans, though that apparatus gets applied to different entities in different contexts. 
It is the apparatus that is encoded in the genes, but that apparatus allows for non-genetic factors to fix reference in specific contexts. Thus the indexical component of the language of thought is what enables us to solve the problem of referential diversity.

            What is the evidence for the indexical theory of the language of thought? The indexical character of natural languages of course: natural languages are heavily indexical, and this reflects the character of the underlying language of thought. I won’t repeat all the arguments for recognizing the ubiquitous role of indexical expressions in natural languages (all natural languages); my point is just that the role of indexical expressions in solving the problem of universality is mirrored in the manifestly indexical character of spoken languages. Arguably, natural languages cannot perform their referential function without relying on indexical reference; it turns out that the underlying language of thought could not exist without a similar reliance. The use of an indexical apparatus is what is needed to make that language both biologically universal and environmentally variable. The lexical component needs an indexical component if it is to be possessed by all humans alike. Natural languages make this component visible. We are born indexical thinkers.  [1]
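On the Kaplan-style semantics invoked in the footnote, the point can be glossed as follows: what the genes fix is an indexical’s character, a function from contexts of use to contents, while the environment supplies the context. In illustrative notation (mine, not the essay’s):

```latex
\mathrm{char}(\text{``this city''}) : c \;\mapsto\; \text{the city salient in } c,
\qquad
\mathrm{char}(\text{``this city''})(c_{1}) = \text{London},\quad
\mathrm{char}(\text{``this city''})(c_{2}) = \text{the Amazonian village}
```

The character is the same for every thinker; only the contexts, and hence the contents, differ.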

 

Colin McGinn

 

  [1] Thus the language of thought will not be a context-free logical language like first-order predicate calculus; it will be a context-dependent indexical language exhibiting the semantics of content and character in the style of David Kaplan.


Impossible Meaning

Here is an argument purporting to show that the word “blue” is meaningless. There are many specific shades of blue that have their own names: aquamarine, navy blue, cobalt blue, azure, cerulean, indigo, etc. With respect to each of these we have a specific concept or idea, as well as a specific type of visual experience. But the word “blue” is more general than any of these words: it includes them all while not being as specific as any. What kind of idea corresponds to it? The natural and traditional answer is that the idea of blue is an abstract idea—it abstracts away from the peculiarities of each shade of blue. We form the idea by a process of abstraction whereby we eliminate what is concrete and specific to leave the pure abstract concept of blueness.  [1] But what is this idea exactly? As Berkeley pointed out, it seems to be “all and none of these at once”, and hence inconsistent.  [2] Certainly we can have no mental image of such an abstract quality, only of its more specific types. Nor do we ever see an object as simply blue but only as a specific shade of blue. The alleged abstract idea seems elusive and problematic, a will o’ the wisp with no substantial content. In the sense in which I have an idea of cobalt blue I don’t have an idea of blue simpliciter. The idea looks like an invention, a piece of mythology. What is this process of abstraction that deletes everything specific to a shade and leaves only what is common to all shades? It is certainly not like separating in thought the wings and beak of a bird. But if there is no such general abstract idea, then the word “blue” cannot express such an idea. If so, it must be meaningless, since meaning consists in the expression of (existent) ideas. Obviously the argument generalizes to other general terms such as “triangle” and “cat”; indeed, it would seem to apply to a vast range of words. So large tracts of language must be declared meaningless. 
Or else we have to rethink our general account of what meaning is, perhaps questioning the very idea that ideas constitute meanings. That theory has produced a monster in the shape of abstract ideas, so perhaps it needs to be demolished and replaced by something different and better.

            This argument, which will be familiar, can be added to the family of arguments that purport to show that meaning is impossible (or must be radically rethought): Quine’s indeterminacy argument, Kripke’s skeptical paradox argument, rampant verificationism, and perhaps others. Thus: the extension of a predicate must be determinate if its meaning is to exist, but it is not; there has to be a right and wrong way to follow a rule if meaning is to be possible, but no fact can be found that constitutes following a rule correctly; sentences must be verifiably true if they are to be meaningful, but few sentences are verifiably true. The present argument contends that general terms must express abstract ideas if they are to be meaningful, but the notion of an abstract idea is incoherent. This is a serious argument: Berkeley clearly has a strong point against Locke. It is indeed difficult to make sense of abstract ideas: they are abstract to the point of non-existence. It is also difficult to make sense of abstract universals as mind-independent entities (as opposed to concrete universals): objects can exemplify shades of blue, and we can see these shades, but no object is simply blue and can be seen as such. This is an abstraction, not a perceptual given. Russell thought that we understand predicates by being acquainted with the universals they denote, but how does one become acquainted with the abstract universal blue? That alleged universal cannot come before the mind in its own right, but only as qualified by some specific shade of blue. We can’t have an idea of blueness as such because there is no such property as blueness as such—there is nothing to have an idea of. And yet we have the general term “blue”. Nor is it easy to confine the force of the argument to certain fragments of language: not only will it apply to a great many general terms; it will also generalize to singular terms. 
This is because singular reference is often or always mediated by general concepts, as in the description theory of names: predicates show up in the descriptions and they will be vulnerable to the same argument. Quine’s argument and Kripke’s argument are initially directed at a sub-class of expressions (“rabbit”, “plus”) and may or may not generalize to every expression of language, but they are enough to put the whole notion of meaning into question; similarly with the present argument—the argument seems to cut at the very essence of language, viz. generality. If “blue” is meaningless, something must be seriously wrong somewhere.

            Once the cogency of the argument has been acknowledged, the question is what to do about it. One response would be simply to accept it: there is no such thing as meaning; meaning is impossible. We just have to learn to live with that fact. But that has not been the usual response to such arguments (Quine being an exception): usually people have tried to save meaning by reconfiguring it somehow. Berkeley did just that by suggesting that while there are no abstract ideas there are specific ideas, and they can perform the work of generality by being used in a certain way (hence Berkeley is often cited as a forerunner to Wittgenstein). I won’t attempt to evaluate these efforts at preservation; I wish to note only how extreme the revision has to be once the argument from abstraction is accepted. For if “blue” fails to express a meaning-constituting idea, how can more specific terms have ideas as their meanings? Whatever kind of thing constitutes the meaning of “blue” will have to constitute meaning for “cobalt blue”, on pain of a semantic duality in language—an unacceptable theoretical bifurcation. Thus Berkeley’s theory explains the meaning of “blue” in terms of use, but explains the meaning of “cobalt blue” in terms of a specific correlated idea. But why not adopt a use theory across the board? Why not follow Wittgenstein all the way once the first step has been taken? Just abandon ideas altogether and replace them with uses. The meaning of a word is its use, not anything existing in the mind.

But this kind of theory is a radical repudiation of traditional ways of thinking. First, we have to give up the idea that understanding a word consists in associating a concept with it, i.e. a psychological state underlying the use of language. Second, the entire apparatus of reference and representation is called into question: for now we cannot say that meaning consists in intentionality, aboutness, reference. We used to say that understanding a word consists in having an idea of what it stands for or expresses—object or property—but we can no longer say that. To understand “blue” is not to know which universal (property, attribute) it stands for or expresses, but something else entirely—such as applying the word in a certain way. We lose the whole idea of language as a system of representation, replacing it with behavioral dispositions. So the problem with abstract ideas and general terms threatens to undermine fundamental assumptions about language and meaning. If we have no abstract idea (concept) of blue, then we cannot understand “blue” by invoking that idea; but then there is no psychological state that constitutes understanding—or none that involves the requisite intentionality. It must just be some sort of stimulus-response system that never reaches beyond language itself—a kind of syntactic machine without semantic interpretation (it’s not about anything, such as being blue). This is much more radical than a “skeptical solution” in terms of assertion conditions, because that at least retains much of the old apparatus. But this kind of solution will be vulnerable to the original argument from abstraction, since general terms will appear in the assertion conditions (“Assert ‘that is blue’ when something looks blue to you”). Once we abandon the idea of ideas (concepts, thoughts, mental representations) as the basis of understanding, we find ourselves in strange new territory. 
So it isn’t going to be easy to respond constructively to the skeptical argument from abstraction. As Locke recognized, his whole theory of language depends on the viability of the notion of abstract ideas; it was left to Berkeley to point out that this notion is riddled with difficulty. There is (as Kripke would say) no fact of having an abstract idea of blue; this is a fictitious notion. There isn’t even a fact of an object’s being blue: the facts consist of objects and specific shades of blue (as they consist of objects and specific types of triangle).  [3] We have the word “blue”, but we don’t have the abstract attribute of being blue or the abstract idea of blue. Hence the attraction of construing meaning as merely the use of a word, without regard for any quality denoted or expressed. There are words and their use, but there is nothing else, semantically speaking.

            It is worth noting that two standard ways of treating “blue” that try to preserve the basic form of the old apparatus don’t apply. The first is to switch from abstract ideas to dispositions to assent: instead of saying a speaker has an abstract idea of blue we say that she has a disposition to assent to “blue” in the presence of blue objects. But the trouble is that there is no such disposition, since perceptual objects are not blue simpliciter—they are navy blue or cobalt blue or some such. There is no stimulus of being abstractly blue; there are only stimuli corresponding to specific concrete qualities. A disposition to assent to “blue” in the presence of these variously hued objects would yield only a disjunctive concept, not the unitary concept we think we possess. It is the same with the notion of a capacity: what is the capacity to recognize blue things but the capacity to recognize shades of blue (ditto for triangles)? Even Locke agreed that there is no objective property of being blue, only an abstract idea of blue—so there can be no disposition to respond to the instantiation of such a property. A dispositional theory of abstract concepts thus does not evade the fundamental difficulty.

            The second way is to stretch the concept of family resemblance and declare that blue is a family resemblance concept like game. Just as there are many kinds of game with no common connecting thread, so there are many kinds of blue with no common connecting thread (ditto triangles)—hence no concept of that thread. Whatever one might think of family resemblance as an account of the concept game, it surely looks singularly unapt for the concept blue. Isn’t this a paradigm of a non-family resemblance concept? There is something common to all blue things—their blueness. Similarly, all triangles have three sides, despite there being several types of triangle (scalene, isosceles, equilateral). To stretch the concept of family resemblance to cover these cases in an effort to solve the problem of abstraction smacks of desperation; you might as well say that every concept is a family resemblance concept. The problem is that “blue” has an unequivocal meaning, but there exists no idea corresponding to that meaning: all the ideas in the vicinity are too specific. It looks prima facie as if the meaning of general terms requires abstract ideas, but it turns out there are no abstract ideas, so meaning is in jeopardy. In order to save meaning we are compelled to contemplate radical departures—such as abandoning intentionality as constitutive of meaning. It isn’t just a minor hitch.

            My aim in this essay is not to resolve this issue, but merely to add it to the list of other skeptical arguments concerning meaning. It is not the normative nature of meaning that is causing the problem, as with Kripke’s skeptical paradox, nor the extension-selecting nature of meaning, as with Quine’s indeterminacy thesis; it is the abstract nature of meaning that is causing trouble, its distance from concrete reality, both mental and non-mental. Meaning is too abstract to be possible, too far removed from actual human psychology (perhaps from any psychology), as well as from concrete physical reality. There is nothing in reality for it to be. It calls for feats of abstraction that are beyond the powers of man or nature.  [4]

 

  [1] The locus classicus is Locke’s An Essay Concerning Human Understanding, Book III, Chapter III: “Of General Terms”.  

  [2] Berkeley, Principles of Human Knowledge, section 13. He is discussing the case of “triangle”, arguing that the abstract idea of a triangle is impossible: it must comprehend all kinds of triangles, but it cannot be any specific one of them.

  [3] We can of course say that different shades of blue or different types of triangle are similar to each other, but if we ask what the respect of similarity is we will fall back on such terms as “blue” and “triangle”—the very general terms that cause the problem. How do we understand such terms—in virtue of what fact? Is there any fact?

  [4] Accordingly we find nominalist theories of meaning—theories that treat meanings as nothing over and above words, perhaps as used in certain ways (I would describe Wittgenstein’s later reflections on meaning as nominalist in this sense). Put in these terms, the problem of the abstractness of meaning is solved (or dissolved) by denying that there is any mental correlate of a word—no idea or concept or mental representation. For any such correlate would immediately face the challenge of matching the abstractness of the word: and nothing we find in the mind is capable of mirroring the abstractness of “blue” or “triangle”. There are no such abstract mental facts, so meaning either does not exist or it consists in something other than a mental correlate (maybe just a use—whatever exactly that may be). We end up with an anti-mentalist theory of meaning as the only way to avoid meaning nihilism.   


The Word “Thing”


 

 

In his Ethics Spinoza has a curious passage concerning the common word “thing”: “But not to omit anything it is necessary to know, I shall briefly add something about the causes from which the terms called Transcendental have had their origin—I mean terms like Being, Thing, and Something. These terms arise from the fact that the human body, being limited, is capable of forming distinctly only a certain number of images at the same time (I have explained what an image is in P17S). If that number is exceeded, the images will begin to be confused, and if the number of images the body is capable of forming distinctly in itself is greatly exceeded, they will all be completely confused with one another…. But when the images in the body are completely confused, the mind also will imagine all the bodies confusedly, without any distinction, and comprehend them as if under one attribute, namely, under the attribute of Being, Thing, and so forth… These terms signify ideas that are confused in the highest degree.” (Part II, P40, Schol. 1) Spinoza evidently believes that the word “thing”, as philosophers employ it, is a defective word expressing a defective concept, because it signifies no determinate kind or sort or type. Presumably, then, it should be banned from serious discourse and certainly not relied upon in theoretical contexts. What it means is obscure at best.

            And yet the word finds its way into philosophical discourse at the highest level, as if it cannot be done without. Let me mention three disparate examples. First, there is Descartes’ use of the phrase “thinking thing”: not “thinking subject” or “thinking self” or even “thinking substance”—as if Descartes is reluctant to say anything positive about what the thing that thinks is. We may not know what the nature of this thing is, but at least we know that it thinks—whatever precisely “it” is. Second, there is Kant’s use of “thing-in-itself”: this thing is also an I-know-not-what, though we know that there must be such a thing. Again, Kant doesn’t want to enter into any conjectures about what manner of thing this thing-in-itself might be—even to call it an “object” might be overreaching—so he sticks to the indeterminate term “thing”. Third, we have Wittgenstein’s well-known remark: “It is not a something, but not a nothing either!” (Philosophical Investigations, §304) This sentence could be paraphrased as, “It is not a thing, but it is not not a thing either!” Once again, the thought is that we have run out of descriptive resources and are forced back to the nondescript word “thing”: we have reached the limit of language, the point at which all description fails us. We use the word “thing” when we can find nothing better to say—nothing more informative, more definite.

            Here is how the OED defines “thing”: “an object that one need not, cannot, or does not wish to give a specific name to”. The word “object” jars here because “thing” seems broader than “object”, but for our purposes the definition captures the philosophical use of “thing” nicely. We use the word “thing” when we cannot use a more specific term—when we have no name or description for what we are referring to. Thus the word functions as an expression of ignorance: we use it when we need to be maximally general, vague, or noncommittal. We use it in contexts of epistemic limitation. Consider Hamlet’s famous declaration, “There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy”: here the schematic “things” expresses the lack of knowledge to which Hamlet is drawing our attention—things whose nature and type we know nothing of. Hamlet is alluding to what lies beyond our conceptual scheme, even our imagination (“dreamt of”)—to the unknown or unknowable.

This, then, suggests an answer to the question troubling Spinoza: What are these schematic bloodless words for? Are they just confusions brought about by the way we form concepts, the mind misfiring? The answer, I suggest, is that we have these words because we recognize that we are cognitively limited—we know that we don’t know. We want to be able to speak about the unknown and unknowable, and to do that we need words that don’t commit us as to type—words without descriptive content. The concepts they express are not confused and pointless but exactly tailored to their purpose, namely to permit reference to what we do not and maybe cannot conceive. We might call them “horizon words”. There are “things” out there, over the horizon, which we don’t know how to describe or classify or get our minds around. To put it in philosopher’s jargon, we have the word “thing” because we are realists—because we don’t want to limit reality to what we know or could know.  [1]

            The point applies sharply to quantifier words like “something”, “everything”, and “nothing”. We want these expressions to have maximum generality, stretching beyond what we can know or conceive or dream. Thus “everything” means “every…single…thing”, without restriction. We need a word like “thing” if we are going to encompass every last thing (!) in reality. Restricted quantifiers like “every man” or “every elementary particle” won’t cut it. The word “thing” exists because language recognizes its own limitations—it refers to what it cannot refer to. That is, it refers to things it is incapable of describing. The concept thing is part of our conceptual scheme because our conceptual scheme is limited; it concedes the possible blankness of our thought concerning reality. We speak brightly of dogs, mountains, numbers, and electrons—things we can conceive, at least in some measure—but we also speak darkly of things that we cannot conceive but which might yet exist. Hence we find no tension in the phrase “unknowable things”. If we didn’t believe in such possibilities, we wouldn’t need the word “thing” in its current meaning; there would be, so far as we are concerned, nothing beyond dogs, mountains, numbers, and electrons—nothing over the horizon.

            Does anything fail to be a thing? Frege thought that concepts are not objects, so not everything is an object: but are concepts things? Surely they are, for they are real (for Frege). Are Platonic universals things? What else could they be (if they exist)—nothings? Is God a thing? He is certainly something, so he must be a thing of some sort—some…thing. Are events things, or thoughts, or numbers? Check, check, and check. Every last thing is a thing—it’s what anything has to be. The word “thing” applies to absolutely everything, trivially so; and it enables us to speak of what we cannot know or conceive or imagine. Everything we know is clearly a thing, but so is everything we don’t know. As Spinoza suggests, “thing” belongs with “being” in that both words aim to encompass the whole kit and caboodle (notice how language strains at its edges). It is the word we use in philosophical contexts when we mean to abstract away from the known part of reality and reach out to the very limits of reality. So far from being confused or pointless, it is a word with a clear and definite purpose—to make sure nothing (no…thing) is excluded. To speak of “things” is to speak of what is, immanently or transcendentally (to echo Spinoza). It is a very good word.

 

Colin McGinn

  [1] There is an old horror film called simply The Thing and the point is that its hapless victims know not what it is that is decimating them. And then there is the timeless cliché, “Someone…or some thing”.


A New Riddle of Induction


 

Suppose that tomorrow the sun does not rise, bread does not nourish, and swans are blue. Does that show that nature is not uniform, that the past is not projectable to the future, and that induction has broken down? Can we conclude that what we observe tomorrow does not resemble the past? Not unless we know the past—unless we know that the sun used to rise every day, that bread used to nourish, and that previous swans were white. But memory is fallible and vulnerable to skepticism. If we are wrong about the past in these respects, then when we suppose that the future diverges from the past, we are mistaken—actually the future does resemble the past (blue swans, etc.). So unless we have an answer to skepticism about the past we cannot infer from an apparent breakdown in the uniformity of nature that there is a real breakdown.  [1] Given that we have no such answer, we cannot know that the future fails to resemble the past. If bread never actually nourished in the past, then its failure to nourish tomorrow is perfectly uniform and projectable from its past properties. So it is not just that we can’t establish that nature is uniform; we also can’t establish that it is not uniform. We can’t describe a situation in which we discover that the previous laws of nature have broken down, or were not laws after all, for it is always possible that we are wrong about how things were in the past. This makes the skeptical problem of induction even harder. We can know that our predictions have been falsified, but it doesn’t follow that we can know that the future does not resemble the past, since we could be wrong about the past. Even a total failure in all our inductive predictions would not establish that the future diverges from the past. Nature might be completely uniform and yet appear to us not to be. We can’t know that nature will continue the same into the future and we can’t know that it has not continued the same.

 

  [1] There are two sources of potential error about the past: first, we might just be wrong that bread ever nourished (we have false memories); second, we might have made an inductive error about bread in the past, inferring that all past bread nourished from the limited sample of bread we have encountered (maybe the uneaten bread was poisonous). If we make the latter error, our observation tomorrow that some bread is poisonous actually accords with the way bread was in the past, so there is no breakdown of uniformity.


Plurality Skepticism

                                               

 

 


 

 

The skeptic characteristically maintains that we have a tendency to believe in too many things. We believe in other minds (not just our own) and we believe in external objects existing independently of our mental states. Strictly, we should believe in our own mind and nothing else. There are (or there may be) fewer things in heaven and earth than are dreamt of in our philosophy. Solipsism-of-the-moment is the only safe position, which cuts the world down to one thing. Our problem is that we overestimate the contents of reality, postulating things in which we have no right to believe. We are like polytheists who should really be monotheists, or theists who should really be agnostics or atheists. We are prone to “false positives”—assuming things to exist that we have no good reason to believe exist. Skepticism thus seeks to reduce our range of beliefs in things—we must subtract, eliminate, deny.

            But there is another form of skepticism, which is far less familiar: this kind says that we have a tendency to underestimate reality. We tend to assume that various things do not exist, when in fact they may. There may be more things in reality than we are inclined to suppose (hence Hamlet’s famous line). I call this “plurality skepticism”. The clearest example is plurality skepticism about our knowledge of other minds: not only may there be minds other than our own in the form of human and animal minds, there may also be plant minds, bacteria minds, and even molecule minds. This is the skeptical hypothesis we need to rule out if we are to maintain our usual restrictive ontology of minds; and since it cannot be ruled out, the plurality hypothesis may be true. The radical plurality skeptic about other minds insists that we cannot rule out the hypothesis that there are other minds everywhere: in our own bodies, in trees, even in atoms. This skeptic will point out that the lack of behavior indicating the presence of a mind does not logically entail that there is no mind present, so we cannot cite the lack of behavioral evidence as proving that the hypothesis of a plurality of minds is false. For all we know, there are vastly more minds than we suppose. That is, we are guilty of “false negatives”—disbelieving in things that actually exist. The skeptic is not saying that we should believe in these things, only that we have no right to rule them out—so we should be agnostic.

Maximum strength skepticism says that we should both doubt the existence of minds where we typically don’t doubt them and doubt the absence of mind where we typically assume such absence. Indeed, it may be that other human beings and animals have no minds but fungi (and only fungi) have minds: this is what the consistent radical skeptic about other minds will contend—that we may be completely wrong in our habitual assumptions about the distribution of minds in nature. We may be guilty of false positives and false negatives. How can you rule out the hypothesis that only fungi and you have a mind? Are you certain that this is not the case?

            We find the same structure with respect to skepticism about our knowledge of the external world: might there not be many worlds that we fail to recognize? The traditional skeptic argues that we are rash to believe in a world outside of our own mind, but the plurality skeptic argues that we are rash to assume that there is only one such world—the one that seems to us to exist. Maybe there are many external worlds that we fail to recognize (as some physicists today actually maintain)—maybe we vastly underestimate the content of physical reality. There is not just the ordinary world of tables, chairs, mountains, and galaxies, but also other types of object of which we have no knowledge—objects completely hidden from us (“dark universes”). Indeed, it may be that our ordinary external world does not exist (we are brains in a vat) but that other external worlds do exist: what we experience is an illusion, but there are other strange worlds out there that exist instead. That is the truly radical form of external world skepticism: it may be that the world we think we know does not exist but that other worlds do exist. These worlds are not merely possible worlds, but actual worlds that we fail to experience. The skeptic wants to know why we don’t take this possibility more seriously. More strongly, he claims that his skeptical hypotheses are as likely to be true as the view we habitually accept. He thinks we have a dogmatically narrow conception of reality. There may be all sorts of objects out there that we fail to recognize, and it may be that those in which we believe fail to exist at all. Our epistemic failings are thus multiple and grievous, not limited to postulating one world that may not exist. We should be agnostic about the full range of possibilities.

            The plurality skeptic may also point out that we have a history of underestimating reality: we tend to assume there are fewer minds and physical objects than there really are. Sometimes we overestimate, as with witches and gods, but more often we underestimate. We used to restrict minds to humans, with one mind each, and even doubted the existence of minds in humans we found alien; then we acknowledged minds in other animals and all humans, as well as accepting unconscious minds within ourselves. We have grown steadily more expansive with the concept of mind. Likewise, we once limited the external world to objects perceptible by the senses, only gradually accepting microscopic objects (organisms and particles) and remote celestial objects, and now invisible forces, dark matter, and so on. We erred on the side of epistemic stinginess, which is not surprising given our limited senses, with occasional lapses into excessive ontological largesse. So the plurality skeptic can reasonably suggest that we might still be committing false negatives: that there are likely to be many more minds and external worlds than we customarily suppose. Our confidence that we have accounted for all realities might well be misplaced. We may be committing more errors of exclusion than of inclusion. At any rate, radical skeptical hypotheses postulating pluralities of minds and worlds cannot be ruled out.

            The skeptical problems of the external world and other minds are generally presented as problems of epistemic over-commitment, with the emphasis placed on the possibility of false positives—there might be less to reality than we suppose. But it is equally important, in presenting the full skeptical challenge, to reckon with the problem of false negatives—the possibility that there might be much more to reality than we suppose. We don’t know if there is no external world or many (perhaps infinitely many) external worlds or just the one world that we normally take for granted; and we don’t know if there is just one mind (one’s own) or hugely many (perhaps infinitely many) minds, spread everywhere and in the most unlikely places, or just the range of minds we normally take for granted. We are really very ignorant of what is so and what is not so—that is the skeptic’s message in its full generality. We may be making enormous errors of both commission and omission. We might live in a very lonely world or in a very crowded world—we simply cannot know.

 
