Inverted Positivism

I wish to introduce you to the work of an obscure Austrian philosopher. His name is Otto Otto and he lives in the suburbs of Vienna.  [1] He belongs to a group called the Vienna Oval by facetious analogy with the better-known Vienna Circle. Otto (you can take this to be his first or last name according to preference) is a positivist strict and pure, old school to the core. He accepts the verifiability criterion of meaning without dilution or compromise: every meaningful statement must be verifiable. He differs from other positivists, however, in two particulars: he doesn’t think that ordinary empirical statements are verifiable, and he does think that a priori truths are. In particular, he holds that analytic truths are the paradigm of verifiability. He thus maintains that analytic and other a priori statements are straightforwardly meaningful while empirical statements are not. His reason for denying the verifiability of empirical statements is not eccentric: it lies in the power of skepticism. Skepticism teaches us that our ordinary statements of science and common sense are not rationally justifiable, i.e. not verifiable. According to the verifiability principle, then, they are not meaningful. Since we could be brains in a vat, we can’t justify statements about the external world, which means they are not verifiable; and hence they cannot be meaningful. However, we can justify analytic statements, because the skeptic cannot cast doubt on our acceptance of them: we know for sure, for example, that bachelors are unmarried males. So analytic statements are meaningful, but empirical statements are not. That is Otto’s considered position and he sees no reason to deviate from it.
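Otto’s argument can be set out schematically. This is a reconstruction of mine, not Otto’s own formulation, and it requires reading the verifiability criterion as a biconditional, so that verifiability is sufficient for meaning as well as necessary:

            \forall s\,(\mathrm{Meaningful}(s) \leftrightarrow \mathrm{Verifiable}(s)) \quad \text{(verifiability criterion)}

            \neg\mathrm{Verifiable}(s)\ \text{for empirical}\ s \quad \text{(skepticism: brains in a vat, induction)}

            \mathrm{Verifiable}(s)\ \text{for analytic}\ s \quad \text{(immunity to skeptical doubt)}

            \therefore\ \neg\mathrm{Meaningful}(s)\ \text{for empirical}\ s;\quad \mathrm{Meaningful}(s)\ \text{for analytic}\ s

The first and second lines yield the negative thesis by modus tollens; the first and third yield the positive thesis, but only on the biconditional reading.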

            Now Professor Otto is not an unreasonable man: he is aware that his position might strike some as extreme, even perverse. For how can it be that science and common sense are literally meaningless? Here he is prepared to hedge a bit: they may be agreed to have a kind of secondary semantic status. Conventional positivists make this kind of compromise all the time: they accept that a priori statements are meaningful without being empirically verifiable, holding them to be tautologies devoid of real content; and they also recognize the existence of meaningful ethical statements. They accordingly speak of “cognitive meaning”, contrasting it with lesser kinds of meaning; they operate, in effect, with a semantic caste system. Likewise Otto and his comrades accept that empirical statements have a kind of second-class semantic status: they have a use, a role in communication, even if they lack meaning proper. They lack what Otto is pleased to call logical meaning or rational meaning or epistemic meaning, but they do have pragmatic meaning—meaning in the vernacular non-rigorous sense. They have the same status as tautologies in the rival positivist worldview: tautologies are not meaningful in the sense of being informative and fact-stating, but only in the lesser sense that they are grammatically well-formed and composed of meaningful elements. Likewise Otto and his associates accept that empirical statements are grammatical and composed of meaningful elements, but they deny that such statements have the kind of serious substantial meaning possessed by statements you can rationally accept as true. They wonder what the point is of statements that cannot be used to express knowledge, as empirical statements cannot (because of skepticism). True, such statements are not literal nonsense, either by grammar or lexicon, but they can’t match a priori statements for their ability to express ascertainable knowledge. The latter statements are genuinely cognitive in the sense of being knowledge expressing, while empirical statements exist in a limbo of uncertainty. Otto privately condemns empirical statements as sheer nonsense, in his strict and pure sense, but he publicly concedes that they have a kind of degenerate meaning—just as the Circle positivists do the same for analytic statements and ethical statements. What Otto can’t fathom is why these putative positivists believe that empirical statements are meaningful and yet are not verifiable—given that they accept the verifiability theory of meaning. Don’t they see that no empirical statement can be established as true, or even asserted in preference to its negation, given that the skeptic is undeniably right? They fail to understand that a priori statements are the only kind that allow of conclusive verification, and hence qualify as meaningful. Even if they are mere tautologies—which Otto strongly contests—they are at least verifiable tautologies: you can at least know them to be true! You know them to be true by the exercise of reason and knowledge of meaning, whereas the senses can never deliver skepticism-proof knowledge. The problem of induction by itself shows that laws of nature cannot ever be known, so such statements are not verifiable, and hence not meaningful. On the other hand, we can know with certainty that bachelors are unmarried males and that 2 plus 2 equals 4.
Thus the principle of verifiability shows that only a priori statements are meaningful in the gold-standard sense, with empirical statements trailing somewhere in the semantic dust. They are meaningful only in the sense that “Colorless green ideas sleep furiously” is meaningful, i.e. they are grammatical and composed of meaningful elements. Otto’s position is rather like the position that would have been taken by Popper had he been interested in criteria of meaningfulness: empirical statements can never be verified (though they may be falsified), and so cannot be strictly meaningful. Otto and friends adopt the simple view that meaning requires the possibility of knowledge, and only a priori statements can really be known to be true. They regard the rival band of positivists as sloppy weak-kneed thinkers who refuse to accept the problem of empirical knowledge—don’t they see that no empirical statement has ever been verified? Authentic positivism thus requires us to accept that empirical discourse is strictly meaningless save in a second-class by-courtesy-only sense. It is meaningful only in the way ethical discourse is meaningful, i.e. by dint of grammatical correctness and pragmatic utility.

            What view does Otto’s school of positivism take of metaphysical discourse? One might think they would be tolerant of it since it purports to be a priori, but actually they are as intolerant of it as their rivals in central Vienna. The reason is simple: such statements are never rationally justifiable. No metaphysician can ever establish the truth of his assertions: there is no method of acquiring metaphysical knowledge. It isn’t the lack of empirical content that is the problem—that applies equally to mathematical knowledge—but the problem of not having any effective method of delivering knowledge. No proof procedures and no simple unfolding of meaning, just endless wrangling and futile dispute. So metaphysics is as meaningless as the other positivists maintain but for a different reason. Empirical justification is neither here nor there; what matters is that there be some method for finding out the truth. As to ethics, Otto is ambivalent: he is inclined to regard it as a priori, and he accepts that ethical reasoning is rational, but he is disturbed at the lack of consensus about ethical questions. He is apt to call ethical statements “quasi-meaningful”: they have emotive meaning, to be sure, and they permit rational inference, but they lack the kind of certainty we find in mathematics, logic, and analytic truth. Ethical statements are not as meaningful as statements in these areas, though they are a lot more meaningful than statements of physics, say, with its unverifiable induction-based statements of natural law. Nothing meaningful is unjustifiable, so ethics squeaks in by comparison with science: after all, it has a strong a priori component. But science is stuck in unverifiable limbo: for none of it can be proved. Here Otto is a Popperian: Hume was right about induction, and that means that scientific theories can never be rationally established (though they may be refuted). As Otto likes to say, science never expresses genuine propositions—things that can be true or false—though it can bandy around sentences that are instrumentally useful and are clearly grammatical.

            Otto suspects that the other positivists are unduly influenced by religion. They see that religion is not an area of rational inquiry in good standing, and that it is dubiously meaningful, so they naturally seek to ban it. But they wrongly locate the central defect of religion: it isn’t that it lacks empirical credentials but that it lacks any procedure for establishing its claims. It doesn’t have the methodological clarity of the analytic, the logical, and the mathematical. In fact, it does have empirical criteria of justification; it is just that these criteria tend to undermine its truth (all those alleged miracles never pan out empirically). The reason it is not meaningful is that its claims are not susceptible to rational demonstration—not rationally verifiable. Faith is not rational demonstration, so faith cannot supply meaning. The other positivists wrongly contrast religion with science, thinking that science supplies the paradigm of the meaningful for the true positivist; but science is not strictly meaningful by correct positivist standards. Rather, religion and science are both condemned to semantic destitution, according to the proper form of positivism—the kind that links meaning with rational provability. The problem with religious claims is just that there is no rational way to demonstrate them. If they were analytic everything would be fine (the ontological argument was a valiant effort in that direction), but they clearly are not—they don’t just spell out what the word “God” means. Nor are they mathematical in nature. So there is no way to justify them by rational criteria. The positivists were right to find a tight connection between meaning and knowledge, but they wrongly located this connection in empirical knowledge—of which there is no such thing. They were logical empiricists where they should have been logical rationalists: pure reason can establish the truth of propositions, and hence guarantee meaning, but the senses combined with induction are impotent to establish anything, so they cannot be the source of meaning. Skepticism disproves classical positivism, but it leaves Otto’s version of positivism untouched, or so he contends.  [2]

 

  [1] Why the double “Otto”? The palindromic possibilities, the economy (two letters, a whole name), the contempt for custom: and why not? Nabokov cannot be the only one.

  [2] Did I mention that Otto Otto is a mathematician by training and also an accomplished logician? He is also fond of compiling synonyms. Empirical science leaves him cold because of its lack of formal rigor and its inconclusive methods. This may have something to do with his insistence that real meaning lies in the a priori sciences; he is certainly snooty about the mathematical capabilities of members of the Vienna Circle (those mathematical illiterates, as he refers to them). In fact, he lumps them together with the metaphysicians in their shared lack of methodological scruples. Empirical science is far too much like metaphysics, in his book.

Studying the Brain

Brain studies have proceeded apace since that clump of grey tissue in our heads was tapped as the basis of mind. First it was inspected with the naked eye, prodded and poked; then dissected and anatomized; then stained and examined under a microscope; then electrically recorded, grossly and minutely; and latterly viewed by means of MRI machines and the like. With these instruments we have developed quite a full picture of the brain’s architecture, chemistry, and mechanics—its parts, constituents, and processes. The role of human eyes has been conspicuous in this effort: by using our eyes we have learned a lot about the brain (researchers don’t tend to use smell and taste or touch and hearing). Indeed, we can think of the eye as just another instrument, along with the microscope and the electrode: the eye is the main instrument the brain uses to study itself (with its cornea, retina, fovea, etc.). Brain science is methodologically ocular. It is the eye that chiefly reveals the brain as we now know it. Even when a microscope is used the human eye is still at the epistemic center.    [1] It would be generally agreed that the same methods will continue to be used to accumulate knowledge of the brain—and that no other method would be workable, or even desirable. The brain reveals itself exclusively to these methods: third-person observation, assorted scientific instruments, and recordings of neural activity. In particular, visual perception is the best (and only) route to knowledge of the brain. For what other comparable method is available?

            But this ignores another possible avenue of discovery: introspection, i.e. knowing oneself from the inside. You might reply: but introspection reveals the mind not the brain, so it is of no use as a means of learning about the brain. This, however, assumes that the mind is not an attribute of the brain—possibly being located in a separate substance altogether. But that is wrong: the mind—consciousness—is an aspect of the brain. The brain—a physical object in space—is the bearer of mental states, there being nothing else to be their bearer. Consciousness is a brain state. This shouldn’t be controversial when correctly understood: it does not mean that consciousness is a physical state of neurons just like the physical states they are known to have by using the observational method. It may well be a state of an entirely different kind, as private and subjective as any arch-dualist might wish; but it is still a state of neurons. Neurons have these states as they have the states discovered by the observational methods described above. Electrical activity is an aspect of the brain, ultimately its neurons; conscious activity is likewise an aspect of the brain, and hence its neural constituents. This means that in knowing about the mind by means of introspection we thereby come to know about the brain: we are learning about the brain by introspecting the mind. Knowledge of mind is knowledge of brain. Note that this kind of first-person knowledge is relatively primitive methodologically: there are no microscopes, electrodes, and MRI machines here. We are using our “inner eye” nakedly, without augmentation or upgrade; so we don’t have instruments that can enhance its resolving power or reveal the fine structure of what is revealed. Still, it provides genuine knowledge of the brain—knowledge that extends beyond what can be gleaned using the first method. So we really have two methods available for studying the brain: the perception-based method mainly centered on the eye, and the introspection-based method solely based on the “inner eye”. Put differently, the brain allows both methods to be used to investigate its nature, revealing different things to each.

            Let’s pause to interrogate the phrase “the inner eye”. Can it be taken literally? It might be thought unavoidably metaphorical since there is no eyeball in the brain responding to light given off by the mind. But that is a much too narrow interpretation of the concept of the visual. First, we have the concept of the mind’s eye, i.e. visual imagination: we see things with the mind’s eye as well as the body’s eye. This use of “see” is not metaphorical. Second, and connected, we use the concept of seeing far more widely than for the case of seeing by use of the eyes in the head: we are constantly seeing things that are not sensed by the eyes (a glance at the dictionary will assure you of this). We could call this “intellectual seeing” but even that does not do justice to the variety of ways of seeing. In fact, in this capacious use of “see”, it appears to mean something like “perceive clearly” or “perceive as a totality”, which goes well beyond the deliverances of the body’s eyes. Indeed, the eyes don’t see at all if they don’t provide seeing in this wider sense: fragmentary and indistinct visual experience doesn’t count as genuine seeing—for nothing is perceived clearly and as a totality (“blooming, buzzing confusion”). Third, the involvement of visual cortex is relevant to the question of seeing: visual imagery involves activity in the visual (occipital) cortex, and it is not to be ruled out that employment of the “inner eye” might also recruit this part of the brain. It is noteworthy that talk of the “inner eye” (like talk of the “mind’s eye”) comes very naturally to us—we don’t likewise reach for the phrases “inner ear” or “inner touch”—and this may indicate an appreciation of the visual character of introspection. Nothing rules out the idea that introspection might have a visual character in the wide sense, and it is certainly not contrary to our habitual modes of speech. In fact, once you become accustomed to the idea, it becomes quite natural to regard introspection as a mode of seeing: it is certainly an example of perceiving things clearly and as a totality. I propose, then, to speak in this way: we can say (what is agreeably neat) that we know about the brain by two kinds of seeing—seeing with the eyes embedded in the face and seeing with the inner eye. Both types of eye enable us to see aspects of the brain: physical and mental aspects, respectively. Thus we can study the brain by employing our two sorts of eye—outer and inner. Now we see it one way, now another, with different properties revealed, depending on the type of eye being used.

            The point of central interest to me at present is that neither eye tells the full story. Actually that understates it: each eye is systematically blind to what the other eye sees. The outer eye tells us nothing about the mental aspect of the brain, and the inner eye tells us nothing about the physical aspect of the brain. Each eye is perceptually closed to what the other eye is open to—rather as the human eye is closed to certain parts of the spectrum. The outer eye reveals quite a bit about the brain but stops short where the mind begins, while the inner eye is very revealing about mental aspects of the brain but cannot extend to its physical aspects. In fact, the situation is even more extreme than that: the inner eye doesn’t even give so much as a hint that the brain is a physical object located in the head, while the outer eye intimates nothing about the existence of consciousness. So far as each eye is concerned, the brain is nothing other than what it can reveal; but each offers only a very partial picture of the brain’s full reality. These two modes of seeing are thus remarkably tunnel-visioned.    [2] They don’t even acknowledge the existence of the aspect of the brain they are not geared to reveal. They are blind to each other’s domain in a very strong sense: constitutionally ignorant, dedicatedly blinkered. It is almost as if they want to deny that the brain has another aspect altogether—the one they can’t resonate to. And this means that, as means of studying the brain, they each suffer from what I shall call methodological closure. It could also be called methodological blindness or partiality or selectivity or divergence or tunnel vision or bias or ignorance, but I use the word “closure” to recall the phrase “cognitive closure”: the type of closure at work here derives from specifically methodological limitations rather than from limitations of the entire cognitive system. We could also call it “instrumental closure”, bearing in mind the point that methods involve instruments, whether natural or artificial. If the eye counts as an instrument of investigation, then we can say that the outer eye is instrumentally closed to mental aspects of the brain, while the inner eye is instrumentally closed to physical aspects of the brain. Both are useful instruments for gaining knowledge of some aspects of the brain but also useless for gaining knowledge of other aspects. They each suffer from a form of instrumental specialization: one is designed to get at physical aspects of the brain (inter alia) and the other is designed to get at mental aspects of the brain. The brain has each aspect just as objectively as it has the other, but our methods of knowing about it favor one aspect over the other, as a matter of their very structure. Outer eyes can vary in their scope and limits from organism to organism; well, our two sorts of eyes also vary in their scope and limits. Each has a blind spot where the other has clear vision. That’s just the way these eyes are made.

            Now this raises an intriguing question: are there any other aspects of the brain that these eyes don’t see? Is there anything about the brain that they are both blind to? Surely that is very possible, since not everything about the brain will be revealed by these perceptual systems: we may need theory and inference to discover properties hidden to these two modes of perception. Of course, the perceptual foundations are likely to constrain the scope of theory and inference, but we can suppose that new properties may lurk in the brain, which belong to neither sort of perceptual faculty. In fact, I think (and have argued) that this must be so, on pain of having no account of the connection between the two aspects of the brain. But I won’t repeat that now; my point is rather that the limitations of both ways of seeing suggest that we are highly confined in our methods for knowing about the brain. If both faculties are so sharply limited, what are the chances that the conjunction of them provides total coverage? Why should the capacities of these eyes, inner and outer, exhaust the objective reality of the brain? The brain might be brimming with properties to which our two eyes are completely blind. True, the two eyes do well within their respective domains of operation, yielding impressive knowledge of the brain, but the limitations of each, as revealed by the other, suggest a good deal of methodological closure—which is to say, ignorance. The two instruments have the limitations of all instruments, given their inbuilt scope and limits; and the brain might well (almost certainly does) have aspects to which they are both incapable of responding. It is left up to reason alone to try to discover what they decline to disclose, and pure reason can only go so far without a supply of primitive data to go on. What if the brain houses a completely distinct set of properties that are not hinted at by either the inner eye or the outer eye? Then we can expect methodological roadblock, instrument failure, and cognitive collapse. At any rate, the two eyes will not themselves be up to the task of disclosing what the brain contains.

            Putting that aside, what are the broader implications of seeing things this way? Phenomenology turns out to be brain science: Husserl was a brain scientist (as were Sartre and other phenomenologists). Moreover, phenomenology relies on the use of an inner eye to establish its results—it has a vision-based methodology. Psychology is also the study of the brain, even in its least neurological departments, since the mind simply is an aspect of the brain. Philosophy of mind is philosophy of the brain, for the same reason. By the same token, brain science is (partly) phenomenology, because consciousness is a property of the brain as such—not just correlated with it. Locke, Hume, and Kant (among others) were students of the brain, since “impressions” and “ideas” are states of the brain (mental states of the brain). We can even describe psychological studies as studies of the physiology of the brain, since “physiology” just means “the branch of biology concerned with the normal functions of living organisms and their parts” (OED). There is no requirement here that the functions be physical in nature (whatever quite that means): they could be irreducibly mental (and that word too has no clear meaning short of stipulation). The study of mind is a physiological study, conducted by means of the instruments available to us, both natural and artificial. The study of the brain thus includes a great many methods and disciplines, many of which are divorced from the methods adopted by what is conventionally called “brain science”. The so-called humanities are all brain science in the end—and there is nothing in the least reductive in saying that. It is just an acknowledgment that the brain is the de facto locus of the mind—where the mind happens, what bodily organ it derives from. The mind is not an aspect of the kidneys or the heart, and is not an aspect of an immaterial substance; it is an aspect of the organ we call the brain. To say that is not to reduce the mind but to expand the brain. This is why it is important to understand that introspective knowledge is knowledge of an aspect of the brain (in fact, several aspects). And it is also important to understand that the kind of knowledge contained in a neurophysiology textbook is only partial knowledge of the brain, omitting everything that can be learned about the brain by introspection—as well as by psychology and other studies of the human mind (history, literature, and science itself as a human institution). The brain is a multi-faceted thing. It is a mistake to let a single mode of access to the brain bias one’s general conception of the kind of thing the brain is. The brain is a far more remarkable entity than our untutored senses represent it as being.    [3]

 

    [1] Compare astronomy: here too the investigator must rely on the eyes and optical instruments such as the telescope. Without these devices he or she would be hopelessly stymied. And it just so happens that distant objects interact with our eyes in such a way as to permit astronomical knowledge. Thus reality and method mesh, but only just.

    [2] Isn’t it true that any instrument, including the human sense organs, contains inbuilt biases that obstruct knowledge of anything outside their range? When you look at an object through a microscope, say, you no longer take in its macro features, focused as you are on its microstructure. If you did nothing but this your whole life, you might naturally come to think that things don’t have macro features. Similarly, telescopes elide facts of distance that are apparent to ordinary vision. The eye itself with its limited visual field gives an impression of non-existence to what lies outside of it (thus fueling idealism). All instruments of knowledge tend to suppress other knowledge, if only by occupying one’s attention: so they are not just partial but also oblivious of, and biased against, realities they can’t reveal. Don’t we have a strong impression when looking at a brain that it can’t contain consciousness? Our eyes give us a biased sense of the possibilities of the brain. From a different perspective it might seem perfectly natural for the brain to be the locus of consciousness.   

    [3] It would be different if every physical object could introspect itself, thus revealing an inner mental being as well as an outer physical being, but so far as we know this isn’t so (even for the staunchest panpsychist). The brain stands magnificently alone in its dual nature. My own suspicion is that the brain is wildly different in its objective nature from what we suspect, a complete anomaly of nature, given our standard modes of knowledge acquisition. A weak analogy: the earth is really very different from other planets in the solar system, which is what allows it to have life and mind on it. It is similar to them in many ways, but in crucial respects it is not—particularly as regards water content and temperature. Likewise, beehives and ant colonies are very different from mere aggregations of insects, exhibiting another level of organization altogether. But these are only lame analogies: the brain is special in a special way (and we don’t know what that way is). I tend to picture it in my imagination as having a completely different color from every other object.  

Mental and Physical Events

Identity of properties is one thing; identity of particulars is another. Particulars can be identical without their properties all being identical. This is obvious: Superman is identical to Clark Kent but the property of being a flying man is not identical to the property of being a journalist. It is just that a single man has both these properties. This distinction has been thought useful in characterizing the relationship between the mind and the brain: hence the distinction between type identity theories and token identity theories. A type identity theory would say that the property of pain is identical to the property of C-fiber firing; a token identity theory would say that every particular instance of pain is identical with an instance of some kind of brain event or another—it need not always be C-fiber firing. The properties are different but they apply to the same particular (so a dualism of particulars is false). Thus we have “non-reductive materialism” and “anomalous monism”.  [1] Two sets of properties, one set of particulars: mental properties are not identical to physical properties, but every instance of the former is an instance of the latter. Here is an analogy for the token identity theory: each member of the set of soldiers has a certain rank—private, corporal, captain, colonel, etc.—and (we can suppose) a certain civilian occupation—lawyer, teacher, greengrocer, tailor, etc. For any instance of the former set of attributes, we can say that he or she is identical to someone with an attribute drawn from the latter set of attributes. That is, every soldier is token identical to someone of a certain civilian occupation—what has the former property also has the latter—but it would be quite wrong to identify military ranks and civilian occupations. The property of being a colonel is not identical to the property of being a lawyer, say. No one would be a “civilianist” about military ranks, holding that ranks are type identical with civilian occupations. Yet there are no soldiers who fail (we are supposing) to also have a (prior) civilian occupation: there is no dualism here of soldiers and non-military workers—as if each soldier has a kind of shadow civilian counterpart. No, he or she just is a particular civilian worker. The same people can have different attributes. Just so events can have both the attribute of being a pain and the attribute of being a C-fiber firing—without the attributes being identical. Thus we have a weaker version of materialism, one that avoids the problems encountered by type identity theories. We have materialism without reductionism.
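Schematically, and as a rough rendering in standard notation rather than anything in the original:

            \text{Type identity:}\quad \mathrm{pain} = \text{C-fiber firing} \quad \text{(one property, two names)}

            \text{Token identity:}\quad \forall e\,[\,\mathrm{Mental}(e) \rightarrow \exists P\,(\mathrm{PhysType}(P) \wedge P(e))\,]

The first identifies properties outright; the second says only that each mental event falls under some physical event type or other, without saying which.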

            But do we? Is token identity theory really a form of materialism? Is it strong enough for that? And is it an adequate account of the relationship between the mind and the brain? Consider this question (which I have never seen asked): could any other type of mental event than pain have a token that is identical with a token of C-fiber firing? Could a tickle or a sensation of red be identical to an instance of C-fiber firing—in addition to instances of pain? Evidently, both colonels and corporals can be identical to people who are teachers in civilian life; so can token pains and token tickles both be identical to tokens of the type C-fiber firing? Generally, is it possible for instances of every type of mental event to be identical to instances of the same physical type? Might every mental token be identical to a token of one and the same physical type? Evidently, nothing in logic precludes this: token identity theory is consistent with total homogeneity at the physical level. There is nothing but C-fiber firing to “realize” every mental type. If that were so, then properties of the brain would have nothing to do with properties of the mind. In the same way civilian occupations have nothing to do with military ranks: these properties are determined by quite different factors (or could be). Nothing about being a tailor makes you into a colonel rather than a corporal, since tailors can be both (they can be trained to be both). Similarly, the mental type of a mental token is not determined by its physical type—it might not even be correlated with that type. Of course, if type identity were true, then we would have such determination, but not if we only have token identity. Token identity is entirely neutral on what determines mental properties: it could be acts of God or human convention or the color of your hair. Two people could have completely different mental lives while having all their physical properties in common—they could still be such that all their mental tokens are identical to physical tokens. A given mental type can be “multiply realized”, as we have been taught, but as a matter of logic it is also true that the same physical type can be “multiply manifested”, i.e. correlated with different mental types. At any rate, token identity in no way rules this out. We may then wonder whether it deserves the name “materialism”, since it is silent on what makes an organism have the mind it has. Mere token identity is a very weak relation, hardly qualifying as a less unpalatable version of classic type identity: for it says nothing about the nature or fixation of mental properties, i.e. what makes the mind the mind. What kind of mind you have has nothing to do with what kind of brain you have, according to token identity theory; the theory merely rules out the possibility that mental tokens float free of physical tokens—as soldiers might be thought (falsely) to float free of people with prior civilian lives. A monism of mental tokens allows for any old theory of mental types, or none. It is not a theory of mental types at all.
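The possibility just canvassed, uniform realization, can be made explicit in the same schematic notation: token identity is satisfied even when a single physical type does all the work, as in

            \forall e\,[\,\mathrm{Mental}(e) \rightarrow \text{C-fiber firing}(e)\,] \quad \text{consistently with} \quad \mathrm{pain} \neq \mathrm{tickle} \neq \text{sensation of red}.

Every mental token is then a C-fiber-firing token, yet the physical type of a token fixes nothing about its mental type.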

            This point may be conceded (the “uniform realization” point) but it may be suggested that we need to strengthen the token identity theory in a familiar way—by invoking supervenience. We can assert that mental types are strongly dependent on brain types, so that brain type entails mental type—physical properties fix mental properties. All right, let’s go ahead and assert that: the question then is what makes it true. The crucial point is that type identity explains this but token identity does not: if mental types are physical types, then of course you can’t have one without the other; but if they aren’t, the question is left hanging. Without type identity (or something close to it) supervenience looks like a mere stipulation devoid of rationale. It leaves open the question of why the dependence goes in one direction only: why doesn’t the mind also determine the brain? How can the properties be dependent one way but not the other? We certainly don’t have supervenience in the case of the soldiers, for the obvious reason that civilian occupation doesn’t determine military rank (or shape determine color, etc.), so why is the mental case different? No answer is given—a mere logical possibility is asserted. And surely we would want to say that there must be some internal relation between the brain and the mind—something about brain properties that underlies and explains supervenience. Absent a specification of what this might be, supervenience only gives us materialism by main force: it is what we need to wheel in to bulk up token identity into something looking more like classic materialism. More strongly put, unexplained supervenience is mere postulation, not a theory of the mind-brain relation. It (purportedly) fills the gap left by abandoning type identity theory but without really supplying any filler. But token identity alone is hopelessly weak as a theory of the mind-brain relation. We may note that the asymmetry of dependence postulated by supervenience is also exaggerated at best: for there must be some determination from the mental to the physical, as a matter of hard necessity. For example, pain is necessarily linked to withdrawal behavior (or a disposition to it), but withdrawal behavior must be physically produced by the nervous system—so pain must fix some physical aspects of the organism feeling pain (viz. a physical withdrawal mechanism). It is not completely neutral about the condition of the body. Maybe C-fibers are in fact the only ones that can figure in a causal sequence that culminates in withdrawing a limb from the painful stimulus, even though this fact is not transparent to us; in that case there is partial supervenience from the mental to the physical.  [2] And then we will have mutual dependence between mental and physical types, which encourages a type identity theory after all. It turns out that token identity plus supervenience is not sufficient to capture the nature of the mind-brain relation, and that we must move in a more type-committed direction. So token identity alone is no good, and one-way supervenience is no good either; we can’t avoid assuming something like type identity (possibly type composition). Maybe the brain descriptions (and the mental descriptions too) have to go beyond our commonsense categories, but we can’t avoid assuming a close relation of types—and identity seems the only clear way to go.
Of course, this will lead to the classic objections to type identity theory, but that is just the old familiar mind-body problem making itself felt. My point is that the attempt to circumvent the objections to type identity by retreating to token identity (with or without supervenience) is doomed, because (a) that is not really a form of materialism and (b) we evidently need to postulate a stronger relation between mind and brain than can be supplied by those theories.
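For reference, the supervenience claim at issue is standardly formulated as the principle that there is no mental difference without a physical difference:

            \forall x\,\forall y\,[\,P(x) = P(y) \rightarrow M(x) = M(y)\,]

where P(x) is the total physical profile of x and M(x) its mental profile. The converse direction is precisely what the doctrine declines to assert, and that one-way asymmetry is what the argument above calls into question.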

            We tacitly assume that physical types play some role in fixing mental types when we intuitively rule out the possibility that the same physical type may underlie different mental types in instances of token identity. We don’t even consider the possibility that token pains and tickles and sensations of red might all be identical to physical tokens of C-fiber firing, even though that is not logically precluded by token identity theories—because we assume that the mental is fixed in some way by the physical (hence the appeal of type identity theories). But mere token identity is quite compatible with homogeneity at the physical level combined with heterogeneity at the mental level (the analogue of soldiers of different military ranks all coming from people of the same civilian occupation).  [3] But that is not a satisfactory account of the relation between mental and physical events; and supervenience as commonly understood is not sufficient to remedy the problem. Type identity seems like the only way to go—with all the problems that are attendant upon that. Token identity theory is thus an inadequate refuge from those problems. It is too much like saying that all mental events fall under physical descriptions like “occurring n miles from the equator”: that is no doubt true, but it is not a form of materialism. Many types of mental token could correspond to the same such description (all those at a certain latitude, say), but that is not a theory of what makes mental types the types they are. The brain needs to be brought into closer proximity to the mind than that, but token identity alone is not equipped to do it. Type identity, however, is.  [4]

 


  [1] Davidson’s “Mental Events” serves as a classic expression of this type of theory.

  [2] Valves are similar: they can be made of very different materials, but they all require a physical mechanism consisting of opening and closing. Likewise all tables require a flat raised stable surface, which is a physical feature, despite varying widely in physical composition. Not all physical facts are compositional facts.

  [3] If we were to observe this situation obtaining in the brain, we would surely conclude that mind and brain have little to do with each other. The same mental type can co-occur with different physical types (“multiple realization”) and the same physical type can co-occur with different mental types (“uniform realization”). The fact that each mental token is identical with some physical token would not alter our opinion. In fact, we observe quite strong correlations and these lead us, reasonably enough, to postulate type identities; but this is not part of the logic of token identity theories. So token identity theory cannot be construed as a relaxing of type identity theory that preserves its spirit while avoiding its difficulties.  

  [4] The type of type identity might be very different from any currently envisaged or even imaginable by us: it might have to be expressed using concepts quite alien to concepts we now use to think about mind and brain. In particular, C-fiber firing may be a far more exotic thing (by our standards) than we realize; it may have hidden depths. Type identity and mysterianism are not incompatible doctrines: the brain might have properties currently mysterious to us that are type identical with mental properties.

Labile Fear

Fear is a besetting emotion. It is with us always. It is also a universal feature of animal life. Fear motivates like no other emotion. It is unpleasant, intense, and disruptive. We do well to understand it. The aspect of fear I want to focus on is its extremely labile character (OED: “liable to change, easily altered”). It is labile along two dimensions: abruptness of change, and flexibility of object. You can feel an intense and overwhelming fear at a particular time and instantly cease to feel it if circumstances suddenly change: that is, if your beliefs change (beliefs are also highly labile). This is biologically intelligible: circumstances can change rapidly and we need to update our fear emotions accordingly. You thought you were about to be attacked by a bear but you suddenly realize it is only a bush: the emotion evaporates in the instant, with barely an echo remaining. Similarly the onset can be sudden, as when what looks like a fallen branch turns out to be a rattlesnake. Again, this is evolutionarily predictable. Other emotions have more lag time, more inertia, especially attachment and love: they start more gradually, build up, and take time to dissipate. Fear is like pain: it can abruptly begin and end—and pain is one of the things we most fear (death being the other thing). Love isn’t like a sensation at all in that it has no such well-defined temporal boundaries; the closest things to it are sensations of pleasure, which may take time to take hold and time to dissolve. But fear is highly responsive to changing circumstances—hyper-labile. It is nimble, belief-dependent, and easily triggered and terminated. Phobias are a case in point: the fear can be intense in the presence of the feared object, but it quickly subsides once the object is removed. The phobic subject is not continuously assailed by fear of the phobic object if it is kept at a safe distance, but the onset is sudden when confronted by it. The point of fear is to be switched on quickly when the occasion demands and not to hang around once the danger has passed or receded.

            But it is the second labile aspect of fear that really makes it stand out. Here again there are two expressions of this: variability of object and object redirection. You can be afraid of almost anything and of nearly everything; fear is not choosy. There are people who are deathly afraid of celery or butterflies; many people are terrified of non-existent objects; the unknown inspires general dread. We are all afraid of death, disease, poverty, loneliness, failure, and rejection. I am not at all happy with heights. Again, love is far choosier: you can’t love just anything. This feature of fear seems rather counter-evolutionary: why install such an undiscriminating fight-or-flight response? Where is the biological payoff in celery phobia? Perhaps this is an overshoot of the need for flexibility of object; it is certainly puzzling (hence phobias are regarded as irrational). Freud had elaborate theories about why certain phobias exist (celery as a symbol of something genuinely dangerous). But the second aspect is particularly peculiar (in both senses)—what I called object redirection. This is a curious psychological phenomenon, though evidently common enough. I mean the tendency of fear to shift its object from one thing to another for obscure reasons. Suppose you are afraid of becoming unemployed: you then find yourself afraid of individuals of a certain ethnic group. Your fear has shifted from your own joblessness to certain people. Or you fear the police and find yourself afraid of anyone in uniform. You might recognize this as irrational but your fear mechanism has other ideas.  [1] Trauma works like this: it spreads fear around indiscriminately. Thus you are easily triggered by situations with only a slight resemblance to the original traumatic event: from gunfire to firecrackers, from near drowning to water in general. The fear spreads itself wildly from one object to another, finding similarities everywhere. You might be afraid of anyone with a certain accent because of a bad experience with someone with that accent years earlier. The spread is not entirely unintelligible, but it is certainly extreme and unruly. Whole populations can become fixated on a certain fear object as a result of their other fears. This is fear overflow, fear misdirection, fear shift. Fear will readily swap one fear object for another without much regard for rational justification. It is just too labile—too ready to attach itself to inappropriate objects. Fear fizzes away inside, searching for an outlet, and it can easily be redirected to objects not deserving it.  [2] We need a catchy phrase for this so that its prevalence can be memorably captured (compare the phrases “confirmation bias”, “cognitive dissonance”, “sublimation”, “projection”, and the like): how about “fear shift” or “fear retargeting” or “fear transference”? It is the marked tendency of fear to latch onto anything in the general vicinity—the analogue of loving any blonde person because you love one blonde person (which is not a real thing).

            We have all heard FDR’s famous statement, “The only thing we have to fear is fear itself”. Is this true—can it be true? Can we fear fear? You can be afraid that you will feel fear in the heat of battle, and you can be afraid that other people will be afraid of you and hence attack you: but can you fear fear itself? The answer would appear to be No: for what is there about fear in itself that should occasion fear? How can that emotion be a proper object of fear, any more than other emotions? Can you be frightened of hate as such (as opposed to its possible consequences)? There is nothing intrinsically dangerous about fear considered in itself: it is just a feeling. You can be afraid of the consequences of fear (you might ignobly run away when battle is joined), but the emotion itself is not fearsome. So despite the ability of fear to take objects seemingly at random, it cannot take itself as object—any more than happiness can be feared. Have you ever heard of a case of fear phobia? The statement in question is at best misleading; it must mean something like, “We should be afraid of the consequences of a certain kind of fear, such as violent action”. With respect to fear itself, it is not so labile as to be able to latch onto that.  [3] Can you fear prime numbers or remote galaxies or moral values or electrons? Doubtful—though celery and butterflies evidently can arouse real fear. So fear is not crazily labile, just pretty damn indiscriminate. In understanding and mastering it we need to be aware of its power to mutate and metamorphose and redirect, but we needn’t be concerned to curb it in relation to everything. We must not be paralyzed by fear or dominated by it or bamboozled by it, but we do need to respect its powerfully protean character. It is exceptionally plastic, malleable, and volatile, but not absolutely bonkers. Fear is not a form of insanity, though it comes close sometimes.

Freud thought that sexual desire lies behind almost every aspect of mental life, so that it needs to be understood and regulated; the more plausible view is that fear gets its talons into almost everything, so it needs to be understood and regulated. This applies as much to private life as to international politics. We don’t need to fear fear; but we do urgently need to understand it. It is reported that a few people feel no fear as a result of physical abnormality (the amygdala is supposed to be involved): it is hard for the rest of us to grasp what this must be like (envy would not be inappropriate), but certainly such individuals have a very different mental life from the rest of us. They are not plagued by this wayward, erratic, alarmingly anarchic force; they are not victims of their own cerebral fear centers. The existentialists focused on anxiety (angst) as the prime emotional mover of human life, but fear is surely the more pervasive and active force in our lives, in all its varieties and manifestations. We rightly fear a great many things, but we also unreasonably fear many things too. It is hard not to see our fear responses as a botched evolutionary job—cobbled together, out of control, riddled with design defects. Apparently, different components of it evolved during different evolutionary periods, as ecological demands changed over time; it was not intelligently designed to know its proper scope and limits. Fear is a biological mess, a simmering hodge-podge, and certainly not designed with our happiness in mind (it clearly contraindicates the idea of a divine creator). Not having it at all might not be such a bad idea. Imagine going to the dentist with no fear in your heart! You could still make rational judgments about possible sources of danger, but no more of that nasty oppressive emotion clogging up your brain. We all have to master our fear by effort of will, recognizing that it is not always beneficial; why not make a drug that simply removes it from the human psyche?  [4] Pain, yes, that seems necessary to a safe and successful life; but fear we could definitely live without. Do we really need our eye-watering fear of death? That fear is a serious blight on our life (animals are happily free of it and do quite well in its absence): we don’t need that biting searing debilitating feeling clouding our days! Fear is not something we should simply accept as a fact of life; maybe it is just a temporary aberration in human history. We could certainly do with less of it, or at any rate a more rationally ordered fear economy. Wouldn’t it be nice to live just one day without fear of any kind?  [5]

 

  [1] When does fear enter human life? It doesn’t seem to exist in the newborn, except perhaps in a very rudimentary form; it awaits the development of reason. It must be a traumatic experience when fear finally makes its appearance: “What is this horribly upsetting feeling I’m having?” The Garden of Eden was clearly a fear-free zone until knowledge and sin introduced fear into human existence. Fear and knowledge are closely intertwined: you can’t fear what you don’t know. When will it be over? Only with death apparently: then fear will be no more.

  [2] Thus fear is easily manipulated: it is just so mobile and malleable. Fear is the secret weapon of the dictator—his or her raw material (fissile material, we might say).

  [3] What if someone said, “I am not afraid of anything except being afraid”? Wouldn’t we reply: “So you aren’t afraid of the consequences of being afraid either but just of the emotion itself—that makes no sense”. The only way we could be afraid of fear is in virtue of its unpleasant phenomenology—its kinship to pain. But could we be terrified of that unpleasantness? It is like the idea of being in love with love: could you be desperately in love with it? It can’t be just like other love objects. Metaphor is at work in such locutions.

  [4] This drug would make Heidegger’s philosophy virtually obsolete. And could Kierkegaard have written Fear and Trembling and Sickness unto Death?

  [5] Then too, there is the question of shame: people tend to be ashamed of their fears and don’t like talking about them. Do we really need the burden of shame in addition to the fear that prompts it? Aren’t we burdened enough already?

Identity

Philosophical logicians usually distinguish between qualitative and numerical identity. The former can hold between one object and another, meaning exact similarity (we can also define a notion of partial qualitative identity). Numerical identity (which from now on I will simply call identity) is supposed to relate objects only to themselves: nothing can be identical, in this sense, to an object that is not it. It is supposed that every object stands in this relation to itself, using “object” in the most capacious sense to include numbers, properties, functions, processes, etc.  Identity appears to hold even between fictional objects and themselves—Sherlock Holmes is identical to himself. So the relation of identity is absolutely universal; moreover, it is necessary—everything is necessarily identical to itself. This is not true of qualitative identity, since it can be contingent that two objects are exactly similar. It is commonly accepted that the identity relation holds trivially of everything: just by being something an object is self-identical. For this reason some people have felt that identity is a pseudo relation—that there is something suspicious about it. It does seem exceptionally uninformative to be told that an object is identical to itself (tell me something I don’t know!). Anyway it is supposed that we know what we are talking about: we know what “identical” means in this special sense—to the point that we can recognize the concept as fishy in some way. But do we really know what identity is in the intended sense? Do we have a genuine concept of identity? Can we articulate what we mean by the word?
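In the usual notation, supplied here only for orientation:

            \text{Numerical identity:}\quad x = y, \quad \text{with}\ \Box\,\forall x\,(x = x)

            \text{Qualitative identity:}\quad x \approx y \;\equiv\; \forall F\,(Fx \leftrightarrow Fy), \quad F\ \text{ranging over qualitative properties}

The first is universal and necessary; the second, because of the restriction on F, can hold contingently between distinct objects.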

            It might be thought that we have a number of possible avenues of explication available: Leibniz’s law, a famous dictum of Frege’s, and the involvement of sortal concepts in identity.    [1] Taking the last first, it is sometimes said that identity statements are incomplete without the specification of a sortal, as in “Hesperus is the same planet as Phosphorus”; accordingly, identity is sortal relative, or at least sortal dependent. Thus we can explicate the nature of identity by saying that it essentially involves the kinds of objects—same planet, same animal, same number. It is not just an elusively bare abstract relation that holds indifferently of everything there is (and is not); it has specific concrete substance (in the Aristotelian sense of “substance” as well). But this doctrine cannot be right: it confuses identity statements and identity facts. Maybe statements of identity need sortal supplementation (or maybe not    [2]), but the nonlinguistic fact of identity surely does not. How can an object’s identity with itself depend on its kind? What does that even mean? Does it mean that there is no relation of identity except one that incorporates a sortal kind—as with planet-identity, animal-identity, and number-identity? But it is hard to see what this is supposed to mean: aren’t these all instances of identity tout court? Isn’t there an overarching relation of simple numerical identity? Nor is it clear how much elucidation this doctrine affords: we are still left wondering what the import and point of identity is supposed to be. I am the same human being as myself: big deal, what’s the point of saying that? We have a bunch of sortal-relative identity relations, but we still don’t know what they are exactly—and why objects bother to instantiate them. What does it mean to say that x is the same F as y? What is this sameness with oneself?
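The sortal doctrine, and the deflationary reading the paragraph presses against it, can be put schematically (my rendering, not the text’s):

            \text{Sortal-relative identity:}\quad x =_{F} y \quad (\text{“}x\ \text{is the same}\ F\ \text{as}\ y\text{”})

            \text{Deflationary reading:}\quad x =_{F} y \;\equiv\; x = y \,\wedge\, F(x)

If the second line is right, sortal identity is just identity tout court restricted to a kind, and the sortal contributes nothing to the relation itself.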

            Here we reach for Frege’s famous dictum: identity is that relation a thing has to itself and to no other thing. This is ritually intoned, as if it contains self-evident wisdom, but it is not critically examined. The thought is that other equivalence relations don’t satisfy the definition because they can relate an object to other objects: for example, I am the same height as myself, but also the same height as other people—whereas I am identical to myself, but not identical to anyone else. The identity relation can only relate an object to itself, but other equivalence relations can hold between an object and itself and other objects. There are three problems here. First, what about a universe empty save for one object? In such a universe sameness of height does not relate me to other objects, since there are none—so it would count as the identity relation according to Frege’s dictum. Intuitively, what have other objects got to do with my self-identity? Not relating me to other objects can hardly count as essential to my identity with myself. Second, the dictum is circular if offered as a definition: for we need to understand what is meant by “other things”. Surely this phrase means “things not identical to the given thing”, but then the concept of identity is being presupposed: you already have to grasp what identity is before you can understand Frege’s dictum. Third, and most telling, the dictum doesn’t single the identity relation out even in the actual world; other relations satisfy Frege’s condition. Take the part-whole relation: certain objects stand in this relation to me, but they don’t stand in this relation to anyone else. My right arm is part of me but not part of anyone else—so the part-whole relation holds between me and parts of me but not between me and parts of other objects. You might object that the part-whole relation doesn’t relate me to myself but only to parts of me (unlike the same-height relation), but consider the relation of improper part: that does relate me to myself but not to anyone else—I am not a part, proper or improper, of anyone else. Yet part-hood and identity are not the same relation. This could have been expected on intuitive grounds, because Frege’s dictum is very general—identity is being said to be any relation that relates an object to itself but not to anything else. That is unlikely to single identity out uniquely, save per accidens (this is why it fails for the single-object universe). The dictum fails to capture the specific idea of numerical identity. Not that Frege meant it as a strict definition (he was too circumspect for that) but more as a useful heuristic; in any case, it doesn’t help with the task of giving the notion of identity clear content. We can’t complacently cite the dictum as explaining what that concept consists in.
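The dictum and the objection to it can be displayed as follows (a schematic rendering):

            R\ \text{is identity iff}\quad \forall x\,Rxx \;\wedge\; \forall x\,\forall y\,(Rxy \rightarrow x = y)

The “=” in the second conjunct is just the gloss of “no other thing”, which is where the circularity surfaces. And reflexive parthood, where x is a part (proper or improper) of y, satisfies both conjuncts when restricted to persons, since I am a part of myself and of no one else; yet parthood is not identity, merely coextensive with it per accidens.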

            Third, we have Leibniz’s law of the indiscernibility of identicals: does this tell us what we are talking about when we talk of identity? The law has the advantage of being true, indeed necessarily true, but it is limited as a method for explaining the concept of identity. It offers only a necessary condition to begin with, and the converse principle is far from self-evident on a natural understanding of it (i.e. the identity of indiscernibles). But the main problem is one of triviality: what precisely does this law assert? It is awkward to state because we have to say something like, “If two objects are (numerically) identical, then they must share all their properties”. Two objects? We blushingly shift to using variables: “If x and y are identical, then they must share all their properties”. But this is scarcely any better—x AND y? What is really meant is just that an object is always exactly similar to itself: an object is always qualitatively identical to itself. Where there is numerical identity there is qualitative identity. True enough, but does it help with understanding the concept of (numerical) identity? We are being told that objects always have the properties they have and no others—again, that is hardly news. But worse, it uses qualitative identity to explain numerical identity: it derives the latter concept from the former. Construed as an effort to get a handle on identity proper, it invokes qualitative identity, stating weakly that objects are always exactly like themselves. Surely we are entitled to expect something better—something meatier, more apropos (but see below). So we are still lacking any decent account of what this alleged special relation of numerical identity comes to—some kind of elucidation, analysis, insight. Instead we just have the identity relation staring blankly and inarticulately back at us, hoping we will somehow get the hang of it. It seems unnervingly self-effacing.
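
            For reference, the two principles in play can be written out in the usual notation:

Indiscernibility of identicals (Leibniz’s law): ∀x∀y (x = y → ∀F (Fx ↔ Fy))
Identity of indiscernibles (the doubtful converse): ∀x∀y (∀F (Fx ↔ Fy) → x = y)

The triviality complaint is then visible on the page: instantiate both variables to a single object a and the law collapses into ∀F (Fa ↔ Fa), which says only that a is exactly like itself.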

            At this point the disturbing figure of the skeptic enters the conversation: what if there is no such relation as identity? It is proving so elusive because it isn’t really there. What do we mean when we say an object is identical to itself—what are we thinking? Nothing, according to the skeptic: our wheels are spinning, our thought process deceiving us. It might be contended, skeptically, that the concept has its origin in certain epistemic and linguistic practices but that it has no reference in objective reality; it is a kind of illicit projection, a phantasm of the intellect. Things are not round and heavy and red and self-identical: that last is just not a real property of things, but a reification of our epistemic and linguistic practices. We often don’t know that we are dealing with a single object and can therefore discover the truth of a statement of the form “x is identical to y”, but that doesn’t imply that the real world contains objective identity relations. The concept of identity is useful to us in recording our epistemic dealings with the world, but it shouldn’t be taken to denote a genuine constituent of objective reality—for what kind of constituent is it? Do we see or touch it, or need it in our scientific theories, or feel it in ourselves? Why not just admit that it isn’t part of a truly objective conception of things (perhaps rather like the commonsense concept of an object)? There are objective similarities between things, to be sure, and we can speak of things as indistinguishable, but the idea of numerical identity is a chimera. So says the skeptic, and he is not without rational grounds for his opinion. However, the position is extreme and I am inclined to suggest something weaker, though in the spirit of the skeptical position. This is that talk of numerical identity is best interpreted as an extension of the concept of qualitative identity, which is perfectly meaningful: to say that an object is identical to itself is just to say that it is exactly similar to itself. Two distinct objects can be exactly similar, thus warranting talk of identity between them (“these two balls are identical”), and a single object can be exactly similar to itself too. So there is not an extra primitive relation in the world called “numerical identity”; this talk is really just the application of the concept of qualitative identity to solitary objects. Every object is qualitatively identical to itself—that is, every object is self-identical in just the sense that two objects can be said to be identical. There is really just the concept of qualitative identity, and it can hold between distinct things or one thing and itself. Of course, statements of qualitative identity between an object and itself are trivially true, but then so is the proposition that every object is self-identical. An advantage of this way of seeing things is that we need not recognize any ambiguity in the word “identical”: it always means so-called qualitative identity. And there is little intuitive plausibility in the view that “identical” varies in meaning as between a numerical and a qualitative sense. If this position is correct, there is no identity relation such as philosophical logicians have supposed—no separate kind of identity; there is just a single relation of similarity—but objects can stand in this relation to themselves. To be self-identical is to be self-similar. I am completely similar to myself, hence “self-identical”.
We can easily specify what this identity relation consists in: the sharing of properties. We know what properties are and we know what sharing them is—well, that is what identity is all about. This relation can hold between several objects and it can hold between a single object and itself. If I say that I am identical to myself, I am saying that I am exactly similar to myself—just as I can be similar to other people (perhaps exactly similar). The statement is no doubt peculiar, because hardly disputable, but it is the interpretation that makes the most sense of identity talk; it’s either that or skepticism about the whole concept. We could try maintaining, feebly, that the concept of numerical identity is primitive and inexplicable—simply not capable of any articulation—but that seems unattractive in the light of the alternatives. It is preferable to hold that so-called numerical identity is analyzable as reflexive qualitative identity. After all, that relation clearly exists and has a clear content—why introduce anything further?
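
            The proposal fits into a single formula (the notation is mine). Define qualitative identity as the sharing of all properties:

x ≈ y  =df  ∀F (Fx ↔ Fy)

So-called numerical identity is then just the reflexive case: to say that a is identical to a is to say that a ≈ a. One relation, ≈, does all the work, whether it holds between two exactly similar balls or between a thing and itself.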

            What are the consequences of this revision in the way we think of identity? All those puzzles of identity must now be recast in terms of self-similarity, as must the idea of a criterion of identity. This may not (should not) make a difference to the substantive issues, but to be clear in our minds we should think in the recommended terms. There is nothing real to identity over and above self-similarity. And since philosophy is very largely concerned with questions of identity, particularly the identity of concepts and properties, the revision must have an impact on how we understand philosophical questions. Concepts (meanings, intensions, properties) can be exactly similar to themselves, this being what concept identity comes down to. If the concept of knowledge, say, has a property not possessed by the concept of true justified belief, then the two concepts cannot be identical; for then the concept of knowledge would not be qualitatively identical to the concept of true justified belief. Identity is always qualitative identity, so concepts can’t be identical unless they share the same qualities (this is Leibniz’s law in another form). In a way the concept of identity already contains Leibniz’s law, because what it means to say that x is identical to y just is that they share the same properties. It is not some further tacked-on thesis that identical objects are always exactly alike: self-identity simply is sharing the same properties—x being identical to x just is x being qualitatively identical to x. This is why Leibniz’s law is so self-evident: it is really a kind of tautology. This is as it should be.    [3]
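
            In the notation introduced above, the tautology is explicit: if “x = y” is read as “x ≈ y”, Leibniz’s law becomes

∀x∀y (∀F (Fx ↔ Fy) → ∀F (Fx ↔ Fy))

an instance of the logical truth A → A. The law is not a substantive discovery about identity; it is the definition of identity read back to us.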

 

    [1] I have consigned to a footnote another familiar attempt to explicate identity because the attempt barely gets off the ground and is lamentably confused, namely that identity is a relation between signs. That is, for objects to be identical is for them to be the single denotation of two terms. The trouble, obviously, is that object identity can’t depend on language. Still, the suggestion is helpful in illustrating what a substantive account of identity might look like: at least we are given a nontrivial analysis of the concept (just a wrong one). Compare: identity is the co-reference of ideas in God’s mind—substantive enough but none too plausible.

    [2] The sortals go into the fixing of reference, not into the type of identity relation involved, as in “that elephant is identical to that elephant” said while pointing to different parts of the same elephant. But objects cannot be incomplete in the way bare demonstratives are.

    [3] Old hands will see the imprint of several philosophical logicians on what I write here (Geach, Wiggins, Kripke, and others). It should be evident that what I say is radical to the point of heresy—I myself have always assumed that numerical identity is transparently a concept in good standing. I am as shocked as anyone by the skeptical reflections herein sketched (contrast my Logical Properties (2000), chapter 1).


 

Blind Consciousness

 

Consciousness is information-laden. Not only does it supply information about the external world, it also informs us about itself and our own body. In being conscious we find out about the world outside us and about our own subjective state and bodily condition. The faculties used to acquire these sorts of information are usually labeled perception, introspection, and proprioception. Consciousness is thus highly informative, a source of knowledge (do we have any other source?). It seems geared to disclosing facts about the world, providing us with a torrent of data; it is information-rich. But there is one area in which it is conspicuously silent: information about our brain states. Nothing in consciousness reveals what is going on in the brain when consciousness itself is operative. True, we can look at the brain from the outside by opening up the head, but we receive no information about the brain simply by being conscious: no amount of concentrating and training can enable you to discover what is going on in your brain just by surveying your current state of consciousness. So far as consciousness is concerned, you don’t even have a brain—though assuredly consciousness affirms the existence of the external world, itself, and the body. This is a strange state of affairs, because the brain is surely, to put it mildly, closely involved in the activities of consciousness. Nothing is closer: everything that happens in consciousness depends on the brain (on neural activity), minutely and inescapably. Consciousness is basically a function of the brain—an aspect of the brain. You would think, then, that this alliance would show up in consciousness: consciousness would make it evident where its foundations begin and end. But no, nothing: it is as if consciousness refuses to acknowledge its origins in the brain. It won’t admit to its cerebral roots, keeping them discreetly out of the field of awareness. Nor are there any pathological conditions in which the activities of neurons force themselves into consciousness—no abnormalities in which the facts of neural life intrude upon the conscious state. No one ever suffers from “brain suffusion”—a sudden sense of cerebral susurration. It may be wondered why this is—why the deafening silence about the brain’s role in generating consciousness? Is it because it would be distracting and pointless? But consciousness is always distracting us from the main focus of the moment, and it might sometimes be useful to know what is going on inside your brain (injuries, diseases, etc.). This looks like a puzzle of nature: why the selective blindness? Consciousness is generally generous with information, but here it is stingy to the point of absolute silence. It is as if it has been designed to be selectively blind—as if the brain is a taboo subject. One would think it was ashamed of the brain, like a mad relative in the attic.  [1] Nor is this selective blindness a necessary truth: we can imagine that it was not so—a species of sentient beings might well enjoy consciousness of its own brain, treating this as completely routine, like our awareness of our limbs or lungs. Knowledge of the brain might be part of folk psychology.

            What are the implications of this selective blindness? And what is its general character? You might think it provides an argument against materialism: for if materialism were true, wouldn’t its truth be evident to consciousness? Is it that consciousness supplies no information about the brain because it isn’t the brain—because it isn’t a material thing? Maybe so, but note that consciousness tells us nothing about a supposed immaterial substance either: it is equally blind about that. It is blind about whatever constitutes its underlying basis; it turns a blind eye towards its underpinnings. In any case, consciousness certainly has neural correlates, and it systematically ignores these correlates. The point concerns the epistemic capacities of consciousness, not its metaphysics: it has a blind spot. It systematically conceals from itself its origins in the brain. Perhaps there is an inhibitory mechanism in the brain that prevents the brain from knowing about itself via consciousness: switch off that mechanism and knowledge of the brain will come flooding in. More likely, there are just no receptors capable of processing information about the brain in the brain—no “cerebroreceptors” analogous to photoreceptors. Whatever the mechanics of the matter may be, consciousness evidently declines to dabble in its cerebral origins. Is it because consciousness itself has a nature that precludes such awareness? Does it form a kind of barrier to knowledge of the brain? People have felt that perceptual consciousness works as a barrier in relation to the external world (hence the sense-datum theory); perhaps it also acts as a barrier to the world of the brain. We can picture consciousness in one of two ways: it is either like a thin translucent film clinging to the brain, or it is like a thick opaque fabric cloaking the brain. If the former, then it should provide glimpses of what lies beneath—fissures, furrows, axon-nucleus-dendrite structures. If the latter, then it is simply too thick to afford any such glimpses: more like a brick wall than a wispy veil. I think there is something to this thickness intuition, though it is hard to articulate it clearly: one has the image of layers of consciousness forming a dense membrane that shrouds the brain in darkness. But then why is it that the barrier only exists where the brain is concerned and not in relation to the external world or the body? The question quickly becomes hopelessly obscure. Yet it seems true that consciousness is not like a diaphanous veil in relation to the brain but more like a dense and opaque shield or mask. This is the ontological counterpart to the epistemic point that consciousness doesn’t reveal the brain even slightly. Perhaps we should be struck anew by the miraculous ability of consciousness to reveal the external world, it being in here and the external world being out there: how does this preternatural reaching-out work?  [2] It does feel somehow natural to suppose that consciousness is a self-enclosed reality (which is why Brentano’s thesis of intentionality is so striking): it has often been supposed that it can only ever contain information about itself. In any case, the matter is obscure and difficult, though worth thinking about. At present, I conclude, the explanation of brain-blindness must remain mysterious: it is evidently a fact, and a deep fact, but it is hard to see what accounts for it. It seems like an inexplicable contingency, yet deeply rooted—one of many mysteries of nature.

            What I think I can do is provide an account of what this selective blindness consists in—what kind of phenomenon it is. And this will lead us to a property of consciousness that has not heretofore been recorded. I spoke earlier of a blind spot: the literal meaning of this phrase concerns the physical eye and its physiology. Each eye has a spot on the retina that contains no photoreceptors, where the optic nerve joins the retina: nothing can be seen at this spot, though we don’t normally notice it. The eye is thus selectively blind for hard anatomical reasons. It is easy to see that if the retina suffered damage in any region, a similar blind spot would result, and indeed this occurs in certain circumstances. In cases of macular degeneration (and in other conditions) just such a blind spot results—it is called a scotoma by neurologists. From a phenomenological point of view this localized blindness does not appear as a gap in the visual field—a blank space with perceived boundaries—but rather simply as an absence of vision. It is as if a certain part of the visual field disappears, but without any awareness of an empty patch. The blind spot itself can thus be conceived as a kind of natural scotoma, though one that is harmless enough. This concept has proved sufficiently useful that it has been extended to refer to other sorts of selective blindness, in particular personality scotomas, in which a person is unaware of facets of his or her personality that are evident to others. It is a form of ignorance (“neglect”) that stems from a perceptual lack—a type of blindness. The word “scotoma” comes from a Greek word for darkness, and the concept is apt: the neglected aspect of reality lies in darkness for the would-be perceiver. We are familiar with the phenomenon of degraded vision at the periphery of the visual field (“semi-scotoma”, as we might call it); in full scotoma we have a complete absence of vision at a specific locus of the visual field, more or less extensive. Scotomas are generally acquired, but they could be genetic: we can imagine someone born with a rather large scotoma at a particular place in the visual field. Unlike the totally blind, the person with a scotoma has undiminished awareness surrounding the affected area—so they can have normal visual acuity except in a certain location.

You can see where I am headed with this: brain blindness is a type of scotoma. That is, our inability to know about the brain via consciousness is an example of a scotoma at a higher level of cognitive function (it doesn’t concern the anatomy of the eye). It is a natural scotoma that we are born with, not the result of injury or disease, but it functions just like other sorts of scotoma. We can imagine someone born with awareness of his own brain who then suffers an injury that removes this area of awareness: that would be the analogue of macular degeneration. As it is, our brain blindness is like our genetically determined blind spot—just the way we are hooked up and hardwired. We are born with a kind of partial blindness. In both cases the blindness is surrounded by normal vision, indeed by excellent information reception, but it just so happens that there is a confined area of total blankness. We suffer from a scotoma with respect to the brain, which we take for granted. Beings naturally equipped with brain awareness might pity us for our selective blindness, but we have known nothing else so we take it in stride (compare a land of the exclusively color blind). Presumably this is true of all terrestrial animals, so we can say that all life on earth suffers from brain blindness: terrestrial sentience is scotoma-prone sentience. Consciousness contains a big gap where the brain ought to be.  [3] And ironically this gap is precisely where the very origins of consciousness lie: consciousness thus has a blind spot for what makes it possible. We have no awareness of what makes awareness exist in the first place. The relative in the attic is actually our own nature as sentient physical beings. We are systematically blind to what enables us to see. A fine conceit, to be sure: our maker must be tickled pink. Consciousness is constitutionally blind to its conditions of possibility.  [4]

            It is inviting to postulate that consciousness has a characteristic not usually remarked upon: it is scotoma-prone. It is certainly rich in information content, acutely receptive to large sections of reality, but it is also selectively blind to certain parts of reality, for reasons not easy to fathom. Consciousness has intentionality, subjectivity, privacy, unity, a subject, and blind spots. It is, we might say, scotomic: gappy, holey, dark in places. In particular, it is blind to its own enabling conditions in the brain. The brain exists in close proximity to the conscious mind, closer than any other bodily organ, but for some reason consciousness refuses to disclose any information about the brain: it is studiedly vacant on the question. Its epistemic field contains a gaping hole where the brain ought to be. We can try to direct our consciousness to this area of neglect, or even attempt to train ourselves to be more receptive, but we come up empty-handed. It really is as if we have an area of unalterable blindness where the brain is concerned. The only way we can know anything about the brain is by observing it from a third-person point of view. Of course, if the mind is really just an aspect of the brain, then we know about this aspect of the brain by introspection; but what we don’t know are the other aspects—those that involve neurons and their electrochemical processes. We don’t even know these aspects partially from the first-person point of view, as is the case for other bodily organs; our ignorance of the brain is far more principled. And this is particularly puzzling given that the brain is the closest thing to the mind in the entire universe. Shouldn’t we be aware of it at every moment? Why isn’t it constantly on our mind? It is as if the mind were intentionally keeping the brain a secret from us.  [5]

 

  [1] It might be thought that the brain just happens to be one of those organs of the body that keeps itself to itself, like the spleen or kidneys, so that there is nothing remarkable about its invisibility to consciousness. But (a) even those organs sometimes reveal themselves in abnormal circumstances (often in the form of pain) and (b) the brain is in its nature right next to consciousness, so that its doings could hardly be missed—yet it remains unrevealed.

  [2] This might be the central question in the philosophy of perception, which is a lot harder than people generally realize.

  [3] Another possible scotoma might be the unconscious: we have no inner eye capable of revealing the unconscious, even under special conditions. We have to infer the unconscious, not know it by direct introspection. If we think of conscious awareness as a scanning device, then we can say that it is unable to scan the unconscious or the brain—though it can scan the external world, itself, and the body. It suffers from scanning gaps.

  [4] To repeat, I am not including using our actual eyes to look at the brain from the outside, a highly unusual form of knowledge as things normally stand.

  [5] Could it be that the conscious mind also keeps the self a secret? That is, do we have a scotoma with respect to the self? Hume famously found nothing when he searched within himself for himself, concluding (on one interpretation anyway) that there is no such self to find. But an alternative view would be that we have a blind spot with respect to the self: the self exists all right, but it doesn’t disclose itself to our spotty introspective powers. Selective blindness is thus wrongly interpreted as non-existence. One can imagine the same conclusion being drawn about the brain if we lacked the ability to observe the brain perceptually: we simply don’t encounter it in introspection. That is true enough, but it is false that the brain doesn’t exist. Likewise the self exists but at an introspective blind spot. So that makes three scotomas of consciousness: the brain, the unconscious, and the self. Any more? 


 

 

Philosophical Destruction

 

The destructive impulse is particularly conspicuous in philosophy. We are forever refuting, criticizing, rejecting, disagreeing, ridiculing, dismantling, tearing down, cutting to pieces, grinding to a fine powder, annihilating, and otherwise smashing to smithereens (or sometimes mildly amending and carefully reformulating). We also construct and create, but a lot of the time our aims are less positive. Why is this? It doesn’t seem to be so in other disciplines, or at least not to the same degree: physicists and biologists don’t spend most of their time ripping each other to shreds, or even correcting and revising what others have to say. They are too busy in the lab or field, discovering things, contributing to human knowledge; but we philosophers always seem to be at each other’s throats. Thus critical acumen is much prized in our field—so-and-so is said to be very “sharp” and a demon with the deft counterexample. Reputations can be built around one “devastating” criticism. Entire philosophies can be primarily destructive: empiricism, positivism, ordinary language philosophy, Wittgenstein in both his periods, Quine much of the time, Berkeley’s idealism, Hume’s skepticism, existentialism, neurophilosophy. There is nothing wrong with that: philosophy really is a highly critical discipline. The whole subject is infused with negativity. Is it because philosophy is so hard? Is it that philosophical problems are so resistant to solution? It’s easier to criticize than to construct, to destroy than to create. We are frustrated by our chosen subject, but we can at least vent our frustrations in bouts of mutual destruction, otherwise known as civilized debate. This can be enjoyable enough, thrillingly ego-driven, and even fairly amusing—better than banging your head against a brick wall. I get a kick out of it anyway. The predominant feeling you have when trying to construct something in philosophy is a sense of vulnerability: someone is going to destroy what you have struggled to create, undermining your carefully constructed arguments, and gutting you intellectually. This is not a fun feeling; better to rip into someone else and watch the feathers fly. Yes, it’s faintly discreditable, a bit of a cop-out, but at least you are achieving something with your time (that arduous education has not gone completely to waste). Or you could just become an historian.

            But let us analyze the destructive impulse in philosophy further: what is philosophical destruction? What are you aiming to do, and for what purpose? The OED puts us on the right track (as always): “destroy” is defined as “put an end to the existence of (something) by damaging or attacking it”. We are informed that the English word derives, via the Old French destruire, from the Latin destruere, formed from de expressing reversal and struere meaning “build”. Thus “destroy” literally means “de-build”. It is like “depopulate” or “desegregate” or “deactivate” or “deconstruct”. This suggests that you cannot destroy what has not been built (though that concept is quite flexible and is not limited to human artifacts): it sounds funny to speak of destroying a heap or a wreck or a mess or even a random chunk of rock—for none of these things bespeaks constructive purpose. You primarily destroy what has been created, generally with a purpose in mind: human artifacts, human lives, animals, plants, buildings, systems of thought, arguments. Destroying is the opposite of building. Also, according to the dictionary, it involves two elements: existence and damaging by attacking. First, the thing destroyed has to exist: you can’t destroy what has no existence (fictional characters, hallucinated rabbits, meaningless jumbles of words). Second, the destruction (the “putting out of existence”) is accomplished by means of an aggressive act—attacking and damaging. This latter point is important: there is no avoiding the involvement of violence in philosophical criticism. You can’t destroy (demolish, decimate) an argument or a position without doing violence to it—as you can’t destroy a material object without doing violence to it. We may as well own up to this and not pussyfoot around: philosophical destruction is inherently aggressive, necessarily a form of attack (not on the person, of course, but on the position being destroyed). The aim is to put that position out of existence, i.e. refute it—overtly and publicly. Ultimately this means that we want to destroy belief in that position: we want to take existent beliefs and put them out of existence. That is what it is to put a philosophical position out of existence—to destroy belief in it. If I am trying to refute a certain position, I am aiming to destroy whatever belief others may have in that position: I am trying to eliminate a part of the other’s mind, or a component of his or her mental state. This is why it is apt to speak of destruction in connection with philosophical (and other intellectual) criticism: criticism is not intended to leave your mind unchanged, your beliefs happily intact. Let me be a touch melodramatic: refutation is murdering beliefs—putting them out of commission, consigning them to the Big Sleep. It is outright belief annihilation.    [1] So there is always something pugilistic and gladiatorial at play in philosophical disputation: a life is at stake, namely the life of the participants’ beliefs (opinions, commitments). Something is liable to go out of existence during a philosophical argument. No wonder there is resistance, tension, risk, fear, self-defense, and an atmosphere of combat: something’s very existence is at stake, often beliefs that have taken a lifetime to arrive at (all that work and puff!).
We can also note that there is something godlike about the activity: the creative artist is often compared to God on account of her ability to bring things into being apparently from nowhere, but the power to destroy is also a feat of existence alteration—putting things out of existence. God has this power too, in abundance, and we humans can exercise it ourselves in more limited ways. No doubt the literal murderer relishes exercising this awesome power. Just so the philosophical “murderer” can relish his power to destroy positions and belief in them: look how I just annihilated that poor shmuck’s position! We humans have power over existence (as existence has power over us) and philosophical creation and destruction partake of that power. Don’t you think the positivists relished the destructive power of their philosophy? They put those babbling metaphysicians to the guillotine! They didn’t have much that was constructive to put in its place (the dirty little secret of positivism), but they could certainly do a tremendous amount of damage to existing belief. Their weapon of choice was the verification principle: wielding this sharp-edged sword they could cut traditional philosophical thought to pieces. It was a rotting zombie anyway, in their considered opinion, so it may as well be put out of its misery. On a lesser scale of destructive euphoria, we have the work of Strawson on Russell’s theory of descriptions, Quine’s “Two Dogmas of Empiricism”, Gettier on the analysis of knowledge, Kripke on the description theory of names, and Grice on conversational implicature—to pick some examples pretty much at random. And there is nothing in principle wrong with such destruction (or purported destruction), with such aggression and annihilation; as I remarked, it is an essential part of philosophy. Philosophy is supposed to be destructive (though not only destructive).    [2]

            And here we run up against a further category of philosophical destruction (perhaps my personal favorite): destroying the destroyer. The positivist gunman swaggers into town ready to do some serious destroying: in his sights lie all traditional metaphysics, much of morality, and a good deal of science. Then the wily sheriff (silent, gruff) steps nimbly forward to stem this tide of destruction, revealing the blustering stranger as a fake and a wimp—a no-account loud-mouthed nobody. His draw is slow and his aim shaky: he gets handily refuted before he can do much damage (though there was that unfortunate business with the town drunk in the saloon, poor old Jeb). This is what was so gratifying about Grice on implicature and Kripke on necessity: they destroyed the would-be destroyers. They wielded superior weapons and exhibited superior skills, and the opposition went down. In so doing they resurrected ideas deemed extinct, bringing traditional questions back to life. For it is never a happy moment when a philosopher kills off an idea that doesn’t deserve that fate; and we welcome the savior who restores to life what had been thought extinguished for good. Even the problem of consciousness has been thought extinguished, only to come roaring back to life once certain misguided ideas have been exploded. So we must always be on the lookout for opportunities to reverse previous acts of wanton destruction (or alleged destruction). I would say the same for a lot of what Wittgenstein has been thought to have terminated; ditto for Quine. So I particularly relish the dismantling of such would-be destroyers. Destruction sometimes needs to be destroyed in turn.

            We should distinguish two sorts of destructive philosophical act: destroying existing philosophical theories and destroying common sense (or possibly parts of accepted science). Philosophers are generally perfectly comfortable with the first sort of destruction, but the second is regarded as far more problematic. Protecting commonsense belief from destruction may involve destroying the arguments of anti-commonsense philosophers. It is, of course, controversial what counts as part of common sense (Berkeley’s idealism being a famous case), but we usually know if commonsense belief is being criticized. Overtly nihilist positions are typically destructive of common sense, intentionally so: asserting that nothing exists must surely count as destructive in this way. This is the analogue of destroying noncombatants in a war: other philosophers are soldiers in wars of mutual destruction, but ordinary folk are the equivalent of peaceful civilians. They are more likely to be invoked in the battle against opposing philosophers than made victims of destruction themselves. Scorched-earth tactics against civilians are not most philosophers’ style, while the belief systems of other philosophers are considered fair game. If you go into philosophy, this sort of aggressive action is only to be expected. You can avoid it only by not sticking your neck out, possibly limiting yourself to destructive philosophy and never venturing anything constructive of your own (though you must cover yourself against destroyer destroyers). Frege is a good example of a philosopher who does the opposite: though he certainly offers criticisms of positions he is against (e.g. psychologism), he mainly tries to construct something positive. He erects an impressive edifice, systematic and precise, thus exposing his creation to possible destructive criticism (Russell’s paradox was surely a heavy blow). He is perhaps the most constructive analytical philosopher ever. Wittgenstein, by contrast, is relatively destructive. Russell lies somewhere in between. Quine is largely destructive, with occasional gleams of creativity. Kripke is a bit of both. Socrates was entirely negative. Plato and Aristotle leaned positive. And so on: each philosopher in the canon can be considered as a destroyer or as a creator. I myself am quite fond of a bit of philosophical destruction, but I also have a weakness for construction—so I am well aware of destructive efforts aimed in my direction (again, nothing wrong with that).    [3]

            What are the devices of philosophical destruction, its techniques and technology? They are well known to the trained philosopher: detecting fallacies, exposing non sequiturs (both subtle and gross), producing counterexamples, spotting use-mention confusions and type-token errors, and so on. With these weapons we carry out our destructive work, and valuable work it is. Logic is best seen in this light: it is a device of destruction, the equivalent of a deadly weapon of war. It isn’t used much for constructive purposes (not counting logicism and ascriptions of logical form), but it is the bread and butter of discursive demolition. Logic is what we wield when we set about demolishing a position—its main purpose is destructive. It is concerned with exposing logical fallacies rather than constructing logical arguments; its function veers negative. It is all about what does not follow. This should be built into the way logic is taught: it is a machine for winnowing out faulty reasoning. It isn’t a method for having new ideas but a device for destroying existing ideas (bad ones). I mean this in a broad sense of “logic”—not just propositional and predicate calculus but informal logic too (also induction and abduction). Logic is concerned with evaluating reasoning, and evaluation implies criticism, which implies destruction. Accusing someone of begging the question, say, is a destructive act: what the speaker just said has been reduced to rubble. Again, we should not pussyfoot around about this: philosophical expertise is like the expertise of a demolition man—both are good at destroying buildings, physical or intellectual. Both do valuable work, by clearing sites of rickety old buildings so that new ones can replace them. Philosophical destruction, recognized as such, is nothing to regret or feel ashamed of. It is the engine of truth.    [4]
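
            A textbook instance of what does not follow, the sort of thing the winnowing machine is built to catch:

From “if p then q” and “q”, infer “p”: invalid (affirming the consequent).
From “if p then q” and “not-q”, infer “not-p”: valid (modus tollens).

Much destructive criticism consists in showing that an opponent’s argument has the first form while trading on its resemblance to the second.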

 

    [1] There are echoes of Popper’s “critical philosophy” here—the emphasis on refutation and falsification instead of verification. Popper thinks that scientific progress occurs mainly by a process of elimination, much like natural selection: we don’t proceed by confirming good theories but by falsifying bad ones. Destruction of accepted theories is thus the engine of scientific advance. We might call this “discovery by destruction”. Philosophy, for a Popperian, is similar: the elimination of theories that fail to stand up to criticism. The counterexample is the engine of philosophical advance. Aggressive criticism is the means of achieving philosophical truth.   

    [2] Let me indulge in a medical analogy: it is part of medicine to destroy pathogens, the better to improve health. This destruction is essential to medicine and not at all sinister. Wittgenstein compared philosophy to therapy, which emphasizes improvement on the part of the patient; but he never explicitly drew attention to the point that destruction can also be part of the process (though he was himself flagrantly destructive). In fact, even Freudian psychotherapy has a destructive component, since harmful neuroses and repressions need to be destroyed (talking was supposed to do that). Isn’t psychiatry really concerned with the destruction of mental illness, using drugs, ECT, brain surgery, behavioral modification, and whatnot?

    [3] If I ask myself what are my favorite examples of philosophical destruction, the answer comes back: Frege’s criticism of Mill’s theory of numbers, and Leibniz’s criticism of Locke’s theory of ideas. And who could not love Chomsky’s demolition of Skinner’s Verbal Behavior (though this is not strictly philosophy)? These all contributed massively to human thought.

    [4] Of course, there is a psychology and sociology of intellectual destruction in academic culture that can only be described as deplorable. I cannot even bring myself to discuss this lamentable subject. Its chief defect is confusion (the harshest word in my vocabulary).


 

Epistemic Nihilism

 

When we speak of nihilism we are apt to think of moral nihilism, the kind of thing discussed in Turgenev’s Fathers and Sons or by Nietzsche or the existentialists. This is the idea that moral values are fictitious, spurious, and non-existent. But the term itself is broader than that, deriving from the Latin “nihil” meaning “nothing”. The OED gives us two definitions: “the rejection of all religious and moral principles, often in the belief that life is meaningless”, and “Philosophy the belief that nothing has a real existence”. The latter is striking, suggesting as it does the radical metaphysical position that nothing at all exists. Quite what the scope of the quantifier may be is left up to us, but we may suppose that ontological nihilism is intended, i.e. that no mind-independent entities exist. It would not be denied that thoughts exist. The position I want to consider here is both more and less modest than that: it is the thesis that knowledge does not exist. There is no such thing as knowing: all talk of knowledge is so much fiction, reification, and false objectification. Knowledge is like the unicorn: a mythical entity. Epistemic nihilism is to be distinguished from skepticism, which concerns what is known, not the alleged state of knowing itself. Maybe nothing is known, or very little, but the concept of knowledge is a concept in good standing—we know what knowledge would be. There is such a state, but we are seldom if ever in it. By contrast, the epistemic nihilist holds that the state of knowing is a non-existent state—possibly an incoherent one. We should therefore eliminate the concept from our conceptual scheme, or keep it only under strict instructions about how it is to be understood (see below). The epistemic nihilist is like the moral nihilist: both think that the things in question simply lack real existence. There is no such thing as right and wrong, and there is no such thing as knowledge. This is consistent with allowing for the existence of many other things (actions, beliefs); it is specifically moral values and states of knowledge that are declared to be nothing.            [1]

            What reasons might be given in support of epistemic nihilism? They are for the most part familiar, but not usually considered as leading to such a radical conclusion. I will merely list them, with the aim of giving the flavor of the position. First, the concept (and therefore the thing) has resisted adequate definition for over 2,000 years, ever since Plato raised the question. We all know that true justified belief fails to add up to knowledge proper (Russell, Gettier). Even now we cannot say what knowledge is, despite our best efforts. The nihilist takes this to show that knowledge is nothing definable: the reason it can’t be defined is that it has no reality to be defined. No one is ever in such a state (even when the skeptic has been silenced). Second, there are deep puzzles about knowledge, also ancient: we can’t say how a priori knowledge is possible, and there are problems about the nature of empirical knowledge.            [2] How can we come to know things by pure reason—what kind of process is this? What explains it? And why is it that the world can only be known by sense experience? Thus some have supposed that so-called a priori knowledge is not really knowledge at all, since it concerns only tautologies or human conventions (there are no real propositions to be known in this way). And others have doubted that experience can ever add up to genuine knowledge: knowledge must be more than experience alone, but what is that more? Does knowledge have a foundation in experience, and how does that work exactly? Is knowledge simply coherence of belief? But how does mere coherence suffice for truth (a requirement of knowledge)? We thus cannot secure a priori knowledge or a posteriori knowledge. Both are profoundly problematic. The epistemic nihilist sees in these failures a reason to doubt that the concept of knowledge is a workable concept—that it denotes anything real with which we have to come to terms. We may as well just get rid of it. Third, skepticism shows that the concept has no actual application—and what is the point of a concept that never applies to anything? Even when it does apply (e.g. knowledge of our own subjective experience) it has only limited application: in most of its uses it is falsely applied. There is clearly something amiss with a concept like this. Surely it must have been introduced in error, before its consequences were thought through. It is just a hopelessly shambolic concept, containing the seeds of its own destruction. Why bother keeping a concept that leads to such outlandish results? Why not declare it a would-be concept sorely in need of a real-world correlate? It simply doesn’t stand for anything real. Fourth, there are serious problems about what the state of knowledge could be. Here we may compare knowledge with meaning: is there any fact of meaning something by our words—what could constitute meaning?            [3] Similarly, is there any fact of knowing—what could constitute such a fact? Meaning is not an introspectable state of consciousness, nor is it a disposition, nor a brain state: so what is it? What determines whether we mean one thing or another by our words? Similarly, knowledge is not an introspectable state of consciousness, nor is it a disposition (there are always performance errors), nor a brain state: so what is it? What determines whether we know one thing rather than another? It seems to be an infuriatingly elusive kind of fact, like meaning. Hence we get indeterminacy claims and suggestions of non-factuality.
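
            To fix ideas, the analysis whose failure the nihilist’s first complaint trades on can be stated schematically (the bracketed clause is the part no one has managed to supply):

S knows that p iff (i) p is true, (ii) S believes that p, (iii) S is justified in believing that p [(iv) plus some further anti-Gettier condition].

Gettier cases are precisely cases in which (i), (ii), and (iii) hold while knowledge is absent; decades of candidate fourth clauses have failed to command agreement.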
Maybe we can salvage talk of knowledge by invoking non-standard semantic theories—as with criteria-based assertibility conditions theories—but then we have already conceded that the world contains no actual state of knowledge. The word “know” has a use, but it denotes nothing real. This is the analogue of anti-realist assertibility conditions theories of meaning. The epistemic nihilist may grudgingly accept such a diluted picture of knowledge for the sake of giving the word “know” a role in our language games, but still insist that nothing real is being designated. The word “know” is like the word “trustworthy”—expressing an attitude we can have towards certain individuals, not denoting an objective property of them. We thus give a pragmatic or expressive account of such talk without supposing that anything objectively real is going on. Or we might simply abolish the whole knowledge language game as so much error and myth. In either case we need not worry that we are failing to grasp the nature of something real—for there is nothing real there to grasp. Knowledge has no nature, no real essence, and no objective constitution—any more than unicorns do (or phlogiston or fairies). It is not as if we discovered the state of knowledge by means of diligent scientific observation. We just find ourselves with the concept, justifiably or not. Perhaps concepts like belief, truth, and justification can function as criteria of assertion for knowledge sentences, but they are not to be construed as denoting real constituents of a language-independent fact of knowing. We thus adopt an “error theory” of knowledge talk, like error theories of talk of meaning or moral value. We thereby extend nihilism into new territory.

            The advantage of taking this route is that it dissolves problems, as pragmatic and expressive theories are intended to do. We find ourselves burdened with a concept that is riddled with conundrums, mysteries and puzzles, and we summarily dismiss them by declaring outright non-existence. All the classic problems of theology disappear once God is declared dead. Similarly for the problems of ethics, given that there is no such thing as right and wrong; same for meaning, if meaning proves intractable; same for consciousness, if consciousness remains mysterious; and same for knowledge, if knowledge presents irresoluble difficulties. Come to think of it, isn’t the concept of knowledge tied suspiciously closely to religious ideas? God is defined as an all-knowing being, but according to religious nihilism there is no such being. Popes and priests are traditionally supposed to be epistemic authorities (revelation etc.), but that is a preposterous presumption. Is the concept of knowledge really an insidious way to reinforce social hierarchies and foster superstitions? Who possesses real knowledge and who doesn’t?            [4] Much the same has been said of moral value, which is similarly divisive. So the concept of knowledge might be thought to have dubious historical roots as well as being internally defective. The epistemic nihilist proposes to do away with knowledge as part of serious ontology, perhaps allowing such talk a limited pragmatic role (there certainly can’t be any science of it). Belief, yes, perhaps even justification, but not knowledge—not that old shibboleth. We needn’t keep trying desperately to find a definition for it, or figure out how it is possible, or what justifies it, or how it is related to experience, or what varieties of knowledge there might be, or whether there is any knowledge at all, or whether ethics (say) is an example of knowledge. Epistemology gets reconfigured, with knowledge losing its central place, or any place. True, the new type of knowledge-free epistemology is radically different from the old, but so was post-Copernican astronomy radically different from what went before, or Darwinian biology, or Einsteinian physics, or secular morality, or democratic politics. In order to make these great advances we often need to reject the existence of things hitherto taken for granted (divine design, vital spirits, the ether, the immortal soul, God-given commandments), but this can lead to exciting new vistas. What will epistemology look like without the obsession with knowledge? The epistemic nihilist boldly goes where no epistemologist has gone before. She points out the theoretical advantages, and the removal of troublesome questions, and the easing of our abiding sense of futility. We will still have good thinkers and bad, reliable informants and unreliable ones, real science and pseudo-science; we just stop characterizing all this with the archaic concept knowledge. The epistemological landscape will not be rendered depressingly deserted; it might even be fuller and healthier, with the air easier to breathe. The nihilist with respect to Hell, the Devil, and demons is a welcome presence, bequeathing to us a far healthier spiritual world; the nihilist with respect to knowledge hopes to achieve a similar result. She sees herself in a positive light, not as a spoiler and naysayer, but as a bringer of good news not bad. Just think: you will never have to berate yourself again for not knowing something! 
We will still have perception and memory, belief and inference, good reasons and bad, but we won’t need to aspire to something called “knowledge”—whatever that might be exactly. For the human cognitive system is never in a state that can be so characterized. The word “know” is the analogue in psychology and philosophy of “vital spirits” in biology. Knowledge has no place in science, and no place in common sense either.            [5]

 

Colin

            [1] In fact the two issues are not unconnected, since knowledge is commonly regarded as a normative concept—it is what belief aspires to be. Knowledge is often included on lists of things that are good intrinsically. The moral nihilist may thus have knowledge in his sights too, as dubiously value-laden.

            [2] See my “The Puzzle of Empiricism” and “How is A Priori Knowledge Possible?”

            [3] I am alluding to Kripke’s Wittgenstein on Rules and Private Language. We could, in fact, rephrase Kripke’s discussion of the “skeptical paradox” of meaning in terms of knowledge of meaning: is there a fact of the matter about whether I know that “+” means addition? Does John know that emeralds are green or is it that he knows that emeralds are grue?

            [4] The hero (victim) in Nabokov’s Invitation to a Beheading is scheduled to be executed for the crime of “gnostical turpitude”. What is heresy but claiming to know that what the church authorities say is knowledge isn’t knowledge? Religion is an epistemic battleground.

            [5] The case of knowledge of language is interesting: do we know the grammar of our language? That has always seemed problematic if we are working with the concept of knowledge that is regularly analyzed as true justified belief (plus some). Mastery of language certainly involves ability, competence, and internal representational structure: but does it involve knowledge? We can simply dispense with that question once the concept of knowledge has been banished from serious discourse; and doesn’t this seem particularly appropriate where linguistic mastery is concerned? We can also retain such notions as knowing-how and knowing-whom: it is knowledge of facts (knowledge-that) that has caused all the trouble. Have we illicitly extended the concept of knowledge from its unproblematic uses (“he knows how to play piano”) to create a concept that is obscure at best (“he knows that there is a table there”)? Not to mention “he knows that 2+2=4” and “he knows that genocide is wrong” and “he knows that he is in pain”. These all raise red flags, and the epistemic nihilist has an explanation of why: we are throwing the word “know” around recklessly and don’t really know what we mean. We do better to keep the word “know” under strict supervision and not let it spread to places where it doesn’t belong. 
