Knowledge By Necessity

We can know that a proposition is true and we can know that a proposition is necessary, but can we know that a proposition is true by knowing that it is necessary? Consider a simple tautology like “Hesperus = Hesperus”: don’t you know this is true by seeing that it is necessary? If someone asks you why you think it is true, you will answer, “It couldn’t be otherwise, so it has to be true” or words to that effect. The sentence is clearly necessary, so you can infer that it must be true. You treat the modal proposition as a premise to derive the non-modal proposition. The former proposition acts as the ground of your knowledge of the latter proposition. You can tell just from the form of the proposition that it must be true, and thus it is true. You derive an “is” from a “must”. You really can’t help seeing that the sentence expresses a necessity, given that you grasp its meaning, and truth trivially follows. We can call this “necessity-based” knowledge: knowledge that results from, or is bound up with, modal knowledge. How else could you know the proposition to be true—not by empirical observation, surely? You know it by analysis of meaning: the meaning is such as to make the sentence necessary. The sentence has to be true in all possible worlds, given its meaning, and so it is true in the actual world—truth is a consequence of necessity. It is immediately obvious to you that the sentence is necessary—and so it must also be true. If someone couldn’t see that “Hesperus = Hesperus” is necessary, you would wonder whether he had understood it right. Maybe someone could fail to see that necessity entails truth and hence not draw the inference; but how could he fail to see that “Hesperus = Hesperus” is a trivial tautology, in contrast to “Hesperus = Phosphorus”? The sentence is self-evidently a necessary truth.
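The inference just described can be set out schematically. What follows is only a sketch in standard modal notation (the step from necessity to truth corresponds to the T axiom of modal logic; “h” abbreviates “Hesperus”):

```latex
% Knowledge by necessity: the modal premise grounds the non-modal conclusion.
\[
\begin{array}{ll}
\text{(1)}\ \Box\,(h = h) & \text{known by grasp of meaning alone} \\
\text{(2)}\ \Box p \rightarrow p & \text{the T axiom: necessity entails truth} \\
\text{(3)}\ h = h & \text{from (1) and (2): truth known via necessity}
\end{array}
\]
```

The non-modal conclusion (3) is derived from the modal premise (1), which is the sense in which the knowledge is “by necessity”.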

            It thus appears that some knowledge of truth is necessity-based: knowledge of the truth involves knowledge of the necessity, with the latter acting as a premise. Sometimes people believe things to be true because they perceive them to be necessary. You know very well that Hesperus is necessarily identical to Hesperus—how could anybody not?—and so you are entitled to believe that “Hesperus is Hesperus” is true. For analytic truth generally the same epistemic situation obtains: you can see the sentence has to be true given what it means, so it follows that it is true. Even if the move from necessity to truth is not valid in every case (e.g. ethical sentences), it is in some cases. We can thus derive non-modal knowledge from modal knowledge. But clearly not all knowledge is like this—mostly you can’t come to know truths by perceiving necessities. You can’t come to know the truth of “Hesperus = Phosphorus” that way: here you have to investigate the empirical world. The sentence is necessary, but you can’t use this necessity to decide that the sentence is true. You may know that if it is true then it is a necessary truth, but you don’t know that it is true just by understanding it, so you can’t use its necessity as a premise in arguing that it is true. You need to appeal to observation to show that the sentence is true—as you do for any other empirical proposition. Here your knowledge is observation-based not necessity-based—observable facts about planetary motions not the analysis of meaning. You won’t cite tautology as a reason for truth in this case, but you will in the other case. You won’t argue that there is no alternative to being true for “Hesperus = Phosphorus”. Clearly you can’t argue that “Hesperus = Phosphorus” follows from “Possibly Hesperus = Phosphorus”, but that is the only modal truth you have at your disposal in your current state of knowledge, unlike the case of “Hesperus is Hesperus”. 
So you can’t take a short cut to knowledge of truth by relying on an evident necessity—you have to resort to arduous empirical investigation. You may wish you knew that the sentence is necessary, so as to spare yourself the epistemic effort, but that is precisely the knowledge you lack in this case, since the expressed proposition in question refuses to disclose this fact. We resort to observation when our modal sense cannot detect necessity, which is most of the time. Necessity-based knowledge is quick and easy, unlike the other kind.

            I have been leading up to the following thesis: a priori knowledge is knowledge by necessity while a posteriori knowledge is not knowledge by necessity. Here we define the a priori positively and the a posteriori negatively, unlike the traditional definition in terms of knowledge by experience versus knowledge not by experience. This gives us a result for which we have pined: a positive account of the nature of a priori knowledge. The two definitions map onto each other in an obvious way: knowledge by necessity is not knowledge by experience, and knowledge by experience is not knowledge by necessity. That is, we don’t come to know necessities by experiencing them, and necessities are no use to us in the acquisition of empirical knowledge. Necessity plays a role in acquiring a priori knowledge, but it plays no role in acquiring a posteriori knowledge. To have a crisp formulation, I shall say that a priori knowledge is “by necessity” and a posteriori knowledge is “by causality”—assuming a broadly causal account of perception and empirical knowledge. We can also say that a priori knowledge is knowledge grounded in our modal faculty, while a posteriori knowledge is knowledge grounded in perception and inference—thus comparing different epistemic faculties. But I think it is illuminating to keep the simpler formulation in mind, because it directs our attention to the metaphysics of the matter: modality in one case and causality in the other. The world causally impinges on us and we thereby form knowledge of it, and it also presents us with necessities that don’t act as causes—thus we obtain two very different kinds of knowledge. The mechanism is quite different in the two cases—the process, the structure.

            Is the thesis true? This is a big question and I shall have to be brief and dogmatic. There are two sorts of case to consider: a priori necessities and a priori contingencies. I started with an example of a simple tautology because here the necessity is inescapable—you can’t help recognizing it. Hence knowledge of necessity is guaranteed, part of elementary linguistic understanding. But not all a priori knowledge is like that, though tautology has some claim to be a paradigm of the a priori. What about arithmetical knowledge? If it is synthetic a priori, then we can’t say that knowledge of mathematical necessity results from linguistic analysis alone. Nevertheless, it is plausible that we do appreciate that all mathematical truths are necessary; we know that this is how mathematical reality is generally. When we come to know that a mathematical proposition is true we thereby grasp its necessity: a proof demonstrates this necessity. Mathematics is arguably more about necessity than about truth: we can doubt that mathematical sentences express truths (we might be mathematical fictionalists), but we don’t doubt that mathematics cannot be otherwise—it has some sort of inevitability. We might decide that mathematical sentences have only assertion conditions, never truth conditions, but we won’t abandon the idea that some sort of necessity clings to them (though we may be deflationary about that necessity). Modal intuition suffuses our understanding of mathematics, and this can function in the production of mathematical knowledge. I see that 3 plus 5 has to be 8, so I accept that 3 plus 5 is 8. Mathematical facts are inescapable, fixed for all time, so mathematical truths are bound to be true: I appreciate the necessity, so I accept the truth. The epistemology of mathematics is essentially modal and this plays a role in the formation of mathematical beliefs: in knowing necessities we know truths—and that is the mark of the a priori.  [1]

            Much the same can be said of logic, narrowly or widely construed. You cannot fail to register the necessity of a logical law, and you believe the law because you grasp its necessity. Nothing could be both F and not-F, and so nothing is. The necessity stares you in the face, as clear as daylight, and because of this you come to know the law—the knowledge is by necessity. Accordingly, it is a priori. It isn’t that you can believe in the truth of the law and remain agnostic about its modal status (“I believe that nothing is both F and not-F, but I’ve never thought about whether this is necessary or contingent”). Your belief in the law is bound up with your belief in its necessity; thus logical knowledge is a priori according to the proposed definition. The same goes for such propositions as a priori truths about colors: “Red is closer to orange than blue”, “There is no transparent white”, etc. Here again the necessity is what stands out: we know these propositions to be true because we perceive their necessity—not because we have conducted an empirical investigation of colors. Accordingly, they are a priori. In all the cases of the a priori in which the proposition is necessary this necessity plays an epistemic role in accepting the proposition; it is not something that lies outside of the epistemic process. It is not something that is irrelevant to why we accept the proposition. We recognize the necessity and that recognition is what leads us to accept the proposition. If we accepted the proposition for other reasons (testimony, overall fit with empirical science), then our knowledge would be a posteriori; but granted that our acceptance is necessity-based the knowledge is a priori. Being known a priori is being known by necessity: the involvement of modal judgment is what defines the category.  [2] By contrast, a posteriori knowledge does not involve modal judgment—you could achieve it and have no modal faculty at all. 
The basis of your knowledge is not any kind of modal insight, but observation and inference (induction, hypothesis formation, inference to the best explanation). You don’t have modal reasons for believing that the earth revolves around the sun, but you do have modal reasons for believing that red is closer to orange than blue—viz. that it couldn’t be otherwise. Since things couldn’t be otherwise, they must be as stated, and so what is stated must be true. The modal reasoning is not a mere add-on to knowledge of a priori truths but integral to it.

            It may be thought that the contingent a priori will scuttle the necessity theory, since the proposition known is not even a necessary truth; but actually it is not difficult to accommodate these cases with a little ingenuity. One line we could take is just to deny the contingent a priori, and good grounds can be given for that; but it is more illuminating to see how we could extend the necessity theory to cover such cases. Three examples may be given: reference fixing, the Cogito, and certain indexical statements. What we need to know is whether there are necessities that figure as premises in these cases, even if these necessities are not identical to the conclusion drawn from them. Thus in the case of fixing the reference of a name by means of a description (e.g. the meter rod case) we can say the following: “No matter what the length of this rod may be the name ‘one meter’ will designate it”. If I fix the reference of a name “a” by “the F”, then no matter which object is denoted by that description it will be named “a”. This doesn’t imply that the object named is necessarily F; it says merely that the name I introduce is necessarily tied in its reference to the description I link it to. Because we recognize this necessity we can infer that a is the F (no matter who or what the F is). We don’t need to undertake any empirical investigation to know that a is the F since it follows merely from the act of linguistic stipulation—and that act embodies a necessary truth (“the person designated by ‘a’ is necessarily the person designated by ‘the F’”).

In the case of the Cogito it is true that the conclusion is not a necessary truth (since I don’t necessarily exist), but there is a necessary truth lurking behind this proposition, namely “Necessarily anyone who thinks exists”. It is a necessary truth that thinking implies existence (according to the Cogito), but it is not a necessary truth that the individual thinker exists—he might not have existed. I know that I exist because I know that I think and I know that anything that thinks necessarily exists. Thus I use a modal premise to infer a non-modal conclusion: from “Necessarily anything that thinks exists” to “I exist”. That is my ground for believing in my existence, according to the Cogito, and it is a necessary truth. Thus the knowledge derived is a priori, according to the definition. I don’t make empirical observations of myself to determine whether I exist; I rely on a necessary truth about thought and existence, namely that you can’t think without existing. I know that I exist (contingent truth) based on the premise that anything that thinks exists (necessary truth), so my knowledge essentially involves the recognition of a necessity.
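The Cogito, so construed, has the same form: a necessary premise yields a contingent conclusion. Here is a sketch, with “T(x)” for “x thinks” and “E(x)” for “x exists” (labels introduced here purely for illustration):

```latex
% The Cogito as an inference from a modal premise to a non-modal conclusion.
\[
\begin{array}{ll}
\text{(1)}\ \Box\,\forall x\,(T(x) \rightarrow E(x)) & \text{necessary: whatever thinks exists} \\
\text{(2)}\ \forall x\,(T(x) \rightarrow E(x)) & \text{from (1) by the T axiom} \\
\text{(3)}\ T(i) & \text{I think} \\
\text{(4)}\ E(i) & \text{from (2) and (3): I exist, a contingent truth}
\end{array}
\]
```

The conclusion (4) is contingent, but the inference runs through the necessary premise (1), which is why the resulting knowledge still counts as knowledge by necessity.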

Thirdly, we have “I am here now”: this expresses a contingent truth whenever uttered but is generally held to be a priori. I know a priori that I am here now, but it is contingent that I am here now. But again there is a necessary truth in the offing, namely: “Anyone who utters the words ‘I am here now’ says something true”. By knowing this necessary truth I know that I must be speaking the truth when I utter those words, but my utterance expresses a contingent truth. So I rely on a necessary truth to ground my belief in a contingent truth. Without that necessary truth I would not know what I know, i.e. that my current utterance of “I am here now” is true. Again, the case comes out as a priori according to the definition; we just have to recognize that the modal premise need not coincide with the conclusion. We can have a priori knowledge of a contingent truth by inferring it from a distinct necessary truth. So we have found no counterexamples to the thesis that all a priori knowledge is knowledge by necessity.

            I have assumed so far that the type of necessity at issue is metaphysical necessity, not epistemic necessity. This is the kind of necessity we recognize when we come to know something a priori. But we could formulate the main claims of this essay using the concept of epistemic necessity. For simplicity, just think of this as certainty, construed as a normative not a psychological concept—not what people are actually certain of but what they ought to be certain of. Then we could say that when I am presented with a tautology I recognize that it is certain and infer from this that it must be true, and similarly for other cases of a priori knowledge. This approach converges with the account based on metaphysical necessity, because certainty and necessity correlate (more or less) in cases of a priori truth. But I prefer the metaphysical formulation because it connects an epistemic notion with a metaphysical notion—a priori knowledge with objective necessity. When I know something a priori I know it by recognizing the objective trait of necessity not a psychological trait of certainty (however normatively grounded). Thus the epistemological distinction has a metaphysical correlate or counterpart. To know something a priori is to know it by detecting an objective fact of necessity, though we may also be certain of what we thereby know. In contrast, to know something a posteriori is not to know it by necessity detection but by perception and inference (by causality). This is a deep and sharp distinction, and it at no point relies on a purely negative characterization of what we are trying to define. We really do know things in two radically different ways: by apprehending necessity or by registering causality.            

 

 

  [1] Perhaps part of the attraction of the view that mathematics consists of tautologies is that it comports with the idea that our knowledge of mathematics involves knowledge of necessities. The necessities occupy the epistemic foreground.

  [2] Given this account of a priori knowledge, it is doubtful that animals have it, because they lack modal sensitivity—they don’t perceive that propositions are necessary. If you present an animal with a tautology, it will stare at you blankly. They may have innate knowledge, but they don’t have a priori knowledge. Not even the most intelligent ape has ever thought that water is necessarily H2O or that the origin of an ape is an essential property of it. Animals have no knowledge of metaphysical necessity. This explains their lack of a priori knowledge.


The Ultra-Selfish Gene

In David Attenborough’s nature documentary Frozen Planet there is some remarkable and rare footage of polar bears mating. The male begins a twenty-mile trek through deep snow lured by the scent of a distant female. He catches up with her and engages in courting behavior, which is not guaranteed to have a positive outcome. He meets with success, however, and there is some rather touching footage of the act of intercourse, which both seem to enjoy. Does the male then peel off and return to his solitary ways, confident that he has done his reproductive job? No, he continues to accompany the female in order to fend off potential rivals intent on impregnating her. Rivals indeed duly appear, determined individuals by the look of them, and there is distressing footage of bloody and prolonged fighting between the males. The original male succeeds in repelling the suitors, but he is wounded and exhausted from the effort. After a few days he deems it proper to leave the female in the belief that his sperm will not be displaced by anyone else’s. The two bears part company in a way that doesn’t seem particularly wistful and we learn from Attenborough that they will not meet again, the cubs to be raised by the mother. He remarks that the male is probably relieved to have the whole thing over with so that he can return to his peaceful solitary life. He ambles off into the sunset, bloody and worn out, but with mission accomplished.

            The question is why the male is prepared to go to so much trouble and take such risks. He could easily have been killed in one of the fights and might yet die from the wounds already inflicted. It can’t be because of the satisfaction he knows he will derive from his offspring or the prospect of future copulations with the female, since none of that will happen. Can you imagine any human male who would behave in such a selfless manner? First you copulate with a female and then you wait around to engage in possibly fatal fights with a series of nasty new suitors? Surely a male human would depart the scene long before having to face such rivals, even if that meant his sperm might be displaced by a fresh batch. It seems remarkably contrary to the bear’s individual self-interest: what does he get out of this? Don’t say he gets offspring—that is not a point about his desires and interests. From his selfish point of view it would be better to cut and run—his life would go better without all the waiting and fighting. So why does he do it? The answer, of course, lies in the genes: the genes program him to act in this way—to ignore his own best interests and engage in acts of self-sacrifice. They program him to act unselfishly, even to the point of potential suicide (presumably many bears do die in such fights).  [1] They do this because their sole concern is to make it into the next generation—their survival is at stake. It doesn’t matter if the animal that carries them dies in the process, so long as they get passed on. They are concerned about their own survival not the survival of their bodily vehicle. They would program an animal to kill itself if that achieved their need for immortality: that is, genes for suicide would survive better than genes for self-preservation if the former method led to more effective gene transmission.  [2] Their interests do not coincide with the interests of the animal that harbors them, though there may be overlap. 

            Thus I wish to say that the genes are ultra-selfish. They never program their host animal in a way that respects the interests of that animal. They don’t have an altruistic bone in their body. Sometimes people run away with the idea that the selfish gene is a gene for selfishness—genes act to make animals selfish. But this is a complete misunderstanding of the theory: it is the genes that are selfish, not the animal that contains them, and they can make the animal act in ways that violate its own self-interest, as with the persistent polar bear. Someone might reply that the genes can’t be completely selfish because they allow for the unselfish behavior of parenting and kin altruism. But the genes are not acting to benefit animals other than the one in which they reside; they are ensuring that their own survival is maximized—since they also reside in the bodies of genetically related animals. They program their carrier to help others for the same reason they program the bear to fight off rivals—to maximize their chances of surviving into the next generation. Whether any individual animal benefits is beside the point, at best a by-product of their selfish action.

            But surely the genes program animals to act in their own best interests most of the time—to be generally selfish. Don’t they implant a selfishness gene in the host animal? The reason for this is that the animal must survive if they are to survive, so the interests of the two coincide. Isn’t an animal a “survival machine” in the sense that its prime directive is individual survival? But if we look more closely even this is a distortion of the underlying truth. The animal isn’t aiming to survive but to reproduce—the former is just a means to the latter. Survival matters to the genes only because reproduction does, since that is what enables their immortality. The animal is less a survival machine than a sex machine (with apologies to James Brown): it is a machine for ensuring that reproduction occurs. If it were logically possible for an animal to reproduce without surviving to that point, that is how things would work (posthumous coition). Once reproductive life ends the animal is no use to the genes (except for extended family duties). From this perspective the selfishness of the genes should be apparent: they build and program an animal that will be an effective reproducer (gene transmitter), not one that will take its own survival and satisfaction as primary. They will make a body and mind geared to reproduction whether that suits the animal or not. This is why there are no contented and long-lived animals around that don’t reproduce—their genes don’t get passed on. Such an animal would be a genuinely selfish individual, caring only for itself and its own interests. But the animals that actually exist are not ideally selfish; in fact, they are slaves to the genes. The genes act always to serve their own interests, never the interests of their host—or any other animal. They are ultra-selfish.

            It might look as if the polar bear has an altruistic concern for the interests of future unborn generations, since he sticks around to make sure that his offspring will come to exist. But of course he has no such thoughts; and anyway they are dubiously coherent, since no such individuals exist at the time of the bear’s protective actions. The genes exist and are passed on (copies of them), but this has nothing to do with concerns about future generations and their happiness. The genes simply program the animal to blindly follow the directive of maximizing their presence in future animals. The animal will act unselfishly in order to obey this directive, even to the point of self-destruction. The genes program the animal to be unselfish because of their ultra-selfishness. So we must rid ourselves of the idea that the basic rule of life, seen from a gene’s point of view, is the production of selfish organisms: selfish genes are not genes for selfishness. Whether an animal is selfish or unselfish is neither here nor there; it all depends on what strategy best enables the genes to survive. Unselfish organisms are a good way in certain circumstances to further the interests of the ultra-selfish genes. If the selfish genes could achieve their desired immortality by building organisms that are entirely unselfish, they would; as it is they make them partially unselfish. Unselfish organisms are certainly what the genes need in certain situations—like the fighting polar bear. And the same is true for kin altruism, as well as for the basic design (physical and mental) of the organism. Reproduction is costly and dangerous in the state of nature; it isn’t what a determined egoist or hedonist would recommend. The genes make reproduction worth our while to some degree, but it isn’t the most prudent and self-serving of possible types of life. 
Animals are driven by their instincts (genes) in this direction, rather than deciding upon it as the most satisfying way to live (of course, it is possible to detach sex from reproduction). Selfish genes don’t make selfish organisms as a matter of course, and conceptually these are entirely separate matters. To repeat: selfish genes are not genes for selfishness. I would even say that, at a deep level, animals never act selfishly, precisely because they are controlled by ultra-selfish genes. They never put their interests above the interests of others, in any deep sense; at bottom it is always the genes’ interests that are being served.

  [1] Then there is the question of the motivation of the mother: pregnancy and childrearing are not in her interests either, but they are in the interests of her genes. Motherhood has some claim to be the most diabolical invention of the genes—like carrying around a bomb. Motherhood has killed many a mother.

  [2] It’s hard to imagine how this could be so given the facts of biological life, but the conceptual point still holds: anything that enhances gene transmission will be selected for, no matter how unselfish it may be from the animal’s perspective. The wellbeing and survival of the individual generally lead to gene transmission, but this is not a conceptual necessity, more like a lucky accident.


Philosophical Originality

What produces philosophical originality? One answer is genius: from time to time a genius crops up and from his or her fertile brain originality flows. Then we have a golden age. No doubt the greatest philosophers were geniuses, so it is natural to suppose that this is what brings originality about. The trouble with this answer is that originality is too sporadic for this explanation to be plausible: geniuses will crop up in the population at a constant rate (assuming a genetic basis), but philosophical originality does not historically occur in this way—it comes in waves separated by arbitrary intervals of time. The best way to answer our question is to survey the history of philosophy (Western philosophy) and try to discern patterns and possible causes. Are there historical conditions that conduce to bursts of creativity?

            There are two possible types of explanation: internal to philosophy and external to it. Internal explanations say that the causes are internal to the subject of philosophy; external explanations say that the causes are external to the subject of philosophy. Thus either something about philosophy itself leads to innovation or something outside it does—or possibly both. I have come to the conclusion that the causes are principally external, and indeed that one type of cause is typical (which is not to say necessary). Obviously these are large historical and psychological questions, inherently difficult to assess, but a broad picture seems to emerge when we examine the history of the subject. Not to keep the reader in suspense, it appears that the prime cause of original thought in philosophy has been advances in mathematics. (I will restrict myself here to the parts of philosophy that don’t include ethics, aesthetics, and political philosophy—metaphysics, epistemology, logic, and related fields.)

Plato must be counted as a great original, and it is well known that he was much influenced by Pythagoras and his school. Greek geometry, later assembled by Euclid, formed the intellectual environment in which Plato forged his philosophical ideas. Thus we have the idea of a changeless perfect world of forms to be contrasted with what the senses reveal, where truths about this world can be established by rigorous proof. Geometry can be described as the mathematics of space, so it was the mathematical treatment of space that acted as a trigger to Plato’s originality. The objects of geometry supply the ontology and the method of proof supplies the epistemology—this is what a serious subject looks like. Aristotle continues in the same vein (substance and form) but reacts against it to some degree: he is less mesmerized by mathematics than Plato—but it forms the background to his thought. An intellectual stimulus can have either a mimetic or an antipathetic response. One can be creatively against something. Aristotle was against Plato’s excessively mathematical outlook and shaped his philosophy accordingly.

            There then followed a rather unoriginal period—the Middle Ages. During this time nothing comparable to Greek geometry occurred in mathematics and philosophy took no major steps forward (I am speaking broadly). Then we reach the Renaissance in which there was a great flowering: Descartes, Leibniz, Locke, Berkeley, Hume, and others. What happened? Physics is what happened—mathematical physics (Newton’s book is entitled Principia Mathematica). Calculus was invented and the mathematics of motion formulated. The physical world was conceived quantitatively, with mass, force, and motion mathematically measured. This new paradigm of knowledge led to a reinvigoration of philosophy—with adherents and dissenters (notably Berkeley). It provided a framework for metaphysics (matter in motion) and an epistemology (observation and calculation), as well as a model of what a real science should look like. The question of materialism took on new life now that physics was in the ascendant. Thus a good deal of original philosophy was stimulated by the new mathematical physics—not from the insights of philosophers working on their own internal problems (worthy as that may be). The agenda was set, the map laid out—by a development in mathematics. Just as the major influence on Plato was a non-philosopher (Pythagoras), so the major influence on the philosophers of the Renaissance was a non-philosopher (Newton—also Descartes in his capacity as physicist and mathematician).

            Again, there followed a relatively static period in philosophy (though stirred somewhat by Darwin  [1]) until the dawn of the twentieth century. Then we have the spectacular rise of mathematical logic—the application of mathematics to logical reasoning. Frege, Russell, and Wittgenstein were philosopher-mathematicians impressed by the power of symbolic logic, with its formulas, proofs, and theorems. Russell and Whitehead’s Principia Mathematica was a mathematical treatise on the subject of valid reasoning (among other things), and it formed the shiny new object onto which philosophers could latch. Some saw it as the bright future of philosophy, others as its death knell. Again there is adherence and reaction: analytical philosophy versus continental philosophy (roughly), or the Tractatus and the Investigations. Mathematical logic played the historical role previously played by geometry and mathematical physics—a model and inspiration, or a threat to all that is holy. It was not the achievement of a professional philosopher qua philosopher that caused this ferment, but the achievement of mathematicians; the trigger was external to philosophy.

This stimulus received a boost later in the century, particularly from Turing, with the idea of a formal computation. This idea led not only to the computer but also to developments such as cybernetics, automata theory, and mathematical information theory. A new branch of mathematics supplied new tools with which to think about the mind and knowledge. The doctrine known as “functionalism” arose from these developments—a kind of mathematical theory of the mind (mental processes as functions from inputs to outputs, formally implemented). We are still living with Turing’s contribution in today’s cognitive science (including linguistics). And once again, there are followers and rebels—some who think we now have the key to understanding the mind, others who think the mind is quite other. It is the mathematical conception that sets the agenda and captures the imagination. [2] Philosophy responded to computation theory as it did to the rise of mathematical logic. Nothing else has had this kind of impact on the field—not chemistry, biology, psychology, history, or whatever. Philosophy seems uniquely susceptible to the charms of mathematics. Not its slave, to be sure, but its keen observer, its ardent pupil, or its stern critic. You either love mathematical philosophy or you hate it.

            So now we have an interesting question: what is the next wave of mathematics that will drive the agenda of philosophy, shaking it up, reshaping the subject? We have had the mathematics of space, of motion, of logical reasoning, and of computation—what will it be next? I don’t think anything that now exists in mathematics can play the role played by these earlier innovations, so we need something new to get the ball rolling (whether we can achieve it or not). I suggest that what we need is a new mathematical theory of mind, especially of consciousness: we need a mathematical theory that does for the conscious mind what earlier mathematical theories did for space, motion, logical reasoning, and computation. I have no idea what such a theory might look like; my point is just that it would be likely to trigger a new wave of philosophical originality—perhaps greater than any seen heretofore. Think about it: a mathematical treatment of what lies at the center of human existence and human knowledge—what connects us to the world and to each other. Surely that would be an impressive body of mathematical thought with enormous implications. How would philosophy respond to it? What would it do to traditional philosophical problems? It would change the contours of the subject. Maybe we will have to wait a long time for a mathematical theory of consciousness to be constructed (look how long it took for the previous developments to come about), in which case we won’t see the degree of originality in philosophy that we saw in the earlier periods any time soon. Of course, I am speculating wildly and claim nothing more—it is an interesting idea to think about. There does seem to be an historical pattern here and a mathematical theory of consciousness would surely set the cat among the pigeons. It would set a standard of intelligibility and precision that isn’t even dreamed of today—a psychological Principia. 
The properties of consciousness would be as clear and exact as geometrical forms, motion through space, logical reasoning, and formal computation.

Mathematics crystallizes things, converts them into rigorous abstract patterns, and analyzes their structure, thus rendering them transparent to the intellect. This is why mathematical innovation impresses philosophers so much—it represents a distant ideal seldom if ever achieved in philosophy itself. We dearly wish that philosophy could achieve such clarity and precision—or we fear (some of us) that it would remove the charm of philosophical obscurity. Mathematics is like philosophy’s successful elder sibling, an inspiration and a rebuke. The affinity between mathematics and philosophy has often been remarked; it is no surprise, then, if philosophers keep a watchful eye on mathematics. When Spinoza wrote his Ethics in the style of Euclid’s Elements he was acknowledging the force of Euclid’s example. Empirical science can never exercise this kind of hold on the philosophical imagination because it is too caught up in the passing concrete empirical world; mathematics by contrast shares the abstract necessity of philosophy. Mathematics provides the kind of vision of things that philosophers (many of them) resonate to, so they are apt to derive inspiration from it. Philosophers are mathematicians manqué. [3]

 

Colin

  [1] Darwin’s theory has a mathematical aspect: a random process leads to the selection of organisms or traits that increase in frequency in a population. It is abstract and quantitative, a kind of algorithm; also statistical.

  [2] I should mention Gödel’s results here—also mathematics with a large philosophical impact.

  [3] Philosophers who model their subject more on literature or history (such as Collingwood) recoil from mathematical philosophy; they cannot, then, use mathematics as a source of new ideas. Their philosophical tradition will be independent of mathematical innovations. But they are in the minority.


Combining Concepts

We possess concepts and we combine them into thoughts. Those are deceptively easy words to say. What is this “possessing” of concepts? Somehow concepts are stored in the mind, unconsciously, but not in the form of use or mention: we are not using all of our concepts at any given moment, and we don’t store them by mentioning them in mental quotation marks (which itself would involve using a meta-conceptual concept analogous to using a quotation name of a word). Possessing a concept is not like possessing fingers or frontal lobes: concepts are not possessed in the way bodily parts are. They are more like memories (though not exactly so), but the nature of memory is puzzling too: how do memories exist in the mind? But the question I want to probe here concerns not possession but combination: What is involved in combining concepts?

            We don’t even have any bad theories to refute and ridicule. You might point to other uses of “combine” and compare the case of concepts to them. We combine ingredients to make a cake: but that operation is nothing like combining concepts to make a thought—it is not performed with the hands and there is no mixing. What about combining words into a sentence? Here we must tread carefully. If we just mean uttering words in temporal succession, then we know what that is, but it clearly isn’t what happens when we think by combining concepts. There is no uttering and we don’t just string concepts into a temporal sequence—they have to be properly combined to form something meaningful. If we mean combining words in the language of thought, then we have a special case of the problem: what is this combining? The pieces have to fit together, constitute a whole, and produce a proposition: how does the mind achieve this—by what process or mechanism? How, for example, are simple mental acts of predication generated? An individual concept is somehow hoisted into consciousness at the same time as a general concept and the two are somehow brought into juxtaposition. But what is this juxtaposition? It can’t be just that they exist side by side, spatially or temporally; they have to be combined. What is the mental glue? What is the mode of connection? A whole is assembled from parts, but what kind of assembly is it?

            We can imagine dualist or materialist theories of conceptual combination. The dualist theory is apt to be mainly negative: conceptual combination is not any kind of physical combination. It is not the joining together of extended things into a more extended thing, like pieces of Lego. Rather, the immaterial mind enables concepts to link up in a quasi-magical way, as only an immaterial mind can. The trouble with this is that it is not an explanation; and surely we don’t want the puzzle of conceptual combination to require dualism for its solution. The materialist view will maintain that combinations in the brain underlie conceptual combination—as it might be, the co-excitation of distinct neural networks. No doubt there exist underlying physical complexes in the brain, but it is hard to see how they could constitute and explain the combination of concepts. They exist at the wrong level of analysis; we should be able to say something about concepts as such that articulates what is involved in their combination. What is it about a concept that enables it to slide so smoothly into a linkage with another concept? What properties does it have that explain its combinatorial powers? There are theories about the referential powers of concepts (such as causal theories), but what theories are there about the power of concepts to hook up with each other? Concepts can combine with certain other concepts but not with others: what is the difference? You can combine the concept John with the concept house to get the concept John’s house, but you can’t combine John with Mary to get John Mary or house with planet to get house planet. Concepts can accept or reject potential partner concepts depending on their inner nature. They can repel or attract other concepts.

            We might now try to take a leaf out of Frege’s book: he said that some concepts are saturated and some are unsaturated. [1] An unsaturated concept contains a space for a saturated concept, which fills and thereby saturates it. This is no doubt an obscure doctrine, though not without some intuitive pull, but the question is how to apply it to psychological processes. Concepts (senses), for Frege, are abstract non-psychological entities, so his notion of saturation applies at that level: but how does it apply at the level of concepts in the psychological sense (“ideas” in Frege’s terminology)? In what sense is a psychological entity like my concept house “unsaturated”? This seems like metaphor or mumbo-jumbo (choose your poison).
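Frege’s metaphor does have at least one precise model, though outside psychology: in type-driven formal semantics (following Montague), an unsaturated concept is treated as a function, a saturated concept as its argument, combination as function application, and impossible combinations fall out as type errors. Here is a toy sketch of that model—the lexical items and the `combine` helper are illustrative assumptions, not a theory of mental combination:

```python
# Toy model of Fregean saturation as typed function application:
# "unsaturated" concepts are functions with a gap (a parameter);
# "saturated" concepts are entities with no gap; combining is applying.

class Entity:
    """A saturated concept: a complete item with no gap to fill."""
    def __init__(self, name: str):
        self.name = name

def house(x: Entity) -> str:
    # An unsaturated (predicative) concept: it contains a space for
    # an entity, and yields a complete thought (crudely, a string).
    return f"{x.name} is a house"

def combine(f, a):
    """Combination succeeds only if one partner has a gap to fill."""
    if callable(f) and isinstance(a, Entity):
        return f(a)  # saturation: the argument fills the gap
    raise TypeError("no gap to fill: these concepts cannot combine")

john = Entity("John")
mary = Entity("Mary")

print(combine(house, john))   # a complete thought: "John is a house"
# combine(john, mary) raises TypeError: two saturated concepts,
# like "John" and "Mary", have no gap and so repel each other.
```

On this model the “glue” is nothing over and above the functional structure of the unsaturated partner; whether that structure can be transposed from abstract senses to psychological processes is exactly the question at issue.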

We don’t experience the mode of joining that concepts undergo or engineer, so we can’t observe how the combination works. It is this secret joining that allows for the famed productivity and infinity of possible thoughts (and meaningful sentences), but it is quite opaque to introspection or any other mode of observation. If concepts didn’t combine, thought would be impossible, even quite limited thought. If concepts lost their ability to combine, through some sort of brain ailment, thought would stop dead in its tracks. The glue is at least as important as the items glued. But the glue doesn’t reveal itself—it is hardly as if concepts have sticky ends! Even metaphors are thin on the ground here; no possible theory suggests itself. One’s feeling is that joined concepts are a bit like people holding hands—there is a part that is designed as a gripping or hooking device. But this is absurd fantasy or pointless poetry, not the beginnings of a theory. Alternatively, one speaks of synthesis: in conceptual combination a synthesis of concepts is formed. That sounds right enough, but again it is hardly a theory, more like a reformulation of the problem. For what is it to synthesize concepts? Complex concepts have parts that are brought together, but how are they brought together? What is this “bringing together”?

We know what combining physical objects is—spatial aggregation—but what is combining the units of thought? In Frege’s terms, what is the combination of senses (now construed psychologically)? Senses look outward to references, but they also look sideways to other senses—those that they can join with. It is written into a sense what it refers to, but it must also be written into it what it can combine with—with this but not with that. And it must be possible for senses to lock together into complex senses for the duration of a thought and then dissolve apart when the thought is over. Some operation splices one sense or concept to another, but then separation reasserts itself. There is a concept-combining device that moves concepts from where they are stored in the mind and forms strings of them displaying internal unity, and then disassembles these strings into their dormant isolated constituents. They are not combined in their stored form, being isolated units, though they are quick to enter into combinations; the combinatorial device imposes on them a kind of brief marriage with other concepts, quickly leading to divorce. Concepts thus flow in and out of combinations with other concepts; the puzzle is how they get cemented together for the duration. What is the composition of the conceptual glue? How do concepts find each other?

            Let me try to make the problem vivid by adapting Brentano. He introduced the idea of intentionality as a (non-physical) relation between a mental entity and something that exists outside of it but which is somehow its object: the mental entity is “directed towards” the object, intrinsically connected to it. The relation is somewhat obscure but it seems real enough—thoughts are obviously about things. Let’s introduce the idea of concept-to-concept intentionality, whereby a concept “refers” to any concept with which it can combine. It is written into a concept what kinds of combination it accepts and what it rejects. Furthermore, when a concept is acting as part of a combination it has this kind of horizontal intentionality vis-à-vis the concepts combined with it. There is a relation R such that the concept has R to the concepts combined with it. The concept thus both points outward beyond concepts and also inward to other concepts: it is combinatorial as well as referential. It has a kind of double intentionality. And it needs both aspects if it is to do its job as a constituent of thought: it needs two sorts of relation—inter-conceptual and extra-conceptual. Both are admittedly puzzling and evidently sui generis, but I find the inter-conceptual relation even more elusive and perplexing than the extra-conceptual. In virtue of what do concepts combine? Hume spoke of causation as the “cement of the universe” and found it puzzling; concepts have their “cement of the mind” and it too is puzzling. We don’t even have inadequate theories of it. Indeed, it is far from easy to make the problem visible.

 

  [1] Strictly, for Frege some concepts (“senses”) stand for saturated entities (“objects”) and some stand for unsaturated entities (“concepts” in Frege’s technical sense): but I am not concerned with the details of Fregean exegesis here.


Puzzling Performatives

J.L. Austin insisted that utterances of performative sentences are neither true nor false. If I say, “I promise to dine with you” my utterance has no truth-value. Presumably this implies that it expresses no proposition (though it is clearly meaningful), since if it did it would have to be either true or false. It is not, as Austin puts it, a constative. Performative sentences thus belong with interrogatives and imperatives, despite their declarative grammar. Some people have contended against Austin’s position, claiming that such utterances do have truth-value, being generally true. They say that the utterance is true precisely in case the speaker is making a promise: if I say, “I promise to dine with you” this utterance will be true if and only if I (thereby) promise to dine with you. After all, “You promised to dine with me” will be truly uttered by you in the circumstance that I made the utterance in question. Thus, it is contended, performatives are constatives and do express truth-evaluable propositions; no special category needs to be created for them. Who is right?

            There seems to be something correct in both positions. Let us assemble more data, of which Austin would have approved. If performatives can be true we ought to be able to prefix them with “It’s true that”: so can I say, “It’s true that I promise to dine with you”? That sounds distinctly odd and not equivalent to the embedded performative. I don’t promise to dine with you by uttering such a sentence. Thus we have a breakdown of the usual equivalence of “p” and “It’s true that p”. I doubt that anyone has ever uttered such a sentence with the intention of making a promise or for any other reason. Compare “It’s true that I name this ship Bertha” and “It’s true that I hereby make you man and wife”. These may not be nonsense but they are close to it. They violate some sort of linguistic rule. But there are sentences in the vicinity that suffer no such defect and which cloud the issue. Thus there is nothing wrong with the following: “It’s true that I promised to dine with you”, “It’s true that I will promise to dine with you”, “It’s true that I ought to promise to dine with you”, and “It’s true that in saying ‘I promise to dine with you’ I thereby promise to dine with you”. We might even tolerate “It’s true that I am promising to dine with you”. And of course there is nothing amiss with “It’s true that you promised to dine with me” or even “It’s true that you are promising to dine with me” (uttered while I am mid-speech act). The only one of these sentences that raises hackles is the present tense performative case: this is the one that I cannot prefix with “It’s true that”. Thus it can be true that I promised but I can’t build this locution into my promising explicitly—I can’t say, “It’s true that I promise to dine with you”. It is as if the truth cannot be said but only shown. This is very different from the case of ordinary assertion where I can happily add the truth prefix. It’s puzzling.

            Austin focused on the question of truth and performatives, but actually the issue arises more broadly. Consider “I know that I promise to dine with you”: this too has an odd ring, in contrast to “I know that I promised to dine with you”, along with the future tense and deontic variants (“I know that I ought to promise to dine with you”). The performative stands out as uniquely resistant to the epistemic prefix. And yet isn’t it true that as I make a promise I know I am promising? You can certainly say, “He knew he was making a promise” or even “He knows he is making a promise”, but I can’t say, “I know I promise”; and similarly for performatives of naming and marrying. So it is not just “true” that interacts oddly with performatives; “know” does too. Austin might respond to this by saying that performative utterances are neither known nor unknown (by the speaker)—they are not candidates for knowledge. Others may retort that many sentences containing the relevant verbs do admit of “know” (e.g. “He knows full well that he promised to dine with me”). It is only the performative use of the verb that rejects the epistemic prefix.

What about other types of embedding? Consider negation: “It’s not the case that I promise to dine with you”. Again, this is a very odd sentence—is it some kind of negative performative used to decline to make a promise? You might say to me, “Promise to dine with me” and I might reply, “I won’t promise to dine with you”, but I won’t reply, “It’s not the case that I promise to dine with you”. What does that even mean? It doesn’t mean the same as “I promise not to dine with you”, which has the look of a regular performative—I have made a promise by uttering it. And yet I can obviously fail to make promises. Nor is there anything amiss with “It’s not the case that I ought to promise to dine with you”. Again, it is the performative case alone that declines negation. Imagine if instead of saying “Thank you for carrying my bag” I say, “I don’t thank you for carrying my bag”. Is this an attempt at a negative performative or just an inept way of saying “I’m not grateful for your carrying my bag”?

            It doesn’t end there, for consider: “Necessarily I promise to dine with you”. A linguistic monster indeed—what could it possibly mean? We can insert necessity all over the place with these verbs, but not there: I can say “Necessarily I will promise to dine with you” to express my belief in fatalism, or “Necessarily I ought to promise to dine with you” to express my deep moral convictions; but necessitating the performative itself would be a bizarre move in the language game. The same is true for “Possibly” or “It is contingent that”: we can’t put these in front of the performative either, but we can for other uses of the same verb. This is all grist to Austin’s mill because it confirms his doctrine that performative utterances are not statements at all but performances. If they were statements they could be true, could be known, could be negated, and could be necessitated; but instead they are acts performed by uttering words—acts of promising, naming, marrying, and thanking (and not acts of stating or asserting). If I promise to dine with you, I have performed an act like shaking your hand, and such acts are not true or false. If I could promise by some method other than saying  “I promise”, then there would be no temptation to suppose that promising is a kind of stating, since it need not be linguistic at all. Promising, greeting, thanking, marrying, and so on are not inherently linguistic acts—they could in principle be performed non-linguistically. We can talk about these acts and thereby speak truly or falsely, but the acts themselves aren’t true or false—though, as Austin reminds us, they can be performed more or less felicitously.

            So is it just wrong to suggest that performatives have truth-value? True, I can’t sensibly say, “It’s true that I promise to dine with you”, but does it follow that my speech act can’t be assigned a truth-value? When a person fails to name a ship by performing the ceremony, because he lacks the authority to name ships, isn’t it false that he named a ship? Can’t we say that his utterance “I name this ship Bertha” expressed a falsehood, since he failed to name the ship Bertha? That sounds reasonable enough, inescapable even, but we can’t convert this into permission to prefix performatives with the truth operator. So there is still something odd about performatives: even if they can be assigned truth-value, they differ from ordinary statements or constatives in that we can’t bring them within the scope of “it’s true that”. In fact, they also differ from non-indicative sentences in that these sentences really can’t be assigned truth-value (they don’t even look like statements). We can’t say, “It’s true that shut the door”, but we also can’t assign the truth-value True to “Shut the door” (when it has been shut). Imperatives shun truth altogether, while performatives tolerate it within limits. So performatives really do belong in a linguistic class of their own–puzzlingly so. Constatives are true or false and accept the truth operator; imperatives are not true or false and resist the truth operator; but performatives can be true or false while rejecting the truth operator (and other operators). A performative utterance is a statement-like speech act without being a genuine statement, so it has an ambivalent relationship to the concept of truth. What Austin really discovered is that the dichotomy between statements and non-statements is too simple: for some utterances are a bit like statements and a bit not like them. 
We shouldn’t operate with a dualism of the declarative and the non-declarative speech act, because performatives are genuine hybrids; they are neither one thing nor the other. They are a special class of sentences, but with affinities to other classes of sentence. Austin was basically right in his dispute with the levelers, but he exaggerated the distinctness of the performative utterance. Ironically, he was too wedded to a dichotomy in types of speech act. We need a trichotomy: declarative, non-declarative, and performative.

 

Colin McGinn

           

 


Believing Zombies
 

Could there be zombies that believe they are conscious?  [1] They have no consciousness, but they erroneously believe that they do. That may seem possible if we think of their beliefs as implanted at birth or something of the sort: couldn’t a super scientist simply interfere with their brain to install the belief that they are conscious, as innate beliefs are installed by the genes? The belief is false, but that is no obstacle to belief possession. We may have an innate belief that we are surrounded by a world of external physical objects, but that belief might be false if we are really brains in vats. Similarly, zombies might have false beliefs about their mental world, supposing it much fuller than it really is.

            But the matter is not so simple: for beliefs need reasons. What reason could the zombies have for believing they are conscious? The reason we believe we are conscious is that we are conscious and this fact is evident to us—without that we would not have the belief in question. If the believing zombies were to reflect on the beliefs they find implanted in them, they would wonder what grounds those beliefs—what evidence there is for them. Finding nothing, they would abandon their groundless beliefs, perhaps with a shake of the head at being so irrationally committed to something for which they have absolutely no reason. Minimal rationality would quickly disabuse them of their error; they would believe instead that they are not conscious, or possibly remain agnostic.

            It might be replied that consciousness is not necessary to ground belief in consciousness, only the appearance of consciousness is. The zombies have to be in an epistemic state just like our epistemic state except that we have consciousness and they have none—the appearance of consciousness without the reality. But this is contradictory, since the appearance of consciousness would have to be a form of consciousness: it would have to seem to them that they were conscious. For instance, it would have to seem to them that they have a conscious visual experience of yellow without having any conscious visual experience (of yellow or anything else). Surely that is impossible: seeming to have a conscious state is having a conscious state (of seeming). So the only reason they could have for believing they are conscious is that they are conscious, and they need a reason for that belief if they are to have it stably.

            Now it may be said that we are being too rationalistic about belief: people can believe things for no reason at all, without any evidence whatever. Couldn’t our zombies believe they are conscious because this is what they have always been taught, or because of superstition, or from wishful thinking? They want badly to believe they are conscious (it seems so undignified to be a mere zombie) and so they deceive themselves into believing it. Happens all the time: no evidence at all, but firm belief nonetheless. That sounds like a logical possibility, though it would be an odd case of irrational dogma or motivated self-deception. One problem is that irrational believers generally think they have reasons for belief, even though these putative reasons look hollow and unconvincing to everyone else. They will cite these reasons when challenged to defend their beliefs. But what will the zombies say when challenged? They can’t point to anything that even appears to be consciousness, since that would imply that they have consciousness. People whose religion requires them to believe in miracles will cite certain natural events as proof of said miracles, however unconvincing these events may be as evidence of miracles; but our zombies have absolutely nothing to point to, since the mere semblance of consciousness is a case of consciousness. Their religion may require them to believe they are conscious, but they can point to nothing that could even be interpreted as consciousness, because they have no consciousness. An appearance of miracle may fail to be a miracle, but an appearance of consciousness is always consciousness. And nothing else could provide any halfway reasonable grounds for their belief. So we are left with the idea that they believe they are conscious without even believing they have any grounds for that belief. [2] This gets us back to the case of beliefs that exist without even having any purported justification.
All they can say when challenged is, “I simply believe it”. This is a difficult thing to make sense of because beliefs need grounds of some sort (they purport to be knowledge after all).

            We should conclude that zombies that believe they are conscious are not possible. Any being that believes it is conscious must be conscious. That includes us: if we believe we are conscious, then we must be conscious. This refutes an eliminative view of experiential consciousness: it cannot be that we lack such consciousness while simultaneously believing that we have it. We cannot be actual zombies under the illusion that we possess consciousness.  [3]

 

  [1] These are zombies with respect to experiential consciousness not zombies tout court, since they are stipulated to have beliefs. The intuitive idea is that they have no conscious experience and yet they believe that they do: for example, they think they have conscious visual experiences of colors, but they don’t have any such experiences.

  [2] They may have a sacred text in which it is written that zombies are conscious, despite the introspective appearances, and they may be brainwashed into accepting that text. But then the “belief” they have is really a matter of faith, since they have no direct grounds for the belief, even of the thinnest kind. They accept the text only because of their religion, not because they can offer any justification for the beliefs it recommends. They don’t really believe they are conscious, as they (rightly) believe themselves to be embodied believers. For that they need some sort of evidence, even if it falls far short of what it is evidence for.

  [3] Some extremists have sought to deny that “visual qualia” (etc) exist, despite our firm conviction that they do exist. But it is simply not possible to believe in such things without there being such things, since they provide the only possible grounds for such a belief.


Knowledge of Consciousness

How does our knowledge of our own consciousness differ from our knowledge of other things? Presumably it does differ: there is something unique about the way I know my own conscious states. There are many types of conscious state (event, process) and many types of knowledge of conscious states (knowledge-that, knowledge-of, knowledge-what, memory knowledge), but all such knowledge is united in being knowledge of consciousness. There is something distinctive about this knowledge: for example, I know my present visual experience of dappled sunlight in a special way. But what is that way?  I won’t be able to answer this question (that is part of my point), so my remarks will be limited to locating a problem.

            A traditional answer is that I am certain of facts about consciousness, whereas I am not certain of facts about the external world. I infallibly know my own consciousness. That is not wrong, but it doesn’t answer our question, because there are non-conscious facts about which I can also be certain, e.g. elementary logic and arithmetic. I don’t know these facts in the way I know my consciousness—they are clearly not facts of consciousness. The same problem applies to appealing to the concept of the a priori: even if knowledge of consciousness is a priori, that is not unique to such knowledge, but applies more broadly. Similarly, the concept of acquaintance won’t help: maybe I am acquainted with my own consciousness, but I am acquainted with more than that—with shapes and colors, as well as (according to Russell) universals. The same goes for the concept of transparency: we can grant that consciousness is transparent to its subject, but it is not the only thing transparent to the subject—what about basic geometry and logical concepts? My consciousness is evident to me, but it’s not the only thing that is. And these notions have nothing specifically about consciousness built into them: they are more general than that, not geared to the peculiarities of consciousness. We need to know what it is about consciousness as such that makes knowledge of it special. Why is this type of knowledge like no other, in a class of its own? We might even ask why it is nothing like other types of knowledge, being entirely and spectacularly unique. Surely that has been the feeling about knowledge of consciousness, but the usual epistemic concepts fail to do justice to its uniqueness. So again, what is it that sets knowledge of consciousness so apart from other types of knowledge?

            Here is another answer, by no means unfamiliar: it is the only type of knowledge that exhibits an asymmetry between one epistemic subject and another. Not only is it true that I know my consciousness with certainty (by acquaintance, a priori, transparently), it is also true that you cannot know it this way, and perhaps cannot know it at all. This is the old distinction between first-person and third-person access to consciousness: the dramatic asymmetry of knowledge as we move from one subject to another. No such asymmetry applies to knowledge of logic, arithmetic, geometry, universals, and whatnot. Moreover, it is in the nature of consciousness to exhibit this epistemic asymmetry—part of its essence. So isn’t this what makes knowledge of consciousness special? It is certainly the kind of answer we seek, because of its specificity—consciousness is uniquely such as to be open to one subject and closed to every other subject—but it won’t do as it stands. For we need to know in virtue of what consciousness exhibits the asymmetry—what explains it. Why is it so closed in one direction and yet open in another direction? Also, the theory is essentially negative in form: consciousness is not accessible to others in the way it is to oneself. But we want to know the nature of the knowledge we have of consciousness in its first-person openness—how exactly do I know my experience of dappled sunlight in a way that others can’t? What relation do I as an epistemic subject have to my own conscious states? Don’t say the certainty relation (or the acquaintance relation or the transparency relation)—that just brings us back to where we were. We need a more positive characterization of first-person knowledge of consciousness. Something in my mind (my faculty of knowledge) hooks up to something else in my mind (my consciousness) in such a way as to produce knowledge of consciousness, but what is this “hooking up”?

One has the sense, perhaps, of one thing leaping into the arms of another as nothing else can—that there is a snugness of fit here that is unique in the world. It is as if consciousness is made to be known, that this is its destiny, that it could be no other way—hence the impossibility of skepticism regarding our knowledge of consciousness. By contrast, the rest of reality is known only by means of epistemic exertion or contortion—such knowledge requires effort and may fail (hence the real possibility of skepticism). Even if I know elementary arithmetic and geometry with certainty, that knowledge does not come to me without effort and risk—it is not given. It is acquired, secured, gained. But consciousness simply decants itself into my knowledge faculty, freely and unstintingly, with no obstacles or qualifications. It says, “Here I am, take me!” No other object of knowledge surrenders itself with such abandon (and these romantic metaphors are suggestive): everything else is coy, cagey, and reluctant by comparison. Consciousness offers itself without hesitation, on a platter, but even simple arithmetic exacts some epistemic cost—you have to think about it. Yes, I am certain that 2 + 2 = 4, as I am certain that I have a sensation of dappled sunlight, but in the former case ratiocination is required (or at least an act of insight or intuition), whereas in the latter case my certainty stems from something presented to me and not requiring anything of me. I simply know without effort or question that I have the sensation: there is no striving to find out, no mental concentration, no slight unease that I might be wrong. It is knowledge without anxiety or stress or expenditure of energy. The knowledge simply comes with the fact known, instead of calling on reserves of epistemic capital—some intellectual contribution, however minimal (it isn’t hard to know that 2 + 2 = 4).
By contrast, third-person knowledge of consciousness requires real cognitive effort and is fraught with anxiety and risk: it requires diligence and determination. It is work to know another’s consciousness, maybe futile work, but for the one whose consciousness it is the job can be done lying down. It is not a job at all, not a task or project, but simply part of being conscious. The knowledge simply happens without your having to lift a finger: consciousness automatically updates you about itself free of cognitive charge. You don’t even have to do as much as cock an ear or slant an eye. 

            I hope these rhetorical flights carry some resonance, but they hardly constitute a theory. They may capture some of the phenomenology of knowing one’s own consciousness, but they don’t tell us what this unique epistemic relation consists in. Consciousness and knowledge come together somehow, with all the ease and naturalness I have described, but we still don’t know how—by what process or mechanism or miracle (and one can feel the temptation to go that way). There are metaphors to play with (an activity not to be despised), but no clear theoretical conception attends their use. So we really don’t know what makes knowledge of consciousness special, or how it works, or what it is, or what makes it possible. It is a familiar fact of conscious life, and something additional to mere ground-level consciousness, but it resists analysis or elucidation. I know that I know my consciousness in a special way, but I don’t know what that way is. I can’t get my mind around it. All I can say is that consciousness and knowledge are made for each other.  [1]

 

  [1] Other things are not made to be known—sometimes they seem made not to be known. Much of the world systematically eludes knowledge, or at least challenges it. The microstructure of matter is not made for knowledge. Knowledge is generally an achievement, sometimes against all odds, but knowledge of consciousness is a gift, a freebie, a no-brainer—it requires no intelligence and no effort. There are no examinations in consciousness knowledge (everyone would get an A).

Performatives and Self-Reference
By uttering the words “I promise” a speaker can promise; he or she promises in virtue of uttering words. So we might expect performative utterances to allude to words as well as use them. Normally they do not take this form, containing no quotation or demonstrative reference to words. I say, “I promise to meet with you” and my utterance appears devoid of reference to words: all use, no mention. Yet we have the construction “hereby”, as in “I hereby promise to meet with you”. This seems to carry self-reference: I am saying that my promising is by means of my utterance. Others can report, “You promised to meet with me by saying ‘I promise to meet with you’”, and here the reference to words is evident. So I should be able to make things explicit in the same way, and the “hereby” construction suggests that I am incipiently doing just that. Can I also make the self-reference explicit?

            Surprisingly, it is not easy to do that, and it never happens in actual speech. Suppose I say, “By uttering these words I promise to meet with you”: this is not equivalent to the original performative and is obscure in sense. Which words—all of them or some? Let’s try this instead: “By uttering the words ‘I promise to meet with you’ I promise to meet with you”. This is even worse: it is not even clear that such a sentence can be used to make a promise. At best it might be taken to mean that uttering the words “I promise” is making a promise—which is not performative. Applying this kind of paraphrase to the performative sentence robs it of its performative power and turns it into a maladroit clunker. But how else could we make the self-reference explicit? If there is self-reference here, it is not like “This sentence is false” or “‘Snow is white’ contains three words”. The performative with “hereby” in it works perfectly well, but if we try to unpack it by means of standard devices of self-reference we produce monsters. We seem to have in performatives an unusual kind of self-reference: the utterance alludes to itself indirectly, but it declines to expand into explicit reference to itself. It is a kind of coy or oblique self-reference. It doesn’t fit the standard examples of self-reference by means of quotation or demonstrative reference.

            If I say, “I promise to meet with you by uttering ‘I promise to meet with you’”, that sentence appears to mean only that my meeting with you will be expedited by my utterance of those words. We can’t convert the implicit self-reference of the original performative into an explicit paraphrase involving straightforward quotation. The self-reference is essentially implicit—yet another oddity of performative sentences. It is the same with other performative verbs such as “baptize”: “I hereby baptize you Mary” is fine, but “I baptize you Mary by saying ‘I baptize you Mary’” is not fine. We nullify the act by referring to it; yet the act must be performed for the sentence to be true. If I say, “By this act I promise to meet with you”, I bring reference to the act of promising into my utterance, but then the utterance fails to carry its intended performative force. If I try to refer to the act of promising in my promise, I undercut my promise; yet that act must exist in order that I should promise. The performative is a peculiar beast—the platypus of speech acts.

 
