Dichotomous Knowledge

There is a long tradition of recognizing two distinct categories of knowledge: knowledge of logic, meaning, and mathematics, on the one hand, and knowledge of geography, history, and chemistry, on the other (these lists are not exhaustive). We might name these “A-type knowledge” and “B-type knowledge”, so as to be neutral about the nature of the distinction. It is not obvious what the difference actually consists in. Looking for something more descriptive, philosophers have hit on a pair of Latin terms that seem vaguely evocative and have stuck, namely “a priori” and “a posteriori” (usually italicized to mark their somewhat esoteric meaning). If we try to dig into them, we do not return with conceptual gold: they translate as “prior” and “posterior”, as in “before” and “after”. The OED gives us “existing or coming before in time, order, or importance” and “coming after in time or order; later”, respectively. Before or after what, we ask. Not breakfast or lunch, presumably; no, before or after something called experience. That notion is not pellucid or theory-independent; it is usually specialized to sense experience, i.e., the five human senses. Here things start to get controversial; also, the reference to temporal order is questionable—does A-type knowledge always precede B-type knowledge in time? The original distinction is not so much clarified as immersed in definitional murkiness. Does this form of definition really capture what we intuitively recognize in the examples cited? Isn’t it more of a place-holder than a theory? Not surprisingly, other concepts have been invoked in order to hit off the distinction: experiential vs. non-experiential, analytic vs. synthetic, intuitive vs. observational, innate vs. acquired, abstract vs. concrete, causal vs. non-causal, rational vs. sensory.
These all have their merits and demerits, which have been amply discussed; I think they are all lacking in one crucial respect—they don’t tell us about the internal structure of the two types of knowledge. They focus on the means of acquiring the types, not their constitutive anatomy. They say little, if anything, about the inner cognitive architecture of knowledge of A-type truths or B-type truths. They tell us about etiology but not about form—causation not constitution. We are trying to articulate epistemic natural kinds, and to do that we have to look at the internal structure of the two types of knowledge. How do they operate, and what kind of structure permits that operation? To answer these questions is going to take some fresh thinking.[1]

B-type knowledge proceeds as follows: first, observations of particular states of affairs take place; then, inferences to generalizations are made. General knowledge is inductively inferred from perceptual knowledge of instances of the putative generalization. Singular instances precede general truths. The particular is anterior to the general: the former justifies the latter, while being itself justified by experience. That is the basic structure of what we call a posteriori knowledge. Notice that the singular premises are separate and distinct—one observation does not entail another. We may think of these as the atoms of this kind of knowledge, its ultimate constituents or building-blocks. B-type knowledge is thus atomistic and inductive (or abductive). I know that the Sun will rise tomorrow because I have seen it rise many times in the past, each of these a separate act of knowing. If we were to draw a diagram of this kind of knowledge, it would have an array of points on one side, corresponding to the totality of singular observations, and a bunch of arrows pointing right to the general truth thereby justified. The cognitive structure has this kind of shape and it operates by the depicted procedure; we might call it the singular-to-general structure, or the “sing-gen” structure. Experience is the medium in which this occurs, its enabling format. But A-type knowledge is not like that; in fact, it inverts this order of reasoning. Logic is the paradigm: we don’t know that modus ponens or the law of non-contradiction is logically true by inductive inference from individual instances; instead, we know the general principle directly and then deduce its instances. The order of justification is reversed. If we were to draw a diagram, it would have a general principle on the left with arrows pointing to individual instances on the right: it would have a general-to-singular structure, or a “gen-sing” structure. 
What we call reason or intellect is the medium in which this occurs. First, we know the generalization by what we call “intuition”, then we move deductively to its logical consequences. It is the same with mathematical and semantic knowledge: we don’t infer that every number has a successor from individual instances by induction, or that bachelors are unmarried by observing that similar propositions have always been true in the past (e.g., the proposition that spinsters are unmarried). We grasp a general principle and know it to be true, and then we deduce its particular instances. This implies that such knowledge has a holistic character in that the consequences are united by a general principle and are not known separately from each other. You can’t know that 4 has a successor but not know that 6 does, or that sisters are female without knowing that brothers are male (given that you have the concept). You know a general property of numbers and of meanings (that semantic containment generates analytic truths). The general is epistemically prior to the singular, whereas with B-type knowledge the singular is prior to the general. In addition, B-type knowledge takes the form of a bundle while A-type knowledge takes the form of a unified whole. The form of what we designate as a priori knowledge is aptly captured by a system of general axioms and theorems; but the form of a posteriori knowledge consists of a collection of singular instances linked inductively to a general proposition. The fundamental unit of empirical knowledge is the singular proposition, but the fundamental unit of a priori knowledge is the universal proposition. A-type knowledge is principle-based and holistic, while B-type knowledge is particular-based and atomistic. So-called a priori knowledge is systematic and organized (“logical”); so-called a posteriori knowledge is fragmentary and haphazard, dependent upon the knower’s location and sensory acuity (“accidental”).
We could say that the latter type of knowledge pertains to a totality of particular facts, while the former concerns a body of general principles. The two types of knowledge move in different “spaces”, proceeding by different methods. The cognitive apparatus is different in the two cases, differently structured.

The two types of knowledge are therefore opposites of each other. The faculties involved have contrasting forms. The process of knowledge formation accordingly varies in the two cases. It isn’t just that they have different types of causation—one is caused by experience and one isn’t—they have a different inner structure. The rest is extrinsic. Moreover, all varieties of the two types are unified in their structure: all a priori knowledge fits the “gen-sing” form and all a posteriori knowledge fits the “sing-gen” form. Thus, we can include metaphysics (and philosophy in general) and ethics in the category to which logic, mathematics, and semantics belong; and we can include knowledge of mind in the category to which geography and chemistry belong. For example, knowing the nature of identity belongs to the A-type category and so does knowing that cruelty is wrong; and my knowledge that I am in pain or that people have an unconscious belongs in the B-type category. There really are two large and distinct types of knowledge—two basic epistemic natural kinds. This is a philosophical discovery of the first importance. Trying to discredit it is a fool’s errand. And it is an interesting discovery: we have two very different modes of knowing co-existing (and interacting) in our head—it’s surprising we have a single word (and concept) for both of them. Knowledge is deeply dichotomous. Some knowledge is constructed from separate pieces picked up by the senses—it is compositional, aggregative; but some knowledge is based on general principles that are recognized by the intellect and imply many (infinitely many) individual truths. They are polar opposites, yet both qualify as types of knowledge. Memory of particular facts is the hallmark of one, but not of the other (we don’t remember that 4 is even or that every number has a successor). Memory is atomistic and selective; reason is holistic and inclusive.
If anything, we have underestimated the differences between the two types of knowledge, as if they only differ in their origins and not in their intrinsic character—innate versus acquired, derived from experience or the result of intuition, based on the senses or the intellect. But actually, they differ in their fundamental modus operandi and internal logic (induction or deduction, derivational structure). It’s not a matter of what comes first but of the deep nature of what comes.[2]

[1] I published my first paper on this subject nearly fifty years ago: “A Priori and A Posteriori Knowledge” (1976). I am still thinking about it.

[2] This paper is quite programmatic; a more thorough treatment would require looking in detail at specific areas of knowledge and teasing out the role of the singular and general within them. The kinship between the senses and the singular, on the one hand, and that between the ratiocinative and the universal, on the other, is however quite intuitive and traditional (see Plato). Just consider the contrast between the propositions of pure geometry and ascriptions of shape to particular objects. Generality is at the heart of the a priori, while the a posteriori has trouble escaping the bonds of the particular (witness the problem of induction).


The Survival of the Fittest?

Herbert Spencer’s phrase “the survival of the fittest” has done a lot of mischief, not only in biology, but also in politics, economics, ethics, history, and education. The phrase is riddled with confusion, ambiguity, and tendentious error. I will indict the phrase, first its use in biology and then in other fields, particularly economics. We can start with some trenchant remarks from Keynes: “The philosophers and the economists told us [in the nineteenth century] that for sundry deep reasons unfettered private enterprise would promote the greatest good of the whole. What could suit the business man better?… Thus the ground was fertile for the doctrine that…state action should be narrowly confined and economic life left, unregulated so far as may be, to the skill and good sense of individual citizens actuated by the admirable motive of trying to get on in the world…  The economists were teaching that wealth, commerce, and machinery were the children of free competition—that free competition built London. But the Darwinians could go one better than that—free competition had built man. The human eye was no longer the demonstration of design, miraculously contriving all things for the best; it was the supreme achievement of chance, operating under conditions of free competition and laissez-faire. The principle of the survival of the fittest could be regarded as a vast generalization of the Ricardian economics.”[1] Accordingly, Darwinian biology could be touted as the foundation and rationale of individualistic capitalist economics, and hence anti-socialist politics (as well as egoistic ethics). The phrase “the survival of the fittest” was the expression of solid biological science—and hence of the general nature of life on earth. But the phrase was not much scrutinized, falling off the lips as it does—it sounds like common sense made rigorous. So, let’s ask what it means and whether it describes anything actually true in the biological world.

The phrase is evidently intended to express the idea that the “fittest” organisms live longer than the less fit ones: fitness produces longevity. An individual organism will survive longer than other individuals with which it is in competition if it has a higher degree of fitness. But what is fitness? The OED gives us the following for “fit”: “in good health, especially because of regular exercise”. This is the use of “fit” we employ when we say “You are looking very fit” or “Are you keeping fit?”. We may thus paraphrase Spencer’s phrase as “the survival of the healthiest, especially because of regular exercise”. Those animals survive best that are, and keep, fit—that are “in good shape”. So, now we know what the phrase means, pretty much, but is it true? And let us observe that it is intended to exclude other factors that might be thought to lead to survival—such as being designed by God, or well-connected, or of superior breeding, or notably virtuous. Survival depends solely on fitness—on physical good health. If you want to know whether a given organism will survive longer than its rivals, then you need only check its physical fitness. However, this test is neither necessary nor sufficient for survival, or even its relative probability. Not necessary because not all organisms are fit in the stipulated sense: are plants fit or bacteria or jellyfish? Do they exercise regularly? Do some have stronger muscles than others, or get out of breath less easily? Of course not, so their survival is not a matter of being more fit than other organisms in the customary sense. Here we might appeal to the definition in terms of “good health”—plants etc. can be more or less healthy. But what is the criterion for good health? It had better not be “conduces to survival” on pain of generating a tautology (“organisms survive best by having traits that conduce to survival”).
To avoid this kind of trivialization we need to specify what kind of trait is “healthy”—such as muscular strength, or speed, or flexibility. But these are not universal traits of organisms subject to evolution by natural selection. Many organisms are not fit at all if we mean by “fit” what the dictionary says. But what else could we mean? Isn’t that definition exactly what we have in mind when we hear the phrase “survival of the fittest”, not noticing that it applies only to a subclass of organisms (mainly humans)? And what of rich well-connected humans who are sickly and bed-bound but supported into old age by wealth and privilege, while fit strapping youths die in battle at an early age? It all depends on your circumstances, your environment. Physical fitness is surely just one of many factors that contribute to longevity (in favorable circumstances)—what about intelligence, sociability, cunningness, good looks, an optimistic temperament? Individual physical differences are not the only things that affect survival; the mind does too. Mental qualities are not types of fitness yet they have an impact on survival. We need a more inclusive general notion than “fitness” if we are to capture the full range of survival-conducive traits—without falling back into the tautological “anything that contributes to survival”. The truth is that there is no such general notion, which is why “fitness” is so regularly (and uncritically) invoked.

The upshot is that there is no law of biology that specifies what trait an organism must possess in order to survive, or have the probability of its survival raised. The fitness formulation attempts to specify such a trait, but it fails to say anything true (even approximately). There is no law of the form “All organisms must have trait T in order to survive”. There are many traits that can lead to survival (or its opposite) with nothing significant in common, and many kinds of environmental context that affect the efficacy of those traits (being fit and strong won’t help you survive long in a gladiatorial culture, compared to your less pugilistic compatriots). There is no single trait that is correlated lawfully with survival—not fitness and nothing else. The world is too messy and complicated for that. Nor does Darwinian biology require the existence of such a law. Natural selection acts variously and contextually. So, there is no scientific law of biology that can be invoked by other disciplines in order to confer respectability on their own predilections. Darwinian biology is not a prelude to laissez-faire economics and individualistic capitalism. In particular, there is no biological invisible hand that ensures that the laws governing survival necessarily produce the “fittest” future populations. Natural selection does not select for something called “fitness”, i.e., the state of being fit; it does not increase the level of bodily vigor in the organisms it operates on, or some such thing. Human nature, for instance, is not the result of a law that leads inexorably to an increase in something called “fitness”—we are not the fittest of all creatures (many of us are not fit at all). If being fit is thought to be a perfection, there is no law of biology that leads to a more perfect species. There is no “the survival of the X-ist”, where X is some general trait common to all organisms.
There is no law of this form leading inevitably to improvement, progress, a more perfect world. So, there is no law of nature we can rely on to do what an intelligent designer is supposed to do. There is no substitute in nature for what God was supposed to ensure. Evolution could lead to a worse world, a less “fit” world. Natural selection just selects; it doesn’t select for some admirable trait like fitness. When the dinosaurs were wiped out it wasn’t because they weren’t fit—that they didn’t take regular exercise or mind their diet. In the biological world, shit happens; it isn’t all a steady accumulation of fitness, viewed as a type of perfection. It isn’t that evolution will inevitably lead to organisms comparable to Olympic athletes. It isn’t that in the end all animals will be incredibly fit (or incredibly smart or sexy or moral). That is all mythology supported by a rickety phrase invented by a chap named Herbert Spencer in the nineteenth century.

Now we must talk about the appropriation of the offending phrase by economists and sundry others. The idea is that entrepreneurs and their products are subject to the same law that Darwin enunciated and Spencer christened: they survive according to their “fitness”. They compete with each other, subject to no outside authority, in the quest to make a better product, and the best man wins. In this they are just following the dictates of nature, engaged in a battle for survival. But that survival will inevitably bring with it a rise in the quality of the world, i.e., better goods and services. The fittest wins, survives into the future, outclassing the opposition. The business man is like a tiger in the wild, ruthless but bent on benefiting the world by being super-fit. Just as Darwinian laws benefit the species (allegedly), so economic laws benefit society as a whole. The trouble is that there is no such biological law, so we will need to look elsewhere for a defense of laissez-faire economics. I won’t discuss all the problems of laissez-faire economics—I am not an economist and they have been amply discussed by experts[2]—I wish only to point out that biology provides no rationale for such an outlook. The same is true for politics, morality, history, and education. No respectability accrues to certain doctrines in these areas from any supposed analogy to orthodox biology, because the whole idea of the survival of the fittest is shot through with difficulties. There simply is no well-defined notion of fitness applicable to all organisms such that that trait is selected for in the battle for survival. There is just survival and its absence, aided by a large range of traits each suitable for the species that has them. We can certainly say that those organisms survive that are the fittest to survive, but that is a mere tautology and does not include the ordinary notion of fitness; here “fittest” means “cut out for” not “physically fit”.
The phrase has survived as long as it has by dint of ambiguity, vagueness, and suggestiveness, not by denoting a well-founded piece of science. It should be retired from civilized discourse.[3]

[1] This is from Keynes’s 1926 essay “The End of Laissez-faire”.

[2] Keynes has a nice discussion in “The End of Laissez-faire”.

[3] It isn’t that markets don’t operate like species, though they don’t; it’s that species don’t operate like species, as conceived by the survival-of-the-fittest trope. This is really a meme not a scientific theory. Nor is survival the essence of the matter; reproduction is. It should be obvious that the contemporary biologist’s notion of fitness, defined in terms of quantity of offspring, has nothing to do with fitness in the vernacular sense invoked by Spencer’s phrase. There is a lot of confusion surrounding biological terminology here.


Is Consciousness Shrinking?

What is the ratio of the conscious mind to the unconscious mind? Is the human unconscious half the size of the conscious or twice the size? What percentage of mental activity is carried out unconsciously and what consciously? Does this proportion vary between species? Which species has the largest unconscious mind relative to its conscious mind? If you are a believer in the Freudian unconscious, do you think other animals also have a Freudian unconscious, and is it as extensive as the human unconscious? So far as I know, such questions have never been broached. There must be a capacious linguistic unconscious in humans, given the nature of our linguistic competence, but do animals that employ various kinds of signal system likewise have a large signaling unconscious? Does the unconscious expand over the course of a lifetime? There is reason to think that it does, because it is common for tasks that begin with conscious mental activity to gradually become taken over by the unconscious. This is because of considerations of economy: the conscious mind is easily overloaded and it is more economical to shift the burden to unconscious processes; the machinery gets moved to the basement. Consciousness is notoriously slow compared to the unconscious. Just think how slow and cumbersome speech would be if it had to be figured out consciously—or walking, driving, throwing. The involvement of conscious activity in these skills shrinks as the skill is acquired; that part of the conscious mind falls away to take up residence in the unconscious mind. To that extent and in that sense the conscious mind gets smaller—less full. The ideal is to be able to perform the skill while your conscious mind is a blank, so that you can daydream and let your mind wander. Conscious activity is not required to perform the task in question, so it has a tendency to disappear, handing the responsibility over to the unconscious. It is no longer actively involved.

Suppose there was a mind that began with a zero unconscious and a jam-packed consciousness, but then over time the process of shifting responsibility to the unconscious proceeded apace. More and more of mental functioning is delegated to the unconscious, leaving the conscious mind free for more agreeable occupations—listening to music, fantasizing about movie stars, meditating as vacantly as possible. There would be consciousness shrinkage and corresponding unconsciousness expansion. This might reach the point that consciousness had hardly anything to do, and nothing vital to the well-being of the conscious subject. It might exist only as a light buzz, or not at all. Suppose that is the normal course of the lifetime of an organism constructed like this: a childhood of brimming and taxing consciousness, followed by a middle age of relative conscious relaxation, ending in an old age of virtual unconsciousness. The unconscious has taken over all the jobs that used to be done by the conscious. There don’t seem to be any actual species that develop like this, but there could be, logically speaking. Such a species has a shrinking consciousness; it undergoes consciousness atrophy or downsizing. It was once big and now it is little. It slowly becomes unnecessary. Then, assuming all this, my question is: Could this happen to consciousness as it exists in the biological world? Could it gradually fade away over evolutionary time? Is it destined to disappear as the unconscious takes over its functions? Might consciousness be only a temporary feature of evolutionary history on this planet? Could natural selection eventually phase it out in favor of more efficient unconscious processes and mechanisms? Might the unconscious mind take over completely? It is, of course, extremely difficult to obtain evidence as to whether consciousness has been expanding or contracting over evolutionary time, and ditto for the unconscious.
How could we empirically determine whether the conscious mind has been ceding territory to the unconscious mind? But it doesn’t seem wildly implausible to suppose that the dinosaur mind, say, was heavily tilted in the consciousness direction: most of what went on in it was conscious, with only a minimal unconscious (nothing Freudian or Chomskyan going on). The mind that evolved early on was largely a conscious mind; only after eons did it grow a subsidiary unconscious, so as to avoid the burden of a crammed consciousness. The unconscious is a fancy adaptation, developed rather late in the game, designed to relieve consciousness of too much responsibility for organizing behavior. First the conscious, then the unconscious—as with skill acquisition. It strikes me as very likely that the human unconscious is by far the largest in the animal world, but that our consciousness is relatively confined. Certainly, our sensory consciousness is more limited than that of many animals—just consider our relative poverty with respect to sounds, smells, eyesight, and possibly taste (though we seem pretty discriminating in this respect). The elephant’s consciousness might well be larger than ours, but I doubt there is much going on in the elephant’s unconscious. There has been a trend towards smaller animals since the time of the dinosaurs, and maybe the same is true of the size of consciousness (with a corresponding increase in the dimensions of the unconscious). Might this trend continue until consciousness is replaced by purely unconscious mental processing, or by some negligible remnant of consciousness as it exists today? Might consciousness become extinct like so many biological adaptations? Has the hominid line been slowly shedding its earlier glorious consciousness in favor of a more streamlined and efficient unconscious? Are we less conscious than we used to be, more zombie-like? Is there less that it’s like to be us? 
The hypothesis does not seem beyond the bounds of possibility.

One future scenario is that human technological ingenuity might hasten this progression—I mean AI. What if devices are invented that can slot into the brain to take over many of the tasks now executed by the conscious mind? We might well welcome these as reducing the tedium that consciousness regularly courts (tax forms etc.). What if we could vastly improve our efficiency by inserting these devices, creating a kind of machine-based unconscious? Wouldn’t people want that kind of advantage? The end result could be a massively reduced consciousness, or no consciousness at all. We might retain some remnants of conscious pleasure for old time’s sake, but otherwise we hand things over to our man-made unconscious. AI becomes the form that the new unconscious takes. After all, consciousness was always slow and glitchy, prone to breakdown and depression, so we might be better off without it. On some remote planet this might already have happened: the inhabitants were once conscious and fleshy, but now they have gone fully unconscious and partly mechanical. Consciousness just wasn’t cutting it for them, so they phased that junk out. They de-conscioused. In their history, consciousness came—and then it went. Consequently, they don’t see themselves as raising a difficult mind-body problem, since there is no consciousness to be puzzled about. There is nothing it is like for these robotic beings. And is it even correct to describe them as robotic? Maybe the unconscious mind that pulses within them is a sensitive and sophisticated thing, morally sound and peace-loving; it may be soulful, creative, family-oriented, and kind to animals. Who knows what evolution can bring? Consciousness may not be all that it’s cracked up to be when it comes to intelligence and moral behavior. People still operating with the old conscious brain might be looked down on as primitive and hidebound—they need to get with the program.
Eventually consciousness becomes a distant memory, darkly spoken of in ancient texts; the universe has gone completely unconscious, though still mentally rich (the unconscious being a type of mind). In the history of the cosmos, consciousness was born, grew, and flourished; then it began to shrink, giving way to superior forms of mentality, eventually disappearing entirely—a mere blip in the cosmic drama. To us it seems central, crucial, infinitely valuable, but maybe it will turn out to be just another discarded evolutionary gimmick destined to be superseded. Even now it may be on its way out. It may follow the fate of the dodo.[1]

[1] Another possibility is that some species retain consciousness in some form while others discard it. Reptiles may stay conscious in their modest way, but mammals abandon consciousness in favor of a supercharged unconscious. The most advanced animals move on to the new biological reality while the more pedestrian types stick with the consciousness game. The most successful species are thus the unconscious ones. Or plants become conscious and animals cease to be. Evolution is nothing if not creative.


Reality and Appearance

Appearances are part of reality, even when they are illusions; they are real things, no less so when not representing reality correctly. But is reality part of appearance? Certainly, not all reality is presented in appearances, since some parts of reality have appeared to no one (unless we include God). But is reality ever part of appearance—does reality ever appear? I will argue that reality never appears as it is in itself. In its actual nature it never appears; it never appears as it is. No aspect of it ever appears as it really is. This means that we never experience reality as it is in itself in any respect. It would be widely agreed that some of its appearance is foreign to it as it is in itself (e.g., secondary qualities); I am suggesting that all of it is strictly outside the scope of appearance. For example, no one has ever seen an object as it is in reality, though it is a certain way in reality. The reason for this is simple, indeed truistic: appearances are always viewpoint-relative, but reality is not. The way the physical world is in itself is independent of any viewpoint; no viewpoint is built into it. But viewpoint is always built into appearance, necessarily. The easiest case is spatial perspective: we sense things from a position in space, but things in space don’t themselves have any spatial perspective. Perspective belongs to the appearances but not to the reality that appears. In other words, perspective is subjective (subject-relative) while reality is objective (not subject-relative). Appearance contains a point of view; reality does not. There is no point of view in reality, just brute existence—being in space is not a matter of being perceived to be in space. This is a fact about reality—a metaphysical fact—but it is a fact that cannot be represented in perceptual appearances. Thus, reality could never be perceived just as it is: the way it is for no one cannot be captured in an appearance to someone.
Nor could it be captured by the universe itself, since the universe does not appear to itself. If it did, it would have a point of view, thus undermining its claim to complete objectivity. To put it differently, sense-data cannot represent objective states of affairs objectively. Sense-data may be caused by objective states of affairs (no doubt they are), and in that sense may be said to be de re objects of experience, but they cannot represent how the states of affairs are in themselves and only in themselves. That, indeed, is a conceptual truth.

At this point questions crowd in. Is this conclusion true only for perceptual experience but not for appearance in thought? What does it imply about human knowledge? What about Berkeley? Is it true for numbers and mental states? I will consider each question in turn. The first question is answered by observing that thought and perception are not unconnected: to the degree that our thoughts reflect our perceptions, they too are incapable of representing reality as it intrinsically is. Even a mild version of empiricism will generate this result. Certainly, we can’t imagine the real world without any tincture of our world of subjective appearance, because imagination tracks the senses. And isn’t it true that if we try to focus on our thoughts about the physical world, we find that we are forced back to a perspectival conception of reality? If I try to think of the world from no perspective, I find that I don’t know what I am thinking—my own self keeps intruding (hence the temptations of solipsism). Every conception of things is a type of view, but there is no such thing as a view from nowhere. We ask about someone’s views, tacitly conceding that they are locked into a certain perspective, even if it is just the human (or mammalian) perspective. Reality does not come to us neat but diluted by our own sensibility, sensory or intellectual. The phenomenal world is our world; we can’t grasp reality noumenally. So, thought cannot escape the tyranny of the appearances. At the limit we are subject to the tyranny of intelligence (reason, intellect) in that we see things from our cognitive vantage point—our concepts, our logical faculties. The subject cannot escape himself altogether. But reality is under no such constraint, simply possessing being itself, knowing subjects be damned. Thought, by contrast, cannot escape its status as thought: thought is always present to itself, intrusively so. Thought is not transparent.
There is no such thing as a fact being embedded in thought just as it is, neither more nor less. Thought always adds and subtracts, because thought is a form of appearance (the intellectual form). A physical fact thought about is always thought about under a mode of presentation, but facts themselves have no mode of presentation—they are pure reference, so to speak. Being is not being for anyone. There is thus a fundamental mismatch at the heart of all mental representation. The mind never encompasses reality just as it is in itself. The idea is contradictory.

Does that mean that human knowledge of reality is impossible? No, because knowledge does not require complete objectivity; it requires, rather, a type of tracking. The knowing mind must correspond to reality, reliably, deeply, but it need not be reality—as if only the object itself can truly know its own nature. Knowledge does not require identity between subject and object, only correlation. Knowledge is true justified belief—a type of mapping—not the upload of world into mind. Scientific knowledge is not compromised by the admission that it cannot describe the world in completely unadulterated objective terms, as if the knowing subject has somehow disappeared. The knowing mind never collapses into the world; it parallels it. Maybe we have a fantastic ideal of knowledge in which the mind is invaded by the world as it objectively is, setting up camp in it as it were; but realistically, knowledge cannot aspire to such an encounter–it must be content to provide an atlas of reality, a guide. No one ever contended that x knows that p if and only if x’s mind apprehends reality as it objectively and intrinsically is: that is far too strong a requirement. Scientific realism does not require that reality should enter wholly and directly into the scientific mind, like a shoe in a box. It does not require an epistemology of containment.

What about Berkeley? It might be supposed that a Berkeleyan metaphysics could prevent reality from eluding the clutches of appearance. If reality is appearance (idealism), then it cannot lie outside of appearance; it must be a type of appearance, viewpoint and all. And surely, we can grasp appearances! But remember that reality for Berkeley is appearance in God’s mind not just any old mind (yours, your neighbor’s); and therefore, the kind of appearance that constitutes reality is not like ordinary human appearance. Do we really grasp what it would be for something to appear thus-and-so to God? Do we grasp the full reality of divine appearances, and not from our own limited perspective on them? Doubtful: so, we don’t have a conception of reality in Berkeley’s system that allows it to be captured by our human appearances—it transcends them. It probably transcends them even more than material substance—it is further from our natural modes of comprehension. At any rate, such a metaphysics doesn’t render reality one whit closer to what can enter into human representations; we are not acquainted with a reality so conceived, as we are not acquainted with ordinary objects under metaphysical materialism (i.e., the doctrine that physical objects are material not mental).

I just said that we know our own appearances—that they appear to us just as they are in themselves. But is that really true? This raises the broader question of whether any elements of reality appear to us just as they are, neither more nor less. Does pain, for example, appear just as it is intrinsically? That certainly seems like a more appealing proposition than it does for physical objects outside the mind, but on closer inspection it too comes to seem questionable. For it is arguable that pain also has dimensions that don’t show up in its appearance to us: it may have an underside that escapes our introspective awareness. What about its functional and cerebral properties? These don’t reveal themselves to our powers of introspection, so the first-person appearance of pain is not a mirror of the full nature of pain. And when we include these physical features by taking the brain and behavior into account we are back with elusive physical facts. Is it even clear that we have a complete picture of the phenomenology of pain just from our ordinary introspective awareness? Maybe there are details and similarities that are not immediately apparent to us: the reality of pain’s phenomenology might exceed the appearances it presents to us. After all, first-person introspection is just one perspective on pain, though doubtless a central one; pain itself might have a subjective reality that goes beyond such a perspective. It is a constituent of objective reality as well as a conspicuous presence in my subjective image of the world. It has being-in-itself as well as being-for-me.

Lastly, what should we say about mathematics? When I think about numbers do I grasp their objective reality? I know truths about them, to be sure, but do they offer their whole being to my cognitive faculties? I don’t view them from a particular spatial perspective, so it isn’t as if I falsify their inner nature in the way I do with concrete objects. But do I really perceive (intellectually) their actual intrinsic nature? Is there no more to their intrinsic nature than what appears to my mind? That seems hard to maintain: we don’t even know whether they are abstract, mental, or notational! We are ontologically myopic with respect to numbers. What if they exist in Platonic heaven right next to the Form of the Good—is that any part of our normal encounters with numbers? Scarcely. We may have a very partial and biased picture of mathematical objects; their reality may differ significantly from their appearance to us. Thus, I am inclined to believe that number appearance does not fully disclose number reality, though our knowledge of truths about numbers is one of our stronger areas of knowledge. It is hard, then, to escape the conclusion that reality never coincides with appearance. Appearances always omit aspects of reality as well as impose aspects alien to the thing itself. Our very concept of reality is too rarified for comfort, though indispensable.[1]

[1] Hume would say that we have no impression of subject-independent reality corresponding to our putative idea of it. The idea is thus under suspicion of emptiness. That is no doubt too strong, but it is true that the idea is unnervingly abstract and disturbingly noumenal.


Emotional Logic

The idea that emotions are exempt from logic is widely received. Emotion is supposed to be where logic breaks down, where the mind eludes logic’s inexorable grip. It is the domain of the unruly, the irrational, the unprincipled—a kind of mental anarchy. This is wrong on several levels. Of course, emotions are subject to logic: it is not logically possible to both love and not love the same thing at the same time. We can reason logically about the emotions, as we can about anything. The emotions do not constitute an extra-logical domain. Nor is it true that emotions lack a rational basis: they arise from solid biological roots (supplemented by learning) and serve a biological function; they enable their possessor to survive better. Emotions are not anti-prudential or somehow self-destructive (pace Mr. Spock).  They are useful adaptations just like intelligence or the ability to think logically. Sure, they can misfire and be disruptive on occasion, but they are not glitches or gremlins in our psychological economy. But is there anything logic-like in their internal geography—do they exhibit logical behavior? One point, frequently remarked, is that the range of emotions exhibits a certain kind of logical structure: emotions come in opposed pairs, rather like assertion and denial. Thus, love and hate, delight and disgust, attraction and repulsion, security and fear, calm and agitation, admiration and contempt, like and dislike, interest and boredom, happiness and misery. There are opposed poles of emotion, not unlike truth and falsity. Hence, we speak of positive and negative emotions. Emotions form a pattern, an intelligible structure, a kind of matrix. The mind (heart, soul) moves around this matrix in intelligible ways; it dances to the music of the emotions. We all understand this structure; we have a basic competence in its categories. We grasp its grammar. We speak its language. 
It isn’t just a haphazard assemblage of arbitrary urgings having neither sense nor logic. It isn’t just mental chaos.

But is there an emotional logic as there is a propositional logic or a quantifier logic or a modal logic or a tense logic or a deontic logic or an epistemic logic? That may seem like a bridge too far in the quest to redeem emotion from its reputation as logically inept (or sublimely transcendent). Surely, there cannot be a formal logic of emotion! Emotion words cannot function as logical operators, logical constants in a formal system of deduction. None such exists, and with good reason. What would a logic of love even look like? Could the concept of disgust play the same kind of role as the concept of necessity in modal logic? Could there be a logic text, littered with logical symbols, for happiness? Actually, I don’t see why not; in fact, it is surprising that such a logic does not already exist, since it follows the same pattern as those other types of logical system. Affect logic is a thing; it merely awaits codification. First, we need some logically true axioms expressed in conditional form. I will consider a system that employs two basic operators (there could be other such systems): L and F, where L is intuitively love and F is fear. Think of these as analogous to necessity and possibility. Then we have this axiom: if LLp, then Lp, i.e., if x loves to love y, then x loves y. I write the axiom with a propositional variable so that L and F can be seen as sentence operators like logical negation—x loves to love that p. The converse will not be an axiom, since someone might love another without loving to love this other. Love, like necessity, can be iterated indefinitely (if pointlessly): x loves to love loving y. We get a similar axiom for F: if FFp, then Fp, i.e., if you fear to fear something you also fear it (if only dispositionally). Again, the converse does not hold. We can make the same points about sadness, happiness, delight, etc. We have recursive operators that generate logical axioms. Accordingly, we have logical deductions employing these operators.

Are there any logical axioms containing both L and F? These would be analogous to modal axioms containing necessity and possibility. There ought to be such axioms because of an incompatibility between love and fear: we don’t love what we fear or fear what we love (in the normal course of things). We want to be close to the loved object but not the feared object. Thus, we have the axioms: if Lp, then not Fp; and if Fp, then not Lp.  Can we fear to love something? If so, we could add the axiom: if FLp, then Fp—if we fear to love something, then we fear that thing. Is that a logical truth? Why would we fear to love something unless we already feared it (heroin, say)? I won’t try to adjudicate the issue, merely remarking that it resembles similar questions about iterations of other operators (if it’s possible that p is necessary, is p thereby necessary?). In any case, we do have some plausible-sounding necessary truths in the axioms mentioned above, since love and fear are natural opposites. We approach what we love and we avoid what we fear, as the psychologists say; and these two kinds of behavior are not compatible. So, the logic of emotion might have some interesting complexity aside from the truisms already cited in the previous section. There is enough logical structure in the concepts to permit a logical treatment of emotions. As I noted, this is not terribly surprising in the light of the standard examples of non-classical logics; it isn’t hard to get a logic off the ground, since concepts tend to cluster in logically connected groups and iteration is a common feature of language (“not”, “know”, “necessary”, etc.). If you can love to love and fear to fear, then you have the foundations for a logic of these emotions. Emotions are naturally logical in their possible combinations: some imply others and some imply the absence of others.[1]

[1] Emotions are rather like colors in this regard: colors too exhibit logical relations of implication and exclusion. This logic cannot be reduced to a classical logic of truth functions (as Wittgenstein discovered). I can imagine the same hostility to emotional logic as greeted modal logic in its early days. I look forward to a formal semantics of emotional logic.


Emotion and Logic

Are they compatible? Is it possible to be an emotional being and a logical being? More exactly, is it possible to be perfectly logical but also emotional? Is emotion always the enemy of logic? In Star Trek we find a defense of the thesis that emotion and logic are incompatible, or an assumption of that thesis. Mr. Spock is the epitome of the logical, but he also lacks all emotion—the latter being a necessary condition of the former, allegedly. Meanwhile, alongside this exemplary Vulcan, we have the humans Captain James T. Kirk and Dr. McCoy, both emotionally charged and not always logical—Kirk being capable of “intuitive” inspirations, McCoy often lapsing into well-meaning blather. The idea is that Spock represents pure logic at the cost of a lack of affect: you can’t be entirely logical in your thinking if you are prone to emotional outbursts (or inbursts). The thought behind this idea is that emotion is like a wind that blows you off the logical course: logic points you in a certain direction, but emotion deflects you from this course, with no good result. Thus, an incompatibilism obtains in the human psyche: between our logical nature and our emotional nature. You can’t have both (like free will and determinism, supposedly). The logical person is thought emotionally cold, even dead; the emotional person is deemed irrational and foolish. We are at psychological war with ourselves, pulled in opposing directions. I think this conception is completely mistaken: there is no incompatibility, or no deep incompatibility. Granted, emotion can be the enemy of logic—it can play a deflecting role—but it is false that it must be. It is perfectly possible to be a person of emotion, even strong deep emotion, and still be flawlessly logical. There could be highly affective Vulcans (but none on Star Trek alas).

Consider love (hate would make the same point): it is not inherently irrational or anti-logical. It might be rationally warranted by its object and it can lead to desirable consequences. Many a marriage is prompted by rational love and leads to a happy relationship. Doing what makes you (and others) happy because of an emotion of love is not an illogical way of proceeding. It is true that love (more likely hate) can make people act irrationally (“illogically”), but that is hardly a necessary truth. Similarly, the emotion of fear can lead to irrational actions, but fear is not itself irrational—indeed, it is a rational way to build an organism that avoids danger. Fear is like pain; and pain is not irrational. Aesthetic emotions are not somehow illogical, though they may be non-logical, i.e., not the result of logical reasoning. If you find a sunset beautiful, you are not thereby irrational; there is nothing wrong with your reasoning. If you thrill to the taste of blackberries, you are not being illogical (is everything tasteless to Spock?). Needs and desires are not illogical just because they are not the result of logical reasoning—any more than bodily traits are. This is obvious, though contrary to the incompatibilist thesis. To be sure, emotions may interfere with logical reasoning, but it is not true that they necessarily do—they are not inherently contra-logical. Love and logic can happily coexist (as can hate and logic). No one aboard the Enterprise ever makes this point to Spock—why not?

But a stronger thesis can also be defended: logic requires emotion (as free will, properly understood, requires determinism). You can’t be a logical being without also being an emotional being. Spock is thus either logically impossible or covertly emotional (as he is often suspected of being by his shipmates). Perhaps Spock is really just differently emotional—alienly emotional. Emotional make-ups don’t come in a single form; it is anthropocentric to suppose otherwise. Why do I say this? Couldn’t there be a completely unemotional logic machine? But a machine is not a man: the question is whether a conscious living being could be both logical and devoid of emotion. Could someone be capable of logical reasoning but not capable of any kind of emotion? I doubt it, because reasoning is a goal-directed mental action and hence requires motivation—a point, an end. It needs a reason. Nor is it hard to see what kind of reason might motivate logical thought: the desire to arrive at the truth, the pleasure of reasoning skillfully, a love of logic itself. We are not emotionally dead when we reason logically, any more than when we do mathematics or science. Spock clearly loves science, in his quiet undemonstrative way. He enjoys solving problems (he plays a lot of vertical chess). There are intellectual emotions—emotions of reason. These emotions may not rage and torment, but that doesn’t mean they don’t exist—they are simply another type of emotion. Without them logical reasoning would not be reasoning as we know it; indeed, it is unclear whether it would be any kind of reasoning. Even a disembodied mind employed only about logical problems would feel something, even if it is just a mild feeling of interest or occasional frustration. The idea of the soulless scientist or logician, completely devoid of emotion of any kind, is a myth—though his or her emotion may not much resemble those of a warrior or street vendor. 
I wouldn’t be surprised if Spock has quite a rich emotional life, though one markedly different from his human comrades. He is on the emotional spectrum, albeit at the far end. It is possible to be passionately logical, as Spock apparently is (compare Bertrand Russell).

The case may be compared with morality. Suppose you are a convinced moral cognitivist. That doesn’t mean you believe that moral reasoning is a dry desiccated business; emotion may suffuse or surround the mental acts involved. Kant spoke of his emotion of awe in contemplation of the moral law (and the starry heavens), and that emotion no doubt clung to his moral thinking, despite its rationalist cast. The same kind of feeling may attach itself to logical thinking—just think of Frege’s reverential attitude to logic or Wittgenstein’s logical mysticism. Morality and logic both stimulate emotions of reverence and exhilaration, respect and elation. Both involve the normative and non-natural, the sense of transcendence (the “sublime”). There is also the simple emotion of being pleased with a piece of moral or logical reasoning. These are not affect-free zones despite their “cognitive” status. You don’t have to be a card-carrying emotivist to believe in moral and logical emotions. Clearly, too, prudential reasoning has its affective dimension: you can’t be rationally prudent and not experience emotions proper to the subject at hand (disappointment, satisfaction). We can’t detach affect from intellect; the reasoning faculty is not cut off from the feeling faculty. Nor is this to its detriment, but rather to its point and possibility: there is no thinking without feeling. These are not separate non-interacting compartments of the mind, but mutually supporting, organically joined. It is perhaps conceivable that an organism might have insulated emotional and logical faculties, rather like the insulation (“encapsulation” in Fodor’s word) of perceptual from cognitive faculties, but it is not easy to imagine and is not the normal human (or animal) condition. We are logical-emotional beings not radically divided beings.

What are the sources of illogicality if not the emotions? We have accepted that emotions can sometimes derail logic, especially the emotions of hate, anger, jealousy, and envy. You would not think watching Star Trek that anything else could be the cause of logical error, but it isn’t hard to come up with other culprits. Being logical isn’t easy; it takes effort and training. Prejudice and laziness are obvious factors, along with inattention and simple distraction. It takes time to be logical, and peace and quiet, and these are sometimes in short supply. Some people are better at it than others (I mention no names). There is a certain hatred of logic, too, because of its power to fly in the face of wish fulfilment. So, there are plenty of reasons for the breakdown of the logical faculties—we don’t need to make a blanket appeal to the mere presence of emotion. Logic has many enemies in the human mind (and outside it). I suspect the focus on emotion stems from the puritan tradition—love and lust as the chief sources of logical malfunction. And it is true that Captain Kirk’s main area of weakness judgment-wise is his penchant for the ladies (often alien ladies): he is a notably romantic starship captain. Spock is often puzzled by his commander’s romantic follies and deplores their effect on his powers of logical reasoning. Still, there is more to emotion than romantic love, and many emotions are quite compatible with a firm adherence to logical principles. Spock’s emotional life revolves around his science and his chess; Kirk’s gravitates towards the winsome charms of alien beauties: both manage to be logical most of the time (even Spock has his moments of weakness, generally because of nefarious chemical infusions). What we may call “the Star Trek fallacy” is in both cases wrong: emotion and logic are not incompatible. [1]

[1] The original (and best) Star Trek series was produced in the late 1960s when there was a lot of discussion about how best to live (hippies etc.). The Beatles released “All You Need Is Love”, an anthem for the period. We can only imagine Spock’s sharply raised Vulcan eyebrow. His motto would be “All You Need Is Logic”. Of course, the correct motto is “Love Logic and Logically Love”. You may cavil at the second conjunct, but all it means is that love should be guided by reality not by popular opinion or arbitrary whim. And not all love is of persons; ideas can be objects of love too.


My Secret Garden

I live in South Miami, just outside Coral Gables. I have an extensive garden, with much tropical vegetation. My study has a door onto this garden; I go out there a fair amount. It has a jungle feel. In this garden I have a full-size competition-level trampoline shaded by trees. I also have an archery range, including a knife-throwing set-up. I have three slices of a large tree trunk that used to be in my garden but had to be cut down; they are my targets. As reported earlier, I throw knives (really spikes) lefthanded, though I am righthanded, specializing in the no-spin throw from varying distances. This activity interacts with my tennis (as well as drumming and guitar). But I just added another activity: shot-put. I now have a designated shot-put area where I throw (toss?) the steel ball lefthanded. My garden isn’t big enough to accommodate my discus throwing (lefthanded), so I need to take that activity to a local park, where I also throw lefthanded frisbee. I am now a “lefthanded man”, a new type of human: a righthanded lefthander. I am hoping and expecting that the shot-put discipline will help with my tennis; indeed, I think it already has. Is it for everybody? I won’t go that far (same for discus), but for me it is a valuable addition. Now my garden feels like a place I like to hang out in as my own personal sports arena. I don’t garden much but I do enjoy playing in my garden.[1]

[1] Let me add that I like to play guitar while I watch television. They are showing old episodes of the original Star Trek six nights a week, which hold up amazingly well, so I play guitar while watching Kirk and Spock. But I am now watching a lot of Olympic coverage and playing guitar then too; this means that I am playing guitar for many hours at a time. I was struck by how much this lengthy playing has improved my guitar technique. So, I suggest that guitar teachers recommend long hours of guitar playing in front of the television. It takes the boredom out of practice and it justifies watching too much television. It’s also quite pleasant.


Four Ways of Studying Language

What is the linguist or philosopher of language studying? It will be useful to distinguish four different (but overlapping) areas of study: ontological, epistemological, behavioral, and phenomenological. By “ontological” I mean the study of language as a formal object: its nature, structure, and inner workings. What is it composed of? How is it structured? How does it work? Here we will find discussions of the nature of propositions, grammar, logical form, sense and reference, objects and facts, Platonism and anti-Platonism. We will find proposals about what meaning is and what words refer to (if anything). The subject is language itself not our psychological relation to it. The case is like the ontological study of mathematics—what it is in itself not how it relates to the human mind. By “epistemological” I mean (predictably) our knowledge of language—what is variously called linguistic competence, mastery, understanding, grasp. This is analogous to our knowledge of mathematics. This kind of study is clearly dependent on the first kind, since how something is known depends on what it is. What kind of knowledge is this and how is it possible? Third, we have behavioral studies—what is called linguistic performance, use, action, utterance. This will involve how linguistic knowledge is related to actual speech—the cognition-action nexus. Fourth, we have the phenomenological investigation of language—its expression in consciousness. This takes in all the ways in which language finds its way into our subjective experience: the felt character of language, its mode of seeming. Thus, we have what language constitutively is, our mode of knowing it, how it manifests itself in behavior, and what it is like to have it. These should not be confused or run together, though there will no doubt be plenty of overlap. The totality of these ways forms the subject matter of the study of language. 
We might also add the social psychology of language, or the politics of language, or its relation to human emotion; but these may be subsumed under the categories already listed, being concerned with the psychology of language generally. Epistemology is part of psychology, broadly construed—one department of the human mind, the knowing part. In practice, such non-epistemological psychological studies are seldom encountered in theoretical linguistics treatises.

I will make some remarks about the nature of linguistic knowledge, which seems to me in need of fresh insights. There is the much-debated question of whether linguistic knowledge is similar to other kinds of knowledge, in particular, knowledge-how and knowledge-that. When a speaker knows what a sentence means, does he or she have a true justified belief about that sentence, or is it more akin to an ability?  Is unconscious knowledge of the syntactic rules of a language a type of true justified belief? Is knowledge of language simply the application of a general faculty for knowledge to a particular subject matter, viz. language, or is it a special type of knowledge? Do we know language in the same sense in which we know geography and history? Then there is the question of how innate our knowledge of language is—completely, partially, or not at all. The question I want to focus on is whether linguistic knowledge is a priori, and if so how. The other questions have been widely debated, if not resolved, but this question has received little or no attention. It also strikes me as hard. I incline to the view that knowledge of language is special and sui generis; it is not just one application of a general knowledge faculty. That idea has already been cast into doubt by the existence of a priori knowledge, but linguistic knowledge puts a new wrinkle on it. Suppose I know what “Tom is bald” means: I know that the sentence in question means that Tom is bald. Is this knowledge a priori or a posteriori? My knowledge that Tom is bald is clearly a posteriori (I saw his bald head yesterday), but what about my knowledge of the meaning of the sentence? Well, how do I know its meaning? You might say I know it by learning certain empirical facts about the sounds or marks composing the sentence—facts of a conventional nature. That is no doubt true and to that extent my knowledge is a posteriori: but is it all there is to my knowledge of the sentence’s meaning? 
No, because I have to grasp how the sentence is put together, its grammatical structure. This involves knowledge of how nouns and verbs combine to produce sentences, i.e., form predicative propositions. So, do I know a posteriori how nouns and verbs work together to produce sentences? Have I seen this happen a number of times in the past and thereby infer that it will go on happening in the future—is it a case of empirical inductive knowledge? Apparently not: I know it without needing to undertake empirical investigations of this type. I know it a priori, just by knowing what nouns and verbs are and how their combination produces whole sentences; arguably, this depends on my knowing what reference and predication are and how they generate true propositions. That is, my knowledge of the meaning of the sentence “Tom is bald” incorporates an a priori component in addition to an a posteriori component. We could put this by saying that our knowledge of grammar is a priori, i.e., the combinatorial principles of sentence formation are known a priori.[1] When I know (empirically) the meaning of individual words, I know without further empirical input the meaning of phrases and sentences composed of those words. I know a priori that “Tom is bald” is grammatically correct and “Bald Tom is” is not, given knowledge of the conventional meaning of those words. Thus, linguistic knowledge has an a priori component, like geometrical and arithmetical knowledge. It isn’t entirely a posteriori. It may also be that this kind of knowledge is innate, as a matter of fact, but it is in any case an example of a priori knowledge. Is it knowledge of analytic truths? I won’t pronounce on this question, but that may also be so, pending an account of the scope of analytic truth and its epistemology. If we know what “noun” means and what “verb” means, then we know that noun-verb conjunctions produce sentences; there is an analytic necessity at work here.
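The division of labor described here—word meanings learned empirically, grammaticality derived from a combinatorial rule—can be illustrated with a toy sketch (mine, purely hypothetical, and far simpler than any real grammar). The lexicon stands in for the a posteriori component; the single rewrite rule stands in for the a priori component.

```python
# Toy illustration of the a priori / a posteriori split in linguistic
# knowledge. The lexicon (learned a posteriori) assigns categories;
# the rule S -> N V A (known a priori, on the essay's view) then settles
# grammaticality without further empirical input. All names hypothetical.

LEXICON = {"Tom": "N", "is": "V", "bald": "A"}  # conventional, empirical

def is_sentence(words):
    """Check a word sequence against the combinatorial rule S -> N V A."""
    categories = [LEXICON.get(w) for w in words]
    return categories == ["N", "V", "A"]

print(is_sentence(["Tom", "is", "bald"]))  # → True
print(is_sentence(["bald", "Tom", "is"]))  # → False
```

The point of the sketch is that once the lexicon is fixed, the verdicts follow deductively: no new observation of speakers is consulted in classifying "Tom is bald" as well-formed and "Bald Tom is" as not.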

We can now say something about the special character of linguistic knowledge (knowledge of meaning, in particular): it is a type of mixed knowledge. It has a base component that ranks as a priori, and it has a surface component that is a posteriori. When I speak and know that my words are meaningful, and know also what they mean, my knowledge has two components: an empirical component corresponding to certain contingent conventions of use, and an a priori component corresponding to the underlying grammar of language (a linguistic universal). As it were, I know the logic of my language a priori and I know its manifestation in speech a posteriori; my knowledge is a hybrid of disparate elements. Moreover, the a priori component is peculiar to language, not shared by other domains of a priori knowledge (geometry, arithmetic, ethics). It is knowledge of (universal) grammar. So, there is something unique about our knowledge of language; it has a sui generis inner complexity. In speaking and understanding, the mind is combining its a priori faculty and its a posteriori faculty, to produce a special species of knowledge. Linguistic knowledge, what is called linguistic competence, is a synthesis of these two types of knowledge. The work of Frege and the early Wittgenstein illustrates nicely how the element of a priori knowledge enters the picture (they didn’t think they were dealing in contingent empirical facts but rather in a priori necessary truths). Knowing a language isn’t just knowing empirical truths about that particular language.

Circling back to the four ways, we can draw some conclusions about the other three. First, the ontology of language: it must have an a priori structure, indeed a necessary structure, given that this structure is known a priori (just as Frege and early Wittgenstein maintained). Second, linguistic performance must stem from a dual competence: speech and understanding must be (partially) controlled by a priori and a posteriori knowledge working together. There is a kind of double causation at work (no doubt reflected in the relevant brain mechanisms). This is a far cry from old-fashioned stimulus-response psychology. Third, the phenomenology of linguistic activity will surely reflect its epistemic underpinnings; in particular, the phenomenology of a priori cognition will show up in consciousness. What it is like to speak and understand will bear the marks of this type of knowledge, as well as the more humdrum type of knowledge deployed in knowing sound-meaning associations; we will be conscious of the deep a priori knowledge involved in mastering a natural language grammar. In this respect, linguistic phenomenology belongs in the same camp as mathematical phenomenology (as well as ethical phenomenology, if we adopt a rationalist view of ethical knowledge). Perhaps this helps to explain the rather enigmatic and tantalizing nature of linguistic knowledge: it partakes of the puzzle of the a priori, going back to Plato and shaping the rationalist tradition in philosophy. Knowledge of language is not the most pellucid of things, which is why not much is said about it beyond the superficial.[2]

[1] I mean here the basic principles of universal grammar, not the idiosyncratic rules of particular languages, e.g., adjective placement or whether it is “good grammar” to split one’s infinitives. These latter are a posteriori.

[2] Knowledge of mathematics is notoriously problematic, as is knowledge of ethics. Knowledge of language, likewise, incorporates a problematic a priori component, and so inherits that source of puzzlement. How exactly do we come to know that nouns and verbs combine to form whole sentences? It isn’t like knowing that fish and chips go nicely together. It certainly doesn’t seem to be based on sensory experience, or to have a causal basis. It is, in some sense, “intuitive”. Rationalist epistemology may be true (I think it is) but it isn’t free of mystery.