A (Really) Brief History of Knowledge

This is a big subject—a long story—but I will keep it short, brevity being the soul of wisdom. We all know those books about the history of this or that area of human knowledge: physics, astronomy, mathematics, psychology (not so much biology). They are quite engaging, partly because they show the progress of knowledge—obstacles overcome, discoveries made. But they only cover the most recent chapters of the whole history of knowledge—human recorded history. Before that, there stretches a vast history of knowledge, human and animal. Knowledge has evolved over eons, from the primitive to the sophisticated. It would be nice to have a story of the origins and phases of knowledge, analogous to the evolutionary history of other animal traits: when it first appeared and to whom, how it evolved over time, what the mechanisms were, what its phenotypes are. It would be good to have an evolutionary epistemic science. This would be like cognitive science—a mixture of psychology, biology, neuroscience, philosophy, and the various branches of knowledge. It need not focus on human knowledge but could take in the knowledge possessed by other species; there could be an epistemic science of the squirrel, for example. One of the tasks of this nascent science would be the ordering of the various types of knowledge in time—what preceded what. In particular, what was the nature of the very first form of knowledge—the most primitive type of knowledge. For that is likely to shape all later elaborations. We will approach these questions in a Darwinian spirit, regarding animal knowledge as a biological adaptation descended from earlier adaptations. As species and traits of species evolve from earlier species and traits, so knowledge evolves from earlier knowledge, forming a more or less smooth progression (no saltation). Yet we must respect differences—the classic problem of all evolutionary science. 
We can’t suppose that all knowledge was created simultaneously, or that each type of knowledge arose independently. And we must be prepared to accept that the origins of later knowledge lie in humble beginnings quite far removed from their eventual forms (like bacteria and butterflies). The following question therefore assumes fundamental importance: what was the first type of knowledge to exist on planet Earth?

I believe that pain was the first form of consciousness to exist.[1] I won’t repeat my reasons for saying this; I take it that it is prima facie plausible, given the function of pain, namely to warn of damage and danger. Pain is a marvelous aid to survival (the “survival of the painiest”). Then it is a short step to the thesis that the most primitive form of knowledge involves pain, either intrinsically or as a consequence. We can either suppose that pain itself is a type of knowledge (of harm to the body or impending harm) or that the organism will necessarily know it is in pain when it is (how could it not know?). Actually, I think the first claim is quite compelling: pain is a way of knowing relevant facts about the body without looking or otherwise sensing them—to feel pain is to have this kind of primordial knowledge. To experience pain is to apprehend a bodily condition—and in a highly motivating way. In feeling pain your body knows it is in trouble. It is perceiving bodily harm. Somehow the organism then came to have an extra piece of knowledge, namely that it has the first piece, the sensation itself. It knows a mode of knowing. Pain is thus inherently epistemic—though not at this early stage in the way later knowledge came to exist. Call it proto-knowledge if you feel queasy about applying the modern concept. We can leave the niceties aside; the point is that the first knowledge was inextricably bound up with the sensation of pain, which itself no doubt evolved further refinements and types. Assuming this, we have an important clue to the history of knowledge as a biological phenomenon: knowledge in all its forms grew from pain knowledge; it has pain knowledge in its DNA, literally. Pain is the most basic way that organisms know the world—it is known as painful. Later, we may suppose, pleasure came on the scene, perhaps as a modification of pain, so that knowledge now had some pleasure mixed in with it; knowledge came to have a pain-pleasure axis. 
Both pain and pleasure are associated with knowledge, it having evolved from these primitive sensations. This is long ago, but the evolutionary past has a way of clinging on over time. Bacterial Adam and Eve knew pain and pleasure
(in that order), and we still sense the connection. Knowledge can hurt, but it can also produce pleasure.

Notice that the external world has not yet come into the picture. There is as yet no knowledge of material objects in space, so the first knowledge precedes this kind of knowledge (subjective knowledge precedes objective knowledge). But it is reasonable to suppose that the next big stage in the onward march of knowledge—the age of the dinosaurs, so to speak—involves knowledge of space, time, and material bodies (the “Stone Age”). I mean practical knowledge, not advanced theoretical knowledge—knowing-how, as we now describe it. The organism knows how to get about without banging into things and making a mess. We could call this “substance knowledge”. How pain knowledge led to this type of knowledge we don’t know; what we do know is that it marked a major advance in the power of knowledge, because it introduced the subject-object split. Now knowledge has polarity built into it: here the state of knowing, there the thing known. In the pain phase such a division did not exist in re, but when external bodies came to be known, knowledge distinguished itself from the thing known. That is, perception of the external world involves a subject-object split. Distant things are seen and heard. This division was already present in plants as they orient themselves to external objects—the sun, water, the earth. But they don’t know these things, though it is as if they do; it took pain (and pleasure) to convert this kind of directedness into knowledge proper. If trees felt pain, they might well be perceiving subjects, given their tropisms and orienting behavior. So, let’s declare the age of sense perception the second great phase in the development of knowledge on planet Earth. The two types of knowledge will be connected, because sensed objects are sources of pain and pleasure: it’s good to know about external objects because they are the things that occasion pain or pleasure, and hence aid survival.

I will now speed up the narrative, as promised. Next on the scene we will have knowledge of motion (hence space and time), knowledge of other organisms and their behavior (hence their psychology), followed by knowledge of right and wrong, knowledge of beauty, scientific knowledge of various kinds, social and political knowledge, and philosophical knowledge. Eventually we will have the technology of knowledge: books, libraries, education, computers, artificial intelligence. All this grows from a tiny seed long ago swimming in a vast ocean: the sensation of pain. From “Ouch!” to “Eureka!”. We go to universities because our distant ancestors felt pricks and pangs: one sort of knowledge led to the other after a brief period of time (by cosmic standards). A super-scientist might have seen it coming (“It won’t be long before they have advanced degrees and diplomas”). The point I want to stress is that this is a natural evolutionary process, governed by the usual laws of evolution–cumulative, progressive, opportunistic, gradual. As species evolve from other species by small alterations, so it is with the evolution of knowledge; there is no simultaneous independent creation of all the species of knowledge. Knowledge-how, acquaintance knowledge, propositional knowledge, the a priori and the a posteriori, knowledge of fact and knowledge of value, science and common sense—all this stems from the same distant root (though no doubt supplemented). It was pain that got the ball rolling, and maybe nothing else would have (pain really marks a watershed in the evolution of life on Earth). Knowledge of language came very late in the game and is not to be regarded as fundamental. Epistemology is much broader than language. Knowledge has all the variety and complexity we expect from life forms with a long evolutionary history. Quite a bit of the anatomy of advanced organisms is devoted to epistemic aims–the eyes, the ears, the nose, the sense of touch, memory, thought, and so on.
Knowledge is not a negligible adaptation. Yet it must have comparatively simple origins. It didn’t arise when a human woke up one bright morning and felt a love of wisdom in his bosom. It arose from primitive swampy creatures trying to survive another day.

I will make one further point: knowledge, like life in general, is a struggle with obstacles. Survival isn’t easy, and nor is knowledge. In both there are obstacles to be overcome, resistance and recalcitrance to be fought, battles to win or lose. Knowledge is hard: you know it don’t come easy. It’s a difficult task. Those books about the history of science draw this lesson repeatedly—it wasn’t easy to figure out the structure of the solar system or the laws of genetics. But that is part of the very nature of knowledge as an evolved capacity—the struggle to be informed. The organism needs to know if it is in danger, so pain came along; we would like to know whether the Earth is the center of the universe, so astronomy was invented. Knowing is the overcoming of obstacles, like the rest of evolved life. Knowledge was born in pain and struggle. It is not for the fainthearted. This is epistemology naturalized.[2]

[1] See my “Consciousness and Evolution”, “The Cruel Gene”, “Pain and Unintelligent Design”, and “Evolution of Pain”.

[2] Quine talked about epistemology naturalized, eschewing (his word) traditional epistemology. I am not eschewing anything; I am adding not subtracting. I want to acknowledge the biological roots of knowledge, finding knowledge in nature (it’s not about schools and examinations). Books are recent accessories. The very first knowledge is an organism feeling pain for the first time: it hurts but at least it gains valuable information. Eventually, organisms grow to love knowledge—we become scholars of reality. The pain is a distant memory. Still, if you read the book of knowledge (chapter 3 of the Book of Life), you find a footnote to primordial pain.

Knowledge and Time

I shall make some remarks about a topic neglected by epistemologists—the relationship between knowledge and time, particularly future time. The relationship is not simple or easily grasped; there is a reason for the neglect. I will try to keep it as uncontroversial as possible; this is to be preliminary groundwork. Truisms, not breakthroughs. The big question is this: Can we know, perceive, refer to, or have justified beliefs about the past, the present, and the future? I think it would be generally agreed that we can and that we have these relations to the past and the present—but to the future not so much. There might be some argument about whether we perceive the past, or even whether we perceive the present: with what sense do I perceive what happened yesterday, and what about the time lag between the event perceived and the perception of it? Don’t we infer the past from our present memories of it, and don’t we really perceive an event in the past given the time it takes for a perception to be formed? But in the case of the future there is really no doubt that we don’t perceive it; the question has been whether we can still know it. For perception requires causation and causation never runs from the future to the past: what happens tomorrow cannot cause what happens today, in the mind or elsewhere. This familiar point is surely correct and scarcely disputable, but it needs to be fully absorbed: we cannot be acquainted with the future; we cannot directly apprehend it; we cannot be consciously aware of it. We cannot know it in the perceptual sense; we can only know it, if at all, by inference. It is necessarily imperceptible. In the case of the past, we can know it directly (by memory of past perceptual encounters) or by inference from these, but we can never know the future in this direct way—because we can never perceive it. You can’t now see what will happen tomorrow, no matter how much you strain your eyes—light doesn’t travel into them from the future.
So, this basic source of knowledge is completely unavailable to us, in principle and forever. Given that perceptual knowledge is the basis of all knowledge, the question must then arise as to whether we can know anything about the future. Isn’t it just too cut off to be known? Aren’t we limited to mere guesswork, chance truth, accidental match? And these are not instances of knowledge. Shouldn’t we be absolute agnostics about the future? What you can never see you can never know. The case is even worse than other minds, because at least in that case we have causal relations between object and subject: the other’s mental states cause my mental states via his behavior, which I see with my own eyes. But the states of the future can never cause mental states in me, or any other present states. The past is not an effect of the future, as the future is an effect of the past. Thus, I cannot know the future by perceiving it, or any part or sliver of it, however indirectly or remotely. I am completely shut off from it. We are separated by an epistemological wall, based on a metaphysical necessity.

But it doesn’t follow that we can’t have true justified beliefs about the future: so, can we? Let me first note that if we can it won’t follow that we can have knowledge of the future (knowing-that). It would follow only if knowledge is, or can be, true justified belief; but the future provides a clear counterexample to that analysis. Suppose I have a true justified belief that war will break out tomorrow: do I thereby know it will? Intuitively, no: I don’t know this fact, I only believe it.[1] I think it will and for good reason, but I don’t really know it—not like I know that there’s a cat in front of me. I am not acquainted with that future war. We would be perfectly within our rights to deny that anything about the future is ever known, even if we allow that we can have reasonable true beliefs about the future. And indeed, I venture to suggest that this is the common opinion: the future is not knowable—though it is conjecturable. You can have beliefs about it, but these don’t amount to cases of knowledge. So, the JTB analysis of knowledge is insufficient (this is a kind of Gettier case, in effect). But we can still ask whether such beliefs are ever justified, discounting the knowledge question. Now this is extremely well-trodden territory, which I don’t intend to re-tread. I will make two points about it. First, this is not the “problem of induction”: that problem is not inherently about the future; it can apply to both the past and the present. Were all past swans white and are all present swans white? The problem of induction is about generalizing from a sample to a whole population, not about inferring the future from the past. The second point is that induction is the only way we can know about the future, since we are perceptually closed to the future. We can perceive the past (perhaps we always do) and we can perceive the present (pace the time-lag argument), but we cannot as a matter of necessity perceive the future. 
This puts belief about the future in a much worse position than belief about the past and present, since we don’t know even what it would be to perceive the future. What would it even feel like? If an alien had such a sense, would we be able to grasp its phenomenology? On top of that we have the problem of induction itself, which strikes even regular people as problematic. True, we reflexively form expectations for the future (as Hume famously observed), but this has nothing to do with reason; we would have such reflexes whether the future resembles the past or not. Induction is notoriously difficult to justify. At best future predictions are perilous and indemonstrable. You don’t have to be a skeptic to feel that we are deeply ignorant of the future (you have never been there); indeed, this is hardly worth calling skepticism, since ordinary folk are already queasy with talk of justification and knowledge regarding the future (this is not so for the past and present). In sum: we have neither perceptual knowledge of the future nor solid justification of beliefs about the future—just instinct, conditioning, and blind faith. This is why there is no history of the future—no narrative of what will happen. One might be forgiven for supposing that the future is not a fit object of human knowledge; we just talk as if it is for pragmatic reasons. Strictly, we shouldn’t even have beliefs about the future, since belief presupposes justification; we should only have attitudes of surmise and speculation (good Popperians about what will be). At any rate, that is a position with an intelligible rationale.

This problem has always haunted science, because science purports to be predictive—and yet empirically warranted. Empiricism bases knowledge on experience, but we don’t experience the future; it ought then to be out of epistemic bounds. I have no “impression” of the future events I predict, so how can I know about them? How then can an adequate philosophy of science be empirical? Popper took a radical line; others have suggested that science doesn’t make factual predictions but is only a useful tool for getting along in the world. But it is always future-oriented and hence open to criticism from a consistent empiricist. History has no such problem—or logic and mathematics and philosophy. The epistemology of science has therefore always been under a cloud, as exceeding what can be humanly known. Hume was well aware of this (Popper made a big deal of it). Proust wrote a long book called Remembrance of Things Past but not Expectations of Things Future, because there is so little to say under the latter head; there is no madeleine of the future. This is our human epistemic predicament and the source of much of our anxiety (it isn’t only death). We can describe the future and fear it, but we can’t know it—not really. We can know (sic) the future only by using the past in conjunction with induction, but induction is eminently questionable, so we are in perpetual doubt about the future. The future itself is terra incognita. At best we go on external signs of it.[2]

[1] See my “Perceptual Knowledge” and “Non-Perceptual Knowledge”.

[2] The idea of the crystal ball is instructive: the only way the future can be genuinely known is by being seen in the shape of a transparent sphere—a portal to the future. Time travel is similar (and equally fictional): we can know the future only by going there and clapping our eyes on it—up close, directly, under our nose. Pure fantasy, of course, but it feeds off our epistemic anxiety concerning the future: the future is the unknown in its purest form, outdistancing even the most remote galaxy or secretive mind. We can “know” it only by comparing it with the past—its very opposite. How could the past ever tell us about the future? Time has no patience with our intellectual limitations. The future is the twilight zone but without any light. That is the terrible truth.

An Answer to the Skeptic

Skepticism gains traction from the true justified belief theory of knowledge, because it can be argued that our beliefs are seldom if ever justified. But that is just one theory, not a datum. What if we adopt another type of theory? I observe, to begin with, that other types of knowledge than knowing-that are not subject to skeptical argument: knowing-how and knowledge by acquaintance. You can’t use a brain-in-a-vat scenario to undermine the claim that we have knowledge-how and knowledge by acquaintance, because these are far removed from what is called propositional knowledge (Russell’s knowledge-by-description). I have knowledge-how if I have a certain ability, whether I can justify the claim that I have this ability or not, even while I am a brain in a vat and have never done the thing in question, e.g., throw a ball. I can also know by acquaintance what red is without knowing whether there is an external world; it just depends on what I have experienced not what beliefs I can justify. These are not evidence-based types of knowledge, so the quality of the evidence cannot be impugned. So, the concept of knowledge is not inherently susceptible to skeptical challenge. But what about knowledge-that?

Suppose we go for a perception-based theory of knowledge not a justified true belief theory: that is, we extend acquaintance to knowledge of facts.[1] For example, I might take myself to know that I am lying in bed, and suppose I am: do I really know that I am? That depends on whether I perceive that I am lying in bed, which doesn’t follow from taking myself to be and this actually being the case. Suppose I do perceive this (the fact of my being in bed causes me to have the experience); then we say I am acquainted with this fact—whether I can justify the belief or not. My knowledge depends only on the facts not on my ability to have true justified beliefs about them. Even if I can’t rule out the hypothesis that I am a brain in a vat, I am still causally connected to the fact in question, if indeed it is a fact. Thus, I can know this fact independently of my ability to justify my beliefs about it: for my knowledge is not based on any such justification; it just arises from my being in a perceptual relation to the fact. Knowing facts by acquaintance (perceiving them) isn’t susceptible to the standard skeptical argument. But if that’s what knowledge is, then we have defeated the skeptic about knowledge: the knowledge exists—whether we can know this or not (we are not discussing second-order knowledge). Perceptual knowledge does not depend on possessing justified beliefs. The skeptic has no argument against the possibility of first-order knowledge of facts based on direct acquaintance.

This point may be conceded, but what about all the putative knowledge that is not based on acquaintance but on inference? What about belief-based knowledge based on evidence and justification? Surely the skeptic can get his fangs into that! Here we might agree but insist that no sensible person has ever supposed otherwise: of course, we can’t know what goes beyond our direct apprehension of fact; we can only surmise. It is a misuse of the concept of knowledge to suppose otherwise.[2] Perhaps we can stretch a point and agree that we might call such belief “knowledge” in a relaxed frame of mind, but it was never really knowledge, as distinct from reasonable speculation, just loose talk for pragmatic purposes. I don’t know that atoms exist, though I might have reason to believe that they do; I only truly know what I myself directly perceive. If so, it was never part of (sensible) common sense to apply the concept of knowledge beyond its proper domain, so the skeptic is tilting at windmills and parading truisms. I know hugely many facts about the external world just by perceiving it, even though there are many things I reasonably believe that don’t count as knowledge. Isn’t that what we normally suppose?

Here the skeptic may retract his horns: okay, he says, but we still don’t have adequate justification for the beliefs we hold, despite the fact that we have a lot of knowledge by acquaintance. To that weakened form of skepticism we can reply as follows. We can simply agree with the skeptic but point out that he has said nothing to rule out knowledge in the areas proper to it; he is talking about something else entirely, i.e., justified belief. But second, we could recommend a comparative notion of justification combined with some absolute cases of it. Thus, I am fully justified in believing I am in pain and I am more justified in believing that eagles fly than that pigs fly. We need not claim that all justifications are created equal in order to rebut the justification skeptic—that would be absurd. Justification comes in degrees of cogency and not all measure up to the perfect case—whoever denied it? So, the justification skeptic has not raised a startling new epistemological challenge that undermines commonsense epistemology. We can all agree that our justifications are pretty shabby, judged objectively, but still maintain that the concept of justification is in good order with useful applications. What we are not going to agree on is that there is no such thing as knowledge of the external world; and the correct concept of knowledge does not invite any such conclusion. So, the skeptic has left commonsense epistemology more or less where it was, not counting those rash epistemological optimists who ought to know better. Sound minds have always known that human knowledge is a more limited affair than has sometimes been advertised; that is not skepticism but realism. Human knowledge: its scope and limits.

In case you think this kind of anti-skepticism is toothless, let me note its consequences for knowledge of other minds and the past. For we can now be said to know facts about other minds and the past: that is, such facts can act as the cause of our perceptual states. I know you are in pain because your pain has caused behavior that I perceive as pain-expressing—that is a fact. I can’t justify my belief that you have a mind to the skeptic’s satisfaction (and not unreasonably), but that doesn’t prevent me from being in a knowledge relation to the fact in question. Similarly, past facts cause current memories, so I know them by something akin to perception (it might even be perception)—even if I can’t justify my belief that there is a past. Thus, I can know facts about other minds and the past by something like perceptual acquaintance, though (arguably) I can’t justify my beliefs about these things. Knowing facts is one thing, justifying beliefs is another. To put it simply, if knowing is seeing, then I know a great many things; what beliefs I can justify is another matter, and may well be shakier than some people have supposed. Since no genuine knowledge is constituted by true justified belief—that is just an incorrect analysis—it is irrelevant to knowledge if adequate justifications for belief are unforthcoming.[3]

[1] See Michael Ayers, Knowing and Seeing (2019); also, my “Perceptual Knowledge”.

[2] See my “Non-Perceptual Knowledge”.

[3] A virtue of the account given here is that it concedes some territory to the skeptic—he isn’t just barking up the wrong tree—but it doesn’t concede his most radical claim, namely that nothing is known about the external world (or other minds and the past). Our alleged justifications don’t really warrant the kind of strong belief we are apt to derive from them, but that has nothing to do with our ability to have knowledge; and indeed, we have a lot of that. Knowledge proper was never about warranted belief. Human knowledge, like animal knowledge, is in good shape, though quite restricted; belief on the other hand cries out for justification and often falls short of it. That’s why some philosophers (e.g., Popper) dispense with it in serious contexts.

Augustine and Wittgenstein

Wittgenstein opens the Philosophical Investigations with a quotation from Saint Augustine (in Latin). He then comments: “These words, it seems to me, give us a particular picture of the essence of human language. It is this: the individual words in language name objects—sentences are combinations of such names. In this picture of language we find the roots of the following idea: Every word has a meaning. This meaning is correlated with the word. It is the object for which the word stands”. In other words, Augustine’s statement implies or presupposes that the theory of language proposed in the Tractatus is correct. This seems to me a tremendous overreach and not at all what Augustine had in mind; it is a flimsy interpretation at best. Augustine says, “When they (my elders) named some object, and accordingly moved towards something, I saw this and I grasped that the thing was called by the sound they uttered when they meant to point it out… Thus, as I heard words repeatedly used in their proper places in sentences, I gradually learnt to understand what objects they signified; and after I had trained my mouth to form these signs, I used them to express my own desires.” Wittgenstein comments: “Augustine does not speak of there being any difference between kinds of words. If you describe the learning of language in this way you are, I believe, thinking primarily of nouns like “table”, “chair”, “bread”, and of people’s names, and only secondarily of the names of certain actions and properties; and of the remaining kinds of words as something that will take care of itself”.

What are we to make of this? It seems wrong in every way. First, notice that Wittgenstein himself speaks of names of actions and properties, so he is not denying that it is proper to speak this way—that verbs and adjectives are names. He accepts that nouns, verbs, and adjectives are all names! Not names of objects, presumably, whatever objects are, but still names of something—actions and properties. This is what such names denote—events and attributes, to vary the terminology. So, was Augustine’s error to suppose that all names are names of objects? But surely Augustine was assuming no such thing; he was just considering names of concrete objects. He would have accepted instantly that not all words name or denote objects—some words name or denote entities of other kinds. He is simply describing how he learned the meanings of names of objects, not words generally. Wittgenstein is foisting onto him a manifestly false doctrine—the one he invented and defended in the Tractatus. Augustine knew perfectly well that language contains other kinds of word, as we all do. He didn’t even suppose that names of objects are primary in language; he simply didn’t talk about other words. Maybe they are primary, maybe not; in any case, he was not somehow ignorant of other words. Did he think that the rest of language would “take care of itself”? I see no evidence of that; he was restricting himself to the learning of names of objects and what he recollected of that. And did Wittgenstein himself regard other words as names, as verbs and adjectives are said to be? Not names of objects, to be sure, but names of (say) functions or relations or concepts. He also runs together several different semantic notions: having a meaning, meanings as correlated with words, meanings as the objects for which words stand. The first of these sounds like a truism, the second perfectly arguable, the third clearly false.
Augustine never asserts or presupposes the last, but he would no doubt subscribe to the first.

But are all words names? What is a name? The OED gives “a word or set of words by which someone or something is known, addressed, or referred to”. By this definition it is not too outlandish to describe all words as names: the word by which conjunction is known or referred to in Italian is “e”—that is the word for conjunction. It is the verbal sign or symbol that Italians use to refer to conjunction, i.e., to express or denote or signify conjunction. Similarly for verbs and adjectives. Not much is packed into the word “name”. It is like “denote”: “be a sign of; indicate—stand as a name or symbol for” (OED). No grotesque error is contained in these ordinary words; they are just vernacular expressions. Using them doesn’t commit us to regarding every word as standing for an object, though it stands for something—a person, an animal, a chemical substance, a number, a theory, an action, a truth function, etc. No heavy-duty ontology is thereby introduced. We are not led astray by these familiar locutions into crazy metaphysical views. Augustine didn’t somehow think that everything we talk about is just like a boulder or a dog or a city—any more than Wittgenstein thought that actions are like body parts when he spoke of them as names. We are not all proto-Tractarians as children and adults. Wittgenstein is using Augustine as an exemplar of a mistaken way of thinking about language that we are all prone to, and which he will go on to combat. But he is quite wrong about the import of Augustine’s words and about what we normally pre-theoretically believe. So, the book gets off on the wrong foot from the very beginning, and is argumentatively shoddy.


Space, Time, and Logic


Philosophy needs a metaphysical vision. Humbly (and pretentiously), I will provide one. As far as I know, it has no predecessor, though echoes of other theories will be apparent. Neither does it have a name: it might be called “Logical Spatio-Temporalism” (LST), or just “Spatialism” because space is central to it. It isn’t idealism or materialism, since mind and matter don’t figure in the foundations. The elements of it are space, time, and logic: this is what reality is fundamentally made of. Matter, mind, and mathematics are derivative from these three elements; I would even say these elements cause them. I take it that matter is implicit in space, not just annexed to it or superadded. This is a modern viewpoint, though it admits of various elaborations. Matter is a modification of space, a version of it. I favor the idea of a space-matter continuum; there is an underlying unity here (as there is said to be a unity called space-time). We could say that space became matter as we know it; it was already matter in some other form (no particles of the familiar types). Space is matter before it has been cooked, so to speak. Matter, as it exists now, evolved from space. To space we must add time—the medium of change. Without time matter is static and unchanging, unrecognizable as matter. Time renders space dynamic. The modern physicist will insist that space and time are not separable, and if you are happy with that curious identification, you are welcome to adopt it; then the theory will have space-time as one element of it. Could time exist without space? Let’s not even go there; say what you like, you can still share my metaphysical vision. The third element is the novel one: logic. I don’t mean predicate calculus or modal logic or any other symbolic scheme; I mean necessary consequence of the kind we call logical (and let’s not go into that question either). Heuristically, think of Frege’s abstract realm of “Thoughts”—the subject matter of reasoning.
We adjoin this to space and time in order to capture the realm beloved of the rationalists—logic, mathematics, philosophy itself. The picture, then, is that mathematics, as it now exists in human civilization, is an outgrowth of logic (possibly combined with space in the field of geometry). Together, these three elements constitute the foundations.

The vision is that space, time, and logic precede mind, matter, and mathematics. These are relative latecomers in the construction of the universe as it presently exists. Everything that now exists owes its being to these three things; we might even say that the present universe is supervenient on space, time, and logic—determined by them. They cause everything. After God created them, he could slope off and take a nap; his work was done. Alternatively, the ontological structure of the universe has them at the bottom holding up the superstructure. They are bedrock. Mind and matter are mere side-effects, not foundational at all. Any universe like ours will have them as its infrastructure. But this is not to say that every possible universe is so structured: for it may be that space and time can vary across logical space. Maybe in some possible worlds space has a different geometry—of 27 dimensions or infinitely many. Maybe time loops and curls somewhere in logical space. Logic, however, remains the same, being metaphysically necessary. These universes will look nothing like ours and may have nothing corresponding to our mind and matter. I am not legislating across all possible universes. But in our universe our space and time call the shots; they determine what will be or not be. These are the metaphysical elements that fix the reality of this world. The fundamental layer consists of space, time, and logic (or space-time and logic). Logic is the province of the a priori; space and time are the progenitors of the a posteriori.

Notice, however, that the scheme is metaphysical not epistemological; not a trace of the epistemological shows up in this metaphysical system. It is strictly by the book. This is reality as it exists completely independently of all or any knowledge. Our knowledge results from this reality; it doesn’t bring this reality into being. Not space, not time, not logic (shades of Frege). How much we know of this reality remains to be determined; it could all be completely unknowable by us. What we mean by space (our conception of it) might be nothing like space as it exists objectively, and similarly for time and logic. My own bet is that the gap is surprisingly large, but that is a separate question. We can at least responsibly surmise that reality is structured in the tripartite way described. Reality has the architecture (to use a fashionable term) I have conjectured: here space, there time, yonder logic. It has three basic ingredients (like bread—flour, water, and yeast). If you want to bake a universe, these are the ingredients you need—assuming you want a universe like ours (if you cut out the yeast, you end up with something pretty flat).

It will be observed that this is a minimalist theory; it tries to cut everything down to the minimum number of elements necessary. This is desirable because we don’t want to populate the universe with too many basic features, or else we won’t know how it exists. Yet it does contain one or two elements beyond what some systems envisage (the ones we call monistic—idealism, materialism). We must strike the right balance between profligacy and miserliness. Occam’s razor must not cut too deep. It really does seem to me that the three elements I have identified are genuinely distinct and individually necessary; whether they are sufficient is the mooter point. Some may urge that we need an extra ingredient if we are going to get all that we need—call that ingredient “God”. The itch that prompts this urge is certainly real, but we do better to live with the itch than succumb to superstition and quack cures. In any case, LST eschews such expedients and adventures. It carries a light backpack.[1]

[1] The thing with metaphysical visions is that they are best presented pithily and pitilessly, so they can penetrate the carapace of prejudice that seeks to repel them. Then the reader can contemplate them at his or her leisure and not be swamped by detail and qualification.


Nabokov and Music


It is well known that Nabokov didn’t like music: “Music, I regret to say, affects me merely as an arbitrary succession of more or less irritating sounds.” But he doesn’t say why. The affliction didn’t run in his family and his son was an opera singer. Moreover, as has been remarked, his prose is quite musical, as if he loves the music of words (the opening paragraph of Lolita is a case in point). It would be interesting to know if he disliked some kinds of music more than other kinds. Did he dislike orchestral music more than vocal, the latter being more verbal? Would he dislike a good story told a cappella, say the story of Lolita? Did he dislike some pitches more than others? Would he have liked rap because less melodic (he liked poetry)? Did he dislike percussion as much as woodwind? Did he like to dance? Did he appreciate Buddy Rich? Most people dislike some music, so was he on a dislike spectrum? Did birdsong irritate him?

I have a theory and it might apply to more people than Nabokov. He didn’t like musical sounds unconnected to meaning. In prose, especially spoken prose, sounds are connected to meaning (he had quite a musical way of reading his own words aloud); but in music the sounds are loosely connected to meaning, if at all. He liked sound and meaning combined but not sounds alone. There had to be a meaning that the sounds served. This theory predicts that he wouldn’t have hated a sing-song way of reading prose (so long as it was good prose). Some poetry reading is like this. It also predicts that he would dislike the sounds of a language if he didn’t know the language. If he was so focused on meaning, did he dislike all meaningless sounds, like a waterfall or a cow mooing? There is no evidence of that. His affliction seems quite puzzling. He could have been indifferent to music as an art form without finding it “irritating”, as most of us are indifferent to many sounds. Did he like to watch a ballet performance? Was he exaggerating for effect? He was an aesthete who loved the music of language but disliked the art of music.


Bernard Williams and Me


One day, over twenty years ago, I ran into Bernard Williams in the corridor at NYU. He remarked: “The thing about you, Colin, is that you think you’ve either solved the problems of philosophy or they can’t be solved at all”. I paused for less than a second and replied, “I believe you’re right”. I assume his point was that this is a rather self-confident attitude, perhaps not entirely justified by the facts. But I think, on mature reflection, that it was perfectly reasonable, and not for “narcissistic” reasons. In the case of the mind-body problem, I had at that time been thinking about it for over thirty years and was well-versed in all the standard theories, as would be any half-way competent philosopher; and I had no idea what such a solution would look like. I also knew personally all the top philosophers of the time, and they had no idea either (though some may have thought they did). It was phenomenally unlikely that I, or any of them, would come up with the correct theory any time soon, and there were principled reasons for urging pessimism. It is perfectly rational to believe that no one living will come up with the solution. Is it rational to believe that someone not now living will come up with it? But what will they have that we don’t? On the other hand, there are philosophical insights that have been gained in recent times, and I have as much access to them as anyone else; so, what I believe about the relevant questions is likely to be correct, or at least eminently defensible. Bernard was wrong if he thought that I mistakenly believed myself to have come up with these insights—that is demonstrably false. But I share them, like numerous others. The essential point is that no one I know (including myself) is anywhere near solving the mind-body problem, so it is not absurd for me to hold that the problem is not within sight of a solution. It is not that there is anyone X such that X can be counted on to solve the problem.
Even the great Saul Kripke, who might be thought a plausible value of “X”, declared the problem “wide open and extremely confusing”. So, Bernard was quite right about my attitude, but it wasn’t all that silly. It isn’t as if Saul qualified his remark by saying, “But I hear Colin McGinn is working on the problem, so perhaps we will get a solution in a week or two”. That would be ridiculous. In this we see the true nature of philosophical problems. It isn’t like Watson and Crick and DNA or the Higgs boson or Darwin’s theory of evolution.


Games and Meaning


Imagine a philosopher, call him LW for short, with a lifelong interest in games. In his youth he writes a book called The Logical Structure of Games. As the name suggests, the book gives an analysis of the formal structure of games—a theory of the a priori essence of games which purports to provide necessary and sufficient conditions for being a game. In middle age he writes a second book, called The Activity of Playing Games, that largely repudiates the earlier one. This book focuses not so much on logical structure as on practical function—playing the game as a human activity. These books may be summarized as follows.

In the first book the author announces that human life is the totality of human activities not human possessions—deeds not things. He then tells us that some of these activities mirror other activities; they resemble them. These we call “games”. In fact, he claims, games picture non-games—stand for them, are isomorphic with them. They represent non-games by sharing their structure. A game is then defined as a kind of picture (ludic picture) of a non-game, a surrogate or substitute or model. For example, many games represent military actions: one team pits itself against another, striving to win in vigorous intelligent maneuvers. The team aims for victory and exerts force against the other in order to achieve this aim, as in football and rugby. People get hurt, but are seldom killed. This kind of rough and tumble is good preparation for actual military confrontation (“war games”). LW focuses on the structure of games and their formal likeness to the activities they represent: the multiplicity of elements, the formal arrangement, the temporal sequencing. His theorizing is geometrical in character. Anything that looks like a game but doesn’t fit the theory is declared a pseudo game, for it resembles no non-game activity we can think of. Further examples include board games like chess and card games like poker. These are said to picture non-game strategic planning and economic activity; and indeed, money may be lost or gained in playing them. Then too we have mating and courtship games, which are taken to model actual mating and courtship; these are said to include athletic and dancing games. The athlete advertises his physical prowess; the dancer succeeds in getting into an embrace with his desired partner (dancing itself is alleged to be isomorphic with sexual intercourse). Then there is boxing and tennis, resembling hand to hand physical combat, actual fighting. Monopoly obviously stands for property transactions and the like. 
In this way LW hopes to persuade his reader that the essence of games is picturing; and if that is not evident on the surface, it can be revealed by in-depth logical analysis.

In his second book LW adopts a very different approach. No longer does he defend a logical picture view of games; indeed, he denies that games have any unifying essence. Instead, he declares that what we call games are united by nothing more than a loose family resemblance. The concept of a game is indefinable. Games are connected to our “form of life” and are held to be examples of rule-following. Rule-following in games is a practice, a custom, an institution. We cannot understand rule-following in games as an inner mental process or a brain state or even a disposition to behavior; it is a community activity. This is LW’s skeptical solution to a skeptical paradox to the effect that there is nothing (no fact) in game rule-following that this alleged activity could consist in; therefore, games do not exist. Here there is some doubt about the correct interpretation of LW’s words, but it is clearly the opposite of the earlier work. Interestingly, he compares playing a game to speaking a language; he tells us that playing a game has all the irreducible variety of speaking a language. There are many kinds of speech act with nothing in common; and the same is true of games, he suggests. He thinks it is relatively easy to see that language has no essence; this provides a nice parallel to his theory of games—they too lack an essence. The concept of a game is as much a family resemblance concept as the concept of a language, he insists. In fact, other analogies can be found in the concepts of a hobby, a job, a work of art, an economy, furniture, and many things. Games are no different from these: all are bereft of necessary and sufficient conditions and are knit together only by loose resemblance. The concept of a game is not the strict monolithic concept he used to think. His meta-philosophy is now that the search for definitions is futile in philosophy, and especially where games are concerned. 
He used to be fooled (“bewitched”) by language into thinking that the concept of a game is a concept unified by a single essence, but now he realizes that it is use that constitutes the meaning of “game”, and we use that word very differently from case to case. He now has a different theory of games in which essence is replaced by varieties of action: chess and football, say, are linked only by a series of loose similarities of behavior at best. Since games are the most important topic in philosophy, so far as he is concerned, LW takes himself to have overthrown the traditional way that philosophy is conducted. He doesn’t take meaning to be so central, because it is narrower than game playing: young children and animals play games but they don’t speak, and speaking is not as important to human culture as game playing. Humans were playing games long before they invented speech, and some scholars have argued that it was games that propelled language into existence (both are rule-governed activities). Language use is really a type of game playing (“language games” he calls it) and so has its roots in that activity. In any case, that has been the trajectory of his thinking on the topic of games over the course of his intellectual life.[1]         

[1] It should be added that LW was wrong about games during both of his periods; the correct analysis was supplied only later by Bernard Suits in his classic The Grasshopper (1978). But we can see why LW came to the views he did—they are not absurd and it wasn’t till Suits stepped in that the concept was defined. What LW made of the work of Ludwig Wittgenstein remains unrecorded; there is talk that he thought that philosopher had his priorities wrong, though his methodology was sound.
