On What It’s Like

It has become orthodox to state the mind-body problem using the locution “what it’s like”. Consciousness is defined as there being something it’s like to be in a given state. Pain is conscious because there is something it’s like to be in pain (it feels a certain way). I will argue that this is neither necessary nor sufficient; it is also very obscure. We would be better off without it. In fact, it turns out to be more or less vacuous, a mere meme (to put it harshly). Let’s start with an intuitive point; then we will get more analytic. The intuitive point is that the locution only applies to a subset of conscious states—roughly, sensations. There is something it’s like to feel pain and pleasure, to experience tastes, to see colors, hear sounds, and feel shapes; but there is nothing it’s like (no way it feels) to have thoughts, know things, have goals, intend to act, introspect. I don’t think that, if we had begun our inquiry with these latter states of consciousness, we would ever have come up with the what-it’s-like formula. If a person had the equivalent of blindsight for every sense, and hence no sensations in the ordinary sense, he could still be conscious—by thinking, knowing, wanting, intending, introspecting. Why is this? We might turn to the comparative use: pain, say, is like some other sensations and unlike others. Suppose someone had never been in pain and wanted to know what pain feels like; we might reply, “Well, it is a bit like tasting hot peppers—it’s intense, unpleasant, and you want it to stop; quite unlike tasting something sweet or hearing a pleasant melody”. In the same way, seeing red is like seeing orange and unlike seeing blue. These are perfectly meaningful statements with an established use; we might hope that the philosophical use could piggyback on it. Then we might go on to suggest that to be exactly like pain a mental state would have to be a pain (ditto for a particular kind of pain, e.g., throbbing).
Thus, what it’s like to experience pain is to have a mental state of pain—nothing else will do, though other mental states may approximate it. To be a pain (etc.) is to be like…a pain. So, haven’t we found a good sense for “what it’s like”? There are two problems with this approach. First, the same is true of mental states that intuitively there is nothing it’s like to have—ones unlike bodily sensations. Thoughts about philosophy are more like thoughts about physics than they are like desires for a biscuit, and a thought can be exactly like itself. Second, such comparisons apply outside the mental realm, as when we say that one metal is like another but unlike a non-metal. So we have not captured what being conscious consists in.

Can this be fixed? There is an obvious move: to be conscious is to be similar to a sensation, sensations being paradigms of consciousness. To be conscious a state must be like a sensation: this comparison must hold. And now we see the problem: this presupposes the concept of a conscious sensation; we are using the concept of sensation to explain what what-it’s-likeness is. We could do that, but then we are assuming that all consciousness is sensational consciousness—that all conscious states are, or involve, sensations. But surely that is not true; sensations are a subset of conscious states, not the general essence of them. The what-it’s-like locution presupposes that all consciousness is the consciousness that sensations have. The OED defines “sensation” as “a physical feeling or perception resulting from something that happens to or comes into contact with the body”. Surely, there are many conscious states that do not involve such feelings. The philosopher is stretching the word “sensation” if he asserts that all conscious states are literally sensations. Some are and some aren’t. Notice that other attempts to define the notion of consciousness do not suffer from this problem; they have the requisite generality. Thus, definitions in terms of intentionality, privacy, or first-person authority; or definitions that say that consciousness is a cluster concept made up of several notions loosely linked. It is hard to devise a suitably general definition of consciousness, and the what-it’s-like definition stumbles on this reef. We can give examples of consciousness by citing sensations and what they are like, but this won’t give us a general definition applicable to all conscious phenomena. It follows that we can’t formulate the mind-body problem as the problem of explaining what-it’s-like in terms of facts that there is nothing it is like to have (brain states, say). The problem isn’t simply the explanation of sensations; that is just one part of the problem.
Also, the what-it’s-like locution turns out to be either completely mysterious or equivalent to simple talk of sensations—to being similar to a sensation. This is why people so readily resort to saying that consciousness is a feeling (a feeling where?). That would make the mind-body problem into the problem of explaining feelings.

Why do we talk this way to begin with? Why do we say there is something it’s like to have a sensation but not a thought or item of knowledge? I don’t know; it is something of a puzzle. But I can imagine how such a use got started: we want to know what a given sensation is similar to (is like) because we are wondering whether to seek out such sensations ourselves—but this is not true of other types of mental state. Thus: should I taste this new fruit, say a pineapple? I ask someone who has tasted a pineapple what it tastes like. He tells me it tastes like a grapefruit only sweeter. This gives me some idea, if I am familiar with the taste of grapefruits. I bite into the pineapple. We are naturally interested in the sensations various stimuli produce and we can gain information about this by being informed of similarities to sensations with which we are already familiar. This is not so for other kinds of mental states like thoughts and wishes. The language game of discussing and evaluating sensations has a place for the “like” locution, and philosophers pick it up and (mis)use it for their own purposes. And it is certainly true that if a given mental state is like a sensation, it will be a conscious state—though this is not a necessary condition of being a conscious state.

I now want to ask whether the formula is sufficient: is it true of every state S that, if there is something it is like to have S, then S is a conscious state? Perhaps surprisingly, that is not so—and in a rather obvious way. This is bad news for the what-it’s-like criterion of the conscious. For consider: there is something it is like to have the brain state of one’s C-fibers firing. If your C-fibers are firing, you are in a state there is something it is like to be in, namely pain. C-fiber firing is sufficient for pain what-it’s-likeness, either by identity or lawlike correlation. Yet C-fiber firing is not in itself a conscious state (unless you are an outright identity theorist). The concept C-fiber firing is not a what-it’s-like concept—unlike the concept pain. But it gets worse: there is also something it’s like to have a pin stick in your hand, but this is clearly not itself a sensation with a what-it’s-likeness attached. And the same goes for a great many bodily states and even external stimuli. It is a conceptual truth that pain feels a certain way, but it is not a conceptual truth that C-fiber firing feels a certain way; so, the alleged definition fails. In other words, many physical states are not sensations, though they are sufficient to produce sensations. These physical states are not like (comparative use) sensations.

What should we conclude? It became fashionable a few decades ago to bandy about the phrase “what it’s like” (thanks to the good work of Brian Farrell, Timothy Sprigge, and Thomas Nagel[1]), and there is no denying its heuristic value. But as a strict definition it leaves a lot to be desired (and was it ever intended as a strict definition?). Other definitions also proved unsatisfactory (or unfashionable): intentionality, privacy, incorrigibility, subjective point of view, higher-order belief, nothingness, etc. Perhaps we do better to rely on a cluster of criteria to guide our thoughts and forgo strict definition. I think that we don’t really know what consciousness is—that is, we have no articulable discursive conception of consciousness (we know it by acquaintance alone). This means we don’t know what the mind-body problem is (in that sense of “know”), construed as the consciousness-brain problem. There is no shame in that and it need not hamper our efforts (do you think physicists know what matter is, or energy?). Still, it is always wise to be aware of the limits of our definitions. Intuition is not the same as insight.[2]

[1] The history of the phrase is of some interest. It was first used by Farrell in his 1950 paper in Mind (the year of my birth), “Experience”, but it made no discernible impact; Nagel informs us in The View From Nowhere that he had read the article but forgot Farrell’s use of the phrase that he (Nagel) later made famous—so it made no real impact on him either (!). Sprigge employed it in 1971, three years before Nagel’s “What is it Like to be a Bat?”, but again no impact. I never heard the phrase used in Oxford when I was a student there (1972-1974). I don’t believe it took off even after Nagel’s paper, not for a while anyway. I have never made heavy use of it in my own writings on the mind-body problem, though it seems to work in getting the problem across to students. Why the neglect? I think the answer is fairly obvious: it just isn’t a very penetrating or theoretically useful phrase. It can serve an introductory purpose, but you don’t want to place too much weight on it. This is why it didn’t catch fire immediately, way back in 1950. I succeeded Brian Farrell as Wilde Reader in Mental Philosophy at Oxford in 1985, so I am trying to bury a phrase he is responsible for introducing 75 years ago. I rather doubt that my opposition to it will catch on quickly either, so entrenched has the phrase become. The slowness of the philosophical mind.

[2] Several philosophical problems are like this: we know more about the problem than we can say. This is because knowledge comes in two forms—roughly, by acquaintance and propositionally. I think this is particularly true in ethics, but also in epistemology (skepticism) and philosophy of perception (sense-data theories and naïve realism). Some people are better than others at seeing the problem, though no better at stating it. It would be nice to be able to say more about this subject.

Reduction Redux

I have been too harsh on reductionism; it really isn’t such a bad thing, correctly understood. It all depends on the kind of reduction. Materialist reduction has given it a bad name, because it is just not plausible (as typically formulated anyway). The OED defines “reduce” as “make or become smaller or less in amount, degree, or size”. That would apply to materialism because it makes the world smaller by reducing everything to the physical: one thing is less than two things. But it also reduces the mind to something it is not—and that is the problem, not the sheer reduction. Even if there were as many kinds of matter as kinds of mind, or even more, it would still be objectionable. If the materialist announced that there are twice as many kinds of mental state as we thought, because of the greater variety of physical states correlated with mental states, that would not lessen its implausibility. This would expand, not reduce, the quantity of mental states, but the trouble is that the nature of mental states is not capturable by physical states. Reduction is okay; bad reduction isn’t. Reductionism is not intrinsically bad. We already knew this: nobody recoils at reducing water to H2O or heat to molecular motion, even though we have reduced the world from two natural kinds to one in each case. Newton reduced terrestrial and celestial motion to a single kind of motion (with tidal motion thrown in), thereby reducing the number of forces in the world; but that is fine because his theory is sound. It is the same for Darwin’s reduction of animal species to animal varieties, thus reducing the number of ways animals can differ; we don’t need another explanation to explain the origin of species. One is a special case of the other. Reductions can be good, illuminating, and true. Reductionism is perfectly acceptable in its place. One man’s reductionism is another man’s theoretical unification.

There are some harder cases. Was Berkeley a reductionist? True, he reduced the number of basic entities in the world from two to one (matter and mind); but he introduced God into the empirical world and rejected mechanism as a theory of mental causation. He expanded ontology while also contracting it. One might object that his theory is false to the nature of matter, but that is not the fault of his reductionism per se. The trouble with reductive idealism is not that it is reductive but that it doesn’t correctly capture what it tries to reduce. What if he tried to reduce everything, including us, to the contents of God’s mind? That is highly reductive, but it doesn’t strike us as failing to do justice to our nature, as materialism does. For the contents of God’s mind can be arbitrarily expansive; putting us in there doesn’t do violence to our evident nature. Is Russell’s theory of descriptions reductive? Yes, in that it replaces definite descriptions with quantifiers, thus reducing the number of primitive expressions; but it doesn’t elicit the response that it is untrue to the meaning of “the”. By contrast, Thales’ “All is water” seems incredibly reductive, because so homogenizing—while “All is atoms” seems quite reasonable. Did we reduce Hesperus to Phosphorus? Yes and no: we got rid of one thing by identifying it with another, but it would be weird to say we reduced Hesperus to Phosphorus (why not the other way about?). Did we reduce stars (some of them) to planets? Did we reduce the Moon to a barren satellite of Earth? Is the true justified belief theory of knowledge a reduction of knowledge? Does the truth conditions theory of meaning count as a reduction of meaning? What about the image theory? Is possible worlds semantics a reduction of modal notions? Etc. These questions seem futile; the only issue is whether the theories in question are plausible. Say what you like about reduction; truth is what matters. The whole idea of reductionism seems empty and pointless.
Is it good or bad? It depends, and anyway the real question is independent of that. The idea of reduction should not play a role in the relevant discussions. Certainly, it is not inherently a derogatory term. There are nice ones and nasty ones, that’s all.

What about the idea of irreducible entities? Suppose I say that colors are irreducible: they are not physical properties or dispositional properties or mental properties, but simple primitive properties in their own right. They have no analysis, no hidden structure, no real essence—they are what they are and no other thing. Isn’t that pretty reductive? I am denying them complexity, depth, an underlying nature. I am saying they are less than other properties, not as complicated, more one-dimensional. What if colors were traditionally regarded as like natural kinds with a hidden real essence—wouldn’t it seem reductive to say that they are no such thing but entirely superficial? Wouldn’t that be received as denying them their due as natural kinds? What if we said the same thing about water? It depends on expectations. If I said that colors are like primitive simple sensations with no further reality than their appearance, wouldn’t that sound reductive? Surely colors have some sort of hidden nature. How do they become attached to objects along with size and shape? Don’t they need something to tie them down to material things, as physical properties and dispositions do? Irreducibility claims can sound pretty reductive in their way, because they deny depth.[1]

Here is a final tricky case. Suppose I grow suspicious of the soul as depicted in religious discourse (immaterial, immortal, possibly disembodied, supernatural). I propose that the soul is reducible to the person, construed as a psychophysical entity or as ontologically primitive. Then someone comes along and argues that the person is really reducible to psychological connectedness, calling himself a reductionist about persons.[2] But then this is deemed suspect because too divorced from the animal nature of persons; it is proposed that human beings are (just) animals of a certain biological species. And then it is suggested that even the concept of animal is too divisive; better to speak of “organisms” so that we don’t draw too sharp a line between animals and other living beings (worms, amoebas, bacteria, plants). But that is thought not quite reductive enough: isn’t an organism reducible to a collection of organs? Thus, the soul is reducible to the organs of the body. Is the concept of reduction doing any useful work here? Isn’t it introducing merely verbal quibbles into the discussion? The real question is whether any of these identifications are true. Asking whether they are “reductionist” cuts no ice. The term has outlived its usefulness. Being a reductionist is neither good nor bad in itself, merely meaningless; similarly for being an anti-reductionist. It is more rhetoric than ratiocination.[3]

[1] What if I said there is nothing more to the ocean than its surface—a primitive property?

[2] This was Derek Parfit’s own self-description.

[3] And yet it has dominated philosophy of mind for lo these many years. You are either a reductionist or an anti-reductionist. I might be described as a “mysterian reductionist”, but how does that differ from believing that mental states have an unknown nature? It certainly isn’t the same as saying that mental states are less than they seem.

Sensible

Hi Colin:

Happy New Year!
As you know, I’m a fan of your blogs, save them all.  Two I find especially relevant for the Landscape of Consciousness, as they are original and insightful, and encourage new ways of thinking. I took each of the blogs and framed them in Landscape style, using your words as quotes, and giving the references and URLs (and footnotes).
If you’d like any changes, edits, etc, very easy to do – please send me a redline or notes.
You now have three theories/positions on the Landscape. The only other person to have three is Dennett. (Several others have two.)
Warm regards,
Robert
Living Landscape of Consciousness


McGinn’s Living Consciousness

Elementary consciousness does not exist in all things, but it does exist in all organic things. The mind is confined to animate things. There are traces of it in all living tissue, but none in anything else. The mind is pan-biological but not pan-physical. Organic tissue is prone to developing mentality, but the same is not true of inorganic objects. Being organic is a precondition of consciousness; it disposes things to having minds in the full sense.

Philosopher Colin McGinn hypothesizes that “elementary consciousness does not exist in all things, but it does exist in all organic things.” He rejects panpsychism as “the doctrine that elements of mind exist in all physical things, down to atoms and their constituents” because “we don’t see inanimate things tending towards mentality, despite their alleged quota of it,” stressing, “The mind is confined to animate things.” He asserts, “There are traces of it in all living tissue, but none in anything else. The mind is pan-biological but not pan-physical. Organic tissue is prone to developing mentality, but the same is not true of inorganic objects.” (McGinn, 2025e, following).

McGinn’s “Living Consciousness” theory states, “Being organic is a precondition of consciousness; it disposes things to having minds in the full sense. We don’t know how or why, but that seems to be the natural trend. Organic tissue is mysterious in this way. In the brain, organic tissue reaches its mental apotheosis, while rocks remain sub-mental. There is proto-mentality in your feet, a faint throb of what can become a full-fledged mind. It is the organic animal body that provides the cradle of mentality.”

McGinn works to undermine panpsychism. “We already know that not everything contains mentality in some form, even for the staunchest panpsychist. Not numbers or empty space or universals or the Good or geometric forms; mentality can’t live just anywhere.” Can the panpsychist explain why, McGinn asks? “Not that we have heard,” he responds rhetorically. His point is to refute the claim that the pan-biologist is being arbitrarily selective, “whereas the pan-physicalist is free of that vice.” Both, he says, “are selective in their own way. In fact, there is room for all sorts of restrictions on the general form of the doctrine of mental ubiquity: you might say atoms have mentality but not the elementary particles that compose them, or only physical things of a certain size and mass, or only organic tissue of certain sorts (not bone, say), or only tissue that has blood flowing through it. It is an empirical question. The evidence is that mentality is associated only with the organic—the correlation is unmistakable. It is a matter of detail precisely where it finds a home. The picture is that matter undergoes a kind of revolution in forming animal bodies, the result of which is the upsurge of consciousness of varying types and degrees. There is nothing simple or all-or-nothing about this.”

McGinn probes the point. “Could some types of biological tissue be closer to overt consciousness than other types of tissue–more packed with the stuff? Is it the amount of blood being pumped through it? Is consciousness blood-consciousness? Blood does seem remarkable in its powers and curious in its composition. There is no consciousness at all in hair and fingernails but plenty in the heart and lungs, according to this view. Is some neural tissue more charged with consciousness than other neural tissue, given that some is conscious and some is not? Is the heart more conscious than any other organ of the body except the brain?”

McGinn, enjoying his speculations, muses, “Fanciful, no doubt, but are such speculations beyond all reason? What would we discover if we had a consciousness microscope? Given the general shape of panpsychist doctrines, all sorts of possibilities present themselves. I rather fancy the idea of a consciousness hierarchy existing in the body, with the liver at the bottom and the brain at the top—with hair and fingernails not even in the running. Perhaps there is a correlation between organic complexity and degree of consciousness (or proto-consciousness)—whatever we might mean by complexity. It’s all terribly mysterious, no doubt, but is it beyond the possibilities of nature? Nature has surprised us many times and continues to do so. So, I suggest exploring the varieties of panpsychism and entertaining the idea of a panpsychism confined to the organic world. Doesn’t it feel right to limit mentality to the organic? We have underestimated the discontinuity between the animate and inanimate” (McGinn, 2025e).

McGinn concludes with a meta-self-reflection, “I freely admit I am venturing out on many limbs here, with analytic philosophy left far behind. So be it.”


Categories

Materialism
Phylogenetic/Evolutionary


References

McGinn, 2025e. Colin McGinn, “Living Consciousness”, Colin McGinn Blog, December 19, 2025.


Landscape of Consciousness


McGinn’s Brain Perception

Consciousness is perceiving your own brain. This isn’t because all consciousness is a brain state; it’s because all consciousness is brain perception. Consciousness is brain awareness—awareness of the brain. All consciousness is consciousness-of… the brain. This can be stated as an identity theory: mental states are identical to perceptual states of brain awareness.

Colin McGinn

Philosopher

Colin McGinn is a British philosopher known for his work in the philosophy of mind, especially his theory of “new mysterianism” regarding consciousness. He has taught at Oxford and Rutgers and has authored over 20 philosophical books. His blog is a masterpiece of philosophical insights — https://www.colinmcginn.net/blog/.

Philosopher Colin McGinn says he is “going to adumbrate a new theory, quite an eye-stinging one. It says that you perceive your own brain.” Consciousness is perceiving your own brain. This means, to be more specific, “pain is the perception of C-fibers firing. It isn’t C-fibers firing themselves, but the perception of that.[1] The relation between pain and C-fibers is like that between seeing a dog and the dog: they are numerically distinct and yet closely entwined. The sensation of pain has a perceptual object and it’s in the brain. And not just pain: visual sensations, too, are perceptions of brain states (perturbations of the occipital cortex). When you see a dog, you also see your brain, or a bit of it. In fact, all consciousness is brain perception” (McGinn, 2025f, following).

Even all thinking, McGinn says, “is perceiving (sensing) your own brain. This isn’t because all consciousness is a brain state; it’s because all consciousness is brain perception. To put it with maximum provocativeness, consciousness is brain awareness—awareness of the brain. All consciousness is consciousness-of… the brain. This can be stated as an identity theory: mental states are identical to perceptual states of brain awareness. I don’t say only such awareness; rather, they are brain perception plus awareness of other things (if they have intentionality, that is). They have a kind of double intentionality: of the out-there and the in-here. You sense your environment and you sense your own body in the person of the brain. You have a dual awareness. If your mental states are states of your brain, then we can say that your brain senses itself: pain, say, is a state of your brain that is identical to a perception of your brain. On the other hand, if pain is a state of an immaterial substance, then it is a state of an immaterial substance that is identical to a perception of your brain. Thus, your mind has a relational structure: it stands in the relation of perceiving to your brain—as well as to other objects. If you have a tactile relational perception of an external object, you also have a tactile relational perception of your brain (the tactile part of it)—you touch your own brain, to put it crudely.”[2]

McGinn addresses what one might sense as an obvious problem with his theory: “people can have minds without knowing much if anything about brains. For surely, we don’t perceive anything as a brain when we enjoy ordinary experiences. I don’t feel that my C-fibers are firing when I am in pain. But that is not what the brain perception theory says; it says only that I am aware of my brain, not that I am aware that my brain is doing such-and-such. The awareness is de re not de dicto: it is true of my brain that I am aware of it—not that I am aware of it under a brain description. Perceptual statements have a de re/de dicto ambiguity, and the brain perception theory endorses only the de re reading. You can be conscious of something x that is actually F without being conscious that x is F. Compare ordinary visual perception: you can see (be looking at) an object that is in fact a block of atoms without seeing it as a block of atoms. We are aware of collections of atoms all the time but not as collections of atoms. We can be aware of objects that satisfy all manner of descriptions without knowing these descriptions or otherwise mentally representing them…. Thus, the brain perception theory is only claiming that we have de re perceptions of our brain states; it isn’t that we get mental images of our brain whenever we have an experience—or that our brains even cross our minds” (McGinn, 2025f).

How does McGinn defend the idea that the relation between mind and brain is one of perception? “One reason is that we get a nice uniform account of the nature of the mind: all mental phenomena are perceptions of the brain—this is what they are (the essence of the natural kind)… But second, and more subtly, it is part of the phenomenology of experience to sense an inner reference: we feel that our mental states are somehow inner. We certainly don’t feel them as outer, and I don’t think we are neutral on the question; we feel that they belong with us, internally. I don’t think my mental states might be outer; they strike me as definitely inner. But what kind of inner?”

McGinn sets a contrast with the “inner” as “an immaterial substance—the Cartesian ego.” If that were so, he says, “we would be in a perceptual relation to the states of such a substance, according to the perceptual theory of the mind. But we have discovered that it is the brain that houses and services the mind, so that entity is a better choice of perceptual object. If mental states are perceptions of something internal, as they seem to be, then the brain is the best candidate.”

Going further, McGinn asks, “But why suppose that these objects are perceived? That turns on what perception is, a question hitherto left dangling. The answer, I suggest, is that perception is basically a matter of a response to a stimulus—an action of registration or tracking or indicating. The mind is tracking the brain, recording its doings; the two are reliably correlated. The mind, we might say, senses the presence of the brain; not de dicto, to be sure, but de re. The brain causes the mind to respond in certain ways, such that you can read one off from the other. The mind perceives the brain in the sense that it is sensitive to what the brain is up to—as the senses are sensitive to what the environment is up to. To a well-informed intelligence, the mind would provide information regarding the brain. The mind senses changes in the brain, but only as changes in something internal, not as neurological changes—which they actually are.”

Assuming this idea sinks in, McGinn argues that “it seems very natural to speak of the mind as perceiving the brain; it is keeping tabs on the brain, resonating to its activities, albeit in a sort of code. Consciousness acts like a secret code for the brain, a kind of translation. The brain is encoded in the mind, as the external world is encoded in sensory experience. Thus, it is natural to speak of mental states as perceptions of the brain—ongoing (partial) reports on it. Pain is the code word for C-fibers firing, if only we could crack the code.[3] If one day we manage to decode the code, we will naturally think of our experiences as messages from the brain; and we might become very adept at this and regard consciousness as we now regard vision in relation to the environment. The concept of perception is fairly elastic, so we might find ourselves happily using it to talk about our states of mind (‘I felt my frontal cortex to be unusually active this morning’, ‘My hypothalamus feels slow today’). We might even become able to selectively attend to certain regions of the brain, as we can with our senses.”

According to McGinn, how should we think about a mental state? “It points in several directions. It points to the external world—this is intentionality: there is a dog over there. It points to the internal world of the brain—this is brain perception: my C-fibers are firing. It points to behavior—this is action: I am about to throw a ball. The mind is about things; it is brain-sensitive, and it is functionally active. Two of these features are very familiar to us. I am adding a third feature: it is brain-perceptive.”

McGinn concludes by finding it “fascinating that the mind is a window onto the brain, another way to ‘see’ the brain. When I see a tree, I can sense my brain fizzing away just below the surface (or I fancy as much). I feel that I can focus on it, get to know it better. I feel closer to my brain now, less alienated from it. My phenomenology has shifted. I have become more brain-centered, existentially”[4] (McGinn, 2025f).

Footnotes

[1] It should be noted that this theory is compatible with materialism: the act of perceiving C-fibers firing could be a brain state distinct from C-fibers firing. Pain would then be the brain state of perceiving the brain state of C-fibers firing—a kind of combination of the two.

[2] I say “touch” because a tactile sensation is arguably sufficient to qualify a sense as the sense of touch; you don’t need a physically touching body.

[3] We could devise a code in which causing you a (mild) pain acts as a sign that there is danger nearby. In terms of information theory, pain carries the information that one’s C-fibers are firing (makes it more probable).

[4] It is interesting to ask, in a science fiction spirit, what human life would be like if we knew the brain state corresponding to a given mental state, for all mental states. It would make our consciousness pretty jammed, I’m sure; maybe we are lucky not to perceive our brain in the de dicto way. It’s really best to minimize the content of consciousness for all practical purposes. There might I suppose be a brain pathology in which someone did have sensations of his brain when he experienced anything (The Man Who Mistook His Wife for a Brain).



References

McGinn, Colin (2025f). Brain Perception. Colin McGinn Blog, December 24, 2025. https://colinmcginn.net/brain-perception/



A (Really) Brief History of Knowledge


This is a big subject—a long story—but I will keep it short, brevity being the soul of wisdom. We all know those books about the history of this or that area of human knowledge: physics, astronomy, mathematics, psychology (not so much biology). They are quite engaging, partly because they show the progress of knowledge—obstacles overcome, discoveries made. But they only cover the most recent chapters of the whole history of knowledge—human recorded history. Before that, there stretches a vast history of knowledge, human and animal. Knowledge has evolved over eons, from the primitive to the sophisticated. It would be nice to have a story of the origins and phases of knowledge, analogous to the evolutionary history of other animal traits: when it first appeared and in which organisms, how it evolved over time, what the mechanisms were, what its phenotypes are. It would be good to have an evolutionary epistemic science. This would be like cognitive science—a mixture of psychology, biology, neuroscience, philosophy, and the various branches of knowledge. It need not focus on human knowledge but could take in the knowledge possessed by other species; there could be an epistemic science of the squirrel, for example. One of the tasks of this nascent science would be the ordering of the various types of knowledge in time—what preceded what. In particular, what was the nature of the very first form of knowledge—the most primitive type of knowledge? For that is likely to shape all later elaborations. We will approach these questions in a Darwinian spirit, regarding animal knowledge as a biological adaptation descended from earlier adaptations. As species and traits of species evolve from earlier species and traits, so knowledge evolves from earlier knowledge, forming a more or less smooth progression (no saltation). Yet we must respect differences—the classic problem of all evolutionary science.
We can’t suppose that all knowledge was created simultaneously, or that each type of knowledge arose independently. And we must be prepared to accept that the origins of later knowledge lie in humble beginnings quite far removed from their eventual forms (like bacteria and butterflies). The following question therefore assumes fundamental importance: what was the first type of knowledge to exist on planet Earth?

I believe that pain was the first form of consciousness to exist.[1] I won’t repeat my reasons for saying this; I take it that it is prima facie plausible, given the function of pain, namely to warn of damage and danger. Pain is a marvelous aid to survival (the “survival of the painiest”). Then it is a short step to the thesis that the most primitive form of knowledge involves pain, either intrinsically or as a consequence. We can either suppose that pain itself is a type of knowledge (of harm to the body or impending harm) or that the organism will necessarily know it is in pain when it is (how could it not know?). Actually, I think the first claim is quite compelling: pain is a way of knowing relevant facts about the body without looking or otherwise sensing them—to feel pain is to have this kind of primordial knowledge. To experience pain is to apprehend a bodily condition—and in a highly motivating way. In feeling pain your body knows it is in trouble. It is perceiving bodily harm. Somehow the organism then came to have an extra piece of knowledge, namely that it has the first piece, the sensation itself. It knows a mode of knowing. Pain is thus inherently epistemic—though not at this early stage in the way later knowledge came to exist. Call it proto-knowledge if you feel queasy about applying the modern concept. We can leave the niceties aside; the point is that the first knowledge was inextricably bound up with the sensation of pain, which itself no doubt evolved further refinements and types. Assuming this, we have an important clue to the history of knowledge as a biological phenomenon: knowledge in all its forms grew from pain knowledge; it has pain knowledge in its DNA, literally. Pain is the most basic way that organisms know the world—it is known as painful. Later, we may suppose, pleasure came on the scene, perhaps as a modification of pain, so that knowledge now had some pleasure mixed in with it; knowledge came to have a pain-pleasure axis. 
Both pain and pleasure are associated with knowledge, it having evolved from these primitive sensations. This was long ago, but the evolutionary past has a way of clinging on over time. Bacterial Adam and Eve knew pain and pleasure (in that order), and we still sense the connection. Knowledge can hurt, but it can also produce pleasure.

Notice that the external world has not yet come into the picture. There is as yet no knowledge of material objects in space, so the first knowledge precedes this kind of knowledge (subjective knowledge precedes objective knowledge). But it is reasonable to suppose that the next big stage in the onward march of knowledge—the age of the dinosaurs, so to speak—involves knowledge of space, time, and material bodies (the “Stone Age”). I mean practical knowledge, not advanced theoretical knowledge—knowing-how, as we now describe it. The organism knows how to get about without banging into things and making a mess. We could call this “substance knowledge”. How pain knowledge led to this type of knowledge we don’t know; what we do know is that it marked a major advance in the power of knowledge, because it introduced the subject-object split. Now knowledge has polarity built into it: here the state of knowing, there the thing known. In the pain phase such a division did not exist in re, but when external bodies came to be known knowledge distinguished itself from the thing known. That is, perception of the external world involves a subject-object split. Distant things are seen and heard. This division was already present in plants as they orient themselves to external objects—the sun, water, the earth. But they don’t know these things, though it is as if they do; it took pain (and pleasure) to convert this kind of directedness into knowledge proper. If trees felt pain, they might well be perceiving subjects, given their tropisms and orienting behavior. So, let’s declare the age of sense perception the second great phase in the development of knowledge on planet Earth. The two types of knowledge will be connected, because sensed objects are sources of pain and pleasure: it’s good to know about external objects because they are the things that occasion pain or pleasure, and hence aid survival.

I will now speed up the narrative, as promised. Next on the scene we will have knowledge of motion (hence space and time), knowledge of other organisms and their behavior (hence their psychology), followed by knowledge of right and wrong, knowledge of beauty, scientific knowledge of various kinds, social and political knowledge, and philosophical knowledge. Eventually we will have the technology of knowledge: books, libraries, education, computers, artificial intelligence. All this grows from a tiny seed long ago swimming in a vast ocean: the sensation of pain. From “Ouch!” to “Eureka!”. We go to universities because our distant ancestors felt pricks and pangs: one sort of knowledge led to the other after a brief period of time (by cosmic standards). A super-scientist might have seen it coming (“It won’t be long before they have advanced degrees and diplomas”). The point I want to stress is that this is a natural evolutionary process, governed by the usual laws of evolution—cumulative, progressive, opportunistic, gradual. As species evolve from other species by small alterations, so it is with the evolution of knowledge; there is no simultaneous independent creation of all the species of knowledge. Knowledge-how, acquaintance knowledge, propositional knowledge, the a priori and the a posteriori, knowledge of fact and knowledge of value, science and common sense—all this stems from the same distant root (though no doubt supplemented). It was pain that got the ball rolling, and maybe nothing else would have (pain really marks a watershed in the evolution of life on Earth). Knowledge of language came very late in the game and is not to be regarded as fundamental. Epistemology is much broader than language. Knowledge has all the variety and complexity we expect from life forms with a long evolutionary history. Quite a bit of the anatomy of advanced organisms is devoted to epistemic aims—the eyes, the ears, the nose, the sense of touch, memory, thought, and so on.
Knowledge is not a negligible adaptation. Yet it must have comparatively simple origins. It didn’t arise when a human woke up one bright morning and felt a love of wisdom in his bosom. It arose from primitive swampy creatures trying to survive another day.

I will make one further point: knowledge, like life in general, is a struggle with obstacles. Survival isn’t easy, and nor is knowledge. In both there are obstacles to be overcome, resistance and recalcitrance to be fought, battles to win or lose. Knowledge is hard: you know it don’t come easy. It’s a difficult task. Those books about the history of science draw this lesson repeatedly—it wasn’t easy to figure out the structure of the solar system or the laws of genetics. But that is part of the very nature of knowledge as an evolved capacity—the struggle to be informed. The organism needs to know if it is in danger, so pain came along; we would like to know whether the Earth is the center of the universe, so astronomy was invented. Knowing is the overcoming of obstacles, like the rest of evolved life. Knowledge was born in pain and struggle. It is not for the fainthearted. This is epistemology naturalized.[2]

[1] See my “Consciousness and Evolution”, “The Cruel Gene”, “Pain and Unintelligent Design”, and “Evolution of Pain”.

[2] Quine talked about epistemology naturalized, eschewing (his word) traditional epistemology. I am not eschewing anything; I am adding not subtracting. I want to acknowledge the biological roots of knowledge, finding knowledge in nature (it’s not about schools and examinations). Books are recent accessories. The very first knowledge is an organism feeling pain for the first time: it hurts but at least it gains valuable information. Eventually, organisms grow to love knowledge—we become scholars of reality. The pain is a distant memory. Still, if you read the book of knowledge (chapter 3 of the Book of Life), you find a footnote to primordial pain.


Knowledge and Time


I shall make some remarks about a topic neglected by epistemologists—the relationship between knowledge and time, particularly future time. The relationship is not simple or easily grasped; there is a reason for the neglect. I will try to keep it as uncontroversial as possible; this is to be preliminary groundwork. Truisms, not breakthroughs. The big question is this: Can we know, perceive, refer to, or have justified beliefs about the past, the present, and the future? I think it would be generally agreed that we can and that we have these relations to the past and the present—but to the future not so much. There might be some argument about whether we perceive the past, or even whether we perceive the present: with what sense do I perceive what happened yesterday, and what about the time lag between the event perceived and the perception of it? Don’t we infer the past from our present memories of it, and don’t we really perceive an event in the past given the time it takes for a perception to be formed? But in the case of the future there is really no doubt that we don’t perceive it; the question has been whether we can still know it. For perception requires causation and causation never runs from the future to the past: what happens tomorrow cannot cause what happens today, in the mind or elsewhere. This familiar point is surely correct and scarcely disputable, but it needs to be fully absorbed: we cannot be acquainted with the future; we cannot directly apprehend it; we cannot be consciously aware of it. We cannot know it in the perceptual sense; we can only know it, if at all, by inference. It is necessarily imperceptible. In the case of the past, we can know it directly (by memory of past perceptual encounters) or by inference from these, but we can never know the future in this direct way—because we can never perceive it. You can’t now see what will happen tomorrow, no matter how much you strain your eyes—light doesn’t travel into them from the future.
So, this basic source of knowledge is completely unavailable to us, in principle and forever. Given that perceptual knowledge is the basis of all knowledge, the question must then arise as to whether we can know anything about the future. Isn’t it just too cut off to be known? Aren’t we limited to mere guesswork, chance truth, accidental match? And these are not instances of knowledge. Shouldn’t we be absolute agnostics about the future? What you can never see you can never know. The case is even worse than other minds, because at least in that case we have causal relations between object and subject: the other’s mental states cause my mental states via his behavior, which I see with my own eyes. But the states of the future can never cause mental states in me, or any other present states. The past is not an effect of the future, as the future is an effect of the past. Thus, I cannot know the future by perceiving it, or any part or sliver of it, however indirectly or remotely. I am completely shut off from it. We are separated by an epistemological wall, based on a metaphysical necessity.

But it doesn’t follow that we can’t have true justified beliefs about the future: so, can we? Let me first note that if we can it won’t follow that we can have knowledge of the future (knowing-that). It would follow only if knowledge is, or can be, true justified belief; but the future provides a clear counterexample to that analysis. Suppose I have a true justified belief that war will break out tomorrow: do I thereby know it will? Intuitively, no: I don’t know this fact, I only believe it.[1] I think it will and for good reason, but I don’t really know it—not like I know that there’s a cat in front of me. I am not acquainted with that future war. We would be perfectly within our rights to deny that anything about the future is ever known, even if we allow that we can have reasonable true beliefs about the future. And indeed, I venture to suggest that this is the common opinion: the future is not knowable—though it is conjecturable. You can have beliefs about it, but these don’t amount to cases of knowledge. So, the JTB analysis of knowledge is insufficient (this is a kind of Gettier case, in effect). But we can still ask whether such beliefs are ever justified, discounting the knowledge question. Now this is extremely well-trodden territory, which I don’t intend to re-tread. I will make two points about it. First, this is not the “problem of induction”: that problem is not inherently about the future; it can apply to both the past and the present. Were all past swans white and are all present swans white? The problem of induction is about generalizing from a sample to a whole population, not about inferring the future from the past. The second point is that induction is the only way we can know about the future, since we are perceptually closed to the future. We can perceive the past (perhaps we always do) and we can perceive the present (pace the time-lag argument), but we cannot as a matter of necessity perceive the future. 
This puts belief about the future in a much worse position than belief about the past and present, since we don’t know even what it would be to perceive the future. What would it even feel like? If an alien had such a sense, would we be able to grasp its phenomenology? On top of that we have the problem of induction itself, which strikes even regular people as problematic. True, we reflexively form expectations for the future (as Hume famously observed), but this has nothing to do with reason; we would have such reflexes whether the future resembles the past or not. Induction is notoriously difficult to justify. At best future predictions are perilous and indemonstrable. You don’t have to be a skeptic to feel that we are deeply ignorant of the future (you have never been there); indeed, this is hardly worth calling skepticism, since ordinary folk are already queasy with talk of justification and knowledge regarding the future (this is not so for the past and present). In sum: we have neither perceptual knowledge of the future nor solid justification of beliefs about the future—just instinct, conditioning, and blind faith. This is why there is no history of the future—no narrative of what will happen. One might be forgiven for supposing that the future is not a fit object of human knowledge; we just talk as if it is for pragmatic reasons. Strictly, we shouldn’t even have beliefs about the future, since belief presupposes justification; we should only have attitudes of surmise and speculation (good Popperians about what will be). At any rate, that is a position with an intelligible rationale.

This problem has always haunted science, because science purports to be predictive—and yet empirically warranted. Empiricism bases knowledge on experience, but we don’t experience the future; it ought then to be out of epistemic bounds. I have no “impression” of the future events I predict, so how can I know about them? How then can an adequate philosophy of science be empirical? Popper took a radical line; others have suggested that science doesn’t make factual predictions but is only a useful tool for getting along in the world. But it is always future-oriented and hence open to criticism from a consistent empiricist. History has no such problem—or logic and mathematics and philosophy. The epistemology of science has therefore always been under a cloud, as exceeding what can be humanly known. Hume was well aware of this (Popper made a big deal of it). Proust wrote a long book called Remembrance of Things Past but not Expectations of Things Future, because there is so little to say under the latter head; there is no madeleine of the future. This is our human epistemic predicament and the source of much of our anxiety (it isn’t only death). We can describe the future and fear it, but we can’t know it—not really. We can know (sic) the future only by using the past in conjunction with induction, but induction is eminently questionable, so we are in perpetual doubt about the future. The future itself is terra incognita. At best we go on external signs of it.[2]

[1] See my “Perceptual Knowledge” and “Non-Perceptual Knowledge”.

[2] The idea of the crystal ball is instructive: the only way the future can be genuinely known is by being seen in the shape of a transparent sphere—a portal to the future. Time travel is similar (and equally fictional): we can know the future only by going there and clapping our eyes on it—up close, directly, under our nose. Pure fantasy, of course, but it feeds off our epistemic anxiety concerning the future: the future is the unknown in its purest form, outdistancing even the most remote galaxy or secretive mind. We can “know” it only by comparing it with the past—its very opposite. How could the past ever tell us about the future? Time has no patience with our intellectual limitations. The future is the twilight zone but without any light. That is the terrible truth.


An Answer to the Skeptic


Skepticism gains traction from the true justified belief theory of knowledge, because it can be argued that our beliefs are seldom if ever justified. But that is just one theory, not a datum. What if we adopt another type of theory? I observe, to begin with, that other types of knowledge than knowing-that are not subject to skeptical argument: knowing-how and knowledge by acquaintance. You can’t use a brain-in-a-vat scenario to undermine the claim that we have knowledge-how and knowledge by acquaintance, because these are far removed from what is called propositional knowledge (Russell’s knowledge-by-description). I have knowledge-how if I have a certain ability, whether I can justify the claim that I have this ability or not, even while I am a brain in a vat and have never done the thing in question, e.g., throw a ball. I can also know by acquaintance what red is without knowing whether there is an external world; it just depends on what I have experienced not what beliefs I can justify. These are not evidence-based types of knowledge, so the quality of the evidence cannot be impugned. So, the concept of knowledge is not inherently susceptible to skeptical challenge. But what about knowledge-that?

Suppose we go for a perception-based theory of knowledge not a justified true belief theory: that is, we extend acquaintance to knowledge of facts.[1] For example, I might take myself to know that I am lying in bed, and suppose I am: do I really know that I am? That depends on whether I perceive that I am lying in bed, which doesn’t follow from taking myself to be and this actually being the case. Suppose I do perceive this (the fact of my being in bed causes me to have the experience); then we say I am acquainted with this fact—whether I can justify the belief or not. My knowledge depends only on the facts not on my ability to have true justified beliefs about them. Even if I can’t rule out the hypothesis that I am a brain in a vat, I am still causally connected to the fact in question, if indeed it is a fact. Thus, I can know this fact independently of my ability to justify my beliefs about it: for my knowledge is not based on any such justification; it just arises from my being in a perceptual relation to the fact. Knowing facts by acquaintance (perceiving them) isn’t susceptible to the standard skeptical argument. But if that’s what knowledge is, then we have defeated the skeptic about knowledge: the knowledge exists—whether we can know this or not (we are not discussing second-order knowledge). Perceptual knowledge does not depend on possessing justified beliefs. The skeptic has no argument against the possibility of first-order knowledge of facts based on direct acquaintance.

This point may be conceded, but what about all the putative knowledge that is not based on acquaintance but on inference? What about belief-based knowledge based on evidence and justification? Surely the skeptic can get his fangs into that! Here we might agree but insist that no sensible person has ever supposed otherwise: of course, we can’t know what goes beyond our direct apprehension of fact; we can only surmise. It is a misuse of the concept of knowledge to suppose otherwise.[2] Perhaps we can stretch a point and agree that we might call such belief “knowledge” in a relaxed frame of mind, but it was never really knowledge, as distinct from reasonable speculation, just loose talk for pragmatic purposes. I don’t know that atoms exist, though I might have reason to believe that they do; I only truly know what I myself directly perceive. If so, it was never part of (sensible) common sense to apply the concept of knowledge beyond its proper domain, so the skeptic is tilting at windmills and parading truisms. I know hugely many facts about the external world just by perceiving it, even though there are many things I reasonably believe that don’t count as knowledge. Isn’t that what we normally suppose?

Here the skeptic may retract his horns: okay, he says, but we still don’t have adequate justification for the beliefs we hold, despite the fact that we have a lot of knowledge by acquaintance. To that weakened form of skepticism we can reply as follows. We can simply agree with the skeptic but point out that he has said nothing to rule out knowledge in the areas proper to it; he is talking about something else entirely, i.e., justified belief. But second, we could recommend a comparative notion of justification combined with some absolute cases of it. Thus, I am fully justified in believing I am in pain and I am more justified in believing that eagles fly than that pigs fly. We need not claim that all justifications are created equal in order to rebut the justification skeptic—that would be absurd. Justification comes in degrees of cogency and not all measure up to the perfect case—whoever denied it? So, the justification skeptic has not raised a startling new epistemological challenge that undermines commonsense epistemology. We can all agree that our justifications are pretty shabby, judged objectively, but still maintain that the concept of justification is in good order with useful applications. What we are not going to agree on is that there is no such thing as knowledge of the external world; and the correct concept of knowledge does not invite any such conclusion. So, the skeptic has left commonsense epistemology more or less where it was, not counting those rash epistemological optimists who ought to know better. Sound minds have always known that human knowledge is a more limited affair than has sometimes been advertised; that is not skepticism but realism. Human knowledge: its scope and limits.

In case you think this kind of anti-skepticism is toothless, let me note its consequences for knowledge of other minds and the past. For we can now be said to know facts about other minds and the past: that is, such facts can act as the cause of our perceptual states. I know you are in pain because your pain has caused behavior that I perceive as pain-expressing—that is a fact. I can’t justify my belief that you have a mind to the skeptic’s satisfaction (and not unreasonably), but that doesn’t prevent me from being in a knowledge relation to the fact in question. Similarly, past facts cause current memories, so I know them by something akin to perception (it might even be perception)—even if I can’t justify my belief that there is a past. Thus, I can know facts about other minds and the past by something like perceptual acquaintance, though (arguably) I can’t justify my beliefs about these things. Knowing facts is one thing, justifying beliefs is another. To put it simply, if knowing is seeing, then I know a great many things; what beliefs I can justify is another matter, and may well be shakier than some people have supposed. Since no genuine knowledge is constituted by true justified belief—that is just an incorrect analysis—it is irrelevant to knowledge if adequate justifications for belief are unforthcoming.[3]

[1] See Michael Ayers, Knowing and Seeing (2019); also, my “Perceptual Knowledge”.

[2] See my “Non-Perceptual Knowledge”.

[3] A virtue of the account given here is that it concedes some territory to the skeptic—he isn’t just barking up the wrong tree—but it doesn’t concede his most radical claim, namely that nothing is known about the external world (or other minds and the past). Our alleged justifications don’t really warrant the kind of strong belief we are apt to derive from them, but that has nothing to do with our ability to have knowledge; and indeed, we have a lot of that. Knowledge proper was never about warranted belief. Human knowledge, like animal knowledge, is in good shape, though quite restricted; belief on the other hand cries out for justification and often falls short of it. That’s why some philosophers (e.g., Popper) dispense with it in serious contexts.
