Stereotypes

 

 

Wittgenstein warned against the “craving for generality”: the mistaken desire to find uniformity where there is only diversity. Thus philosophers are apt to suppose that all words are names, all sentences are assertions, and all states of mind are experiential—despite a heterogeneity that is evident to unbiased inspection. In a word, they overgeneralize, often based on preconceived ideas. Hence his admonition: “Don’t think, look!” Evidently this is a natural tendency of the human mind, which we need to resist. At roughly the same time Nabokov was inveighing against what he called “generalities”, particularly on the part of historians. He begins his 1926 essay “On Generalities” with these words: “There is a very tempting and very harmful demon: the demon of generalizations. He captivates human thought by marking every phenomenon with a little label, and carefully placing it next to another, also meticulously wrapped and numbered phenomenon.”  He goes on to excoriate historians for indulging in the tendency to find spurious similarities, thus erecting oversimplified “epochs”, “ages”, and “periods”. Walter Lippmann in his 1922 book Public Opinion first introduced the word “stereotype” in its modern sense (before that it was used in the printing industry), i.e. as connoting a tendency towards social overgeneralization and simplistic grouping. Here again the fault lies in not looking, observing and recording, but instead relying on overly simple and homogenizing ideas. Thus, according to these three thinkers, we find people mistakenly supposing (a) that all words are names, (b) that all authors of a certain period are to be classified as (say) Romantics, and (c) that all people of a certain culture or ethnic group have such and such characteristics. In each case we have what we would now call stereotypical thinking; and the problem with it is that it is false and can be very harmful. 
Yet people keep on doing it: they keep on ignoring the facts, the details, and the individualities, preferring instead the meritless generalization, the lazy grouping, and the prejudiced espousal of alleged commonalities. In some cases this is clearly deplorable, contemptible, and just plain stupid; but human beings seem intent on indulging their craving for generality no matter the cost. It is as if they have a demented love of identity.

            I emphasize the pervasiveness of the phenomenon because social stereotyping is not some isolated and unique failing: it is built into the fabric of human cognition (which is not to say it is incurable). It is entirely possible to stereotype animals, fictional genres, popular music, athletes, professors, and even rocks. Overgeneralization and preconceived ideas are the stuff of weak minds. Today there is a tendency to stereotype certain individuals as “powerful white males” and then run away with all sorts of misguided and erroneous ideas about this supposed class. Philosophers and other academics are not immune to this, shameful though it may be.  [1] Often it is done in order to advance a certain political agenda, but it is also just plain lazy thinking and good old-fashioned stupidity. It needs to be identified for what it is and condemned in the strongest terms. Stereotyping is never an acceptable way to think (sic). No doubt it has its biological origins in a need to economize on time—to make thinking more efficient. Fine, but not at the cost of accuracy and justice: because stereotyping people is a vicious and idiotic habit. It needs to be stamped out (is jail time too much to ask?). Children need to be educated strenuously in its fatuity and danger. It has to go.

            Under what circumstances does stereotyping take hold? I think it’s when people feel overwhelmed by the sheer diversity of the world (of course, there can also be emotional reasons, meme transmission, indoctrination, imitation, and the like). If the world is perceived as intolerably complex, people will want to simplify it and make it more manageable: it’s easier to think that every word is a name, that every writer from a certain period belongs to a certain “school”, and that every person of a certain appearance is thus-and-so (generally a negative thus-and-so). This is no doubt understandable, but it is not commendable, or even forgivable. In a society like America, which has a very diverse population, the urge to oversimplify and classify is particularly strong: it’s just easier to try to subsume everyone under certain crisply defined categories. The task of education should be to combat this cognitive weakness. What is needed is a type of therapy (as Wittgenstein recommended therapy for the disease of philosophical overgeneralization): people need to be cured of their stereotyping, as if it were a disease. In fact, it really is a type of mental disease—just an exceptionally common one (like the common cold). The first step here is to get a proper sense of its pathological character, its demonic morphology, its onset and progression. A stigma needs to be firmly attached to it. As it is, stereotyping gets reinforced and solidified; it needs to be exposed and ridiculed. But don’t count on academic philosophers to do any of this good work; they seem as prone to it as anyone else, sad to say. It really is a problem, despite its obvious malignity.  [2]

 

  [1] In fact academics are professionally prone to it, because they make a living erecting generalizations, producing taxonomies, and promoting theories; they would feel thwarted if restricted to reporting the facts. Recalcitrant facts are the enemy of the ambitious theoretician.

  [2] This is generally recognized, though people seem oddly blind to it in themselves (they just have better stereotypes). I am attempting here to describe it succinctly and clearly so that its malign presence can more readily be eradicated.


Ignorance and Solipsism

 

 

Is it possible to refute solipsism-of-the-moment? Is it possible to show that there must be more to the world than oneself and one’s current state of consciousness? The standard approach has been to assert that certain propositions about the world beyond consciousness are indubitable—say, the proposition that I have two hands. The thought is that I can know with certainty that certain propositions about the external world are true, so I know that there is more to reality than my current state of consciousness. I want to suggest a different approach, namely that ignorance disproves solipsism: that is, I know with certainty that I am ignorant of certain things. Normally we take this for granted: we accept that there are all sorts of facts that we don’t know—because they are too far away, hidden behind things, too small to see, etc. We accept that the world is a big place and we have very limited knowledge of it. This assumes that the world extends beyond what is currently in one’s field of consciousness: if there is ignorance, then there must be something we are ignorant about. There must be more to reality than what is currently in one’s mind given that one is ignorant of certain things. For if that were all there is, then one would know everything about reality, there being nothing else. Solipsism is incompatible with ignorance; so if there is ignorance, solipsism must be false. The existence of ignorance refutes solipsism.

            That seems hard to deny, but can it be maintained that it is logically possible that there is no ignorance? We think we are ignorant of many things, but the solipsist could try saying that we are not—we know everything about the world by knowing that we exist and that we have such and such states of consciousness. But that seems like an enormous stretch: surely there are things that we don’t know! Is the solipsist committed to omniscience, happily so? That was not part of the original package; we didn’t think solipsism entails godlike knowledge of the entire universe. On the solipsist’s view, I thought I was ignorant of the date of a certain battle, say, but in fact I am not ignorant of this, because there was no such battle—there is nothing like that to be ignorant about. There aren’t any battles or people to fight them or places to fight them in. The only facts that exist are facts about my introspectively available consciousness, and I know all those. But still, the solipsist might insist, perhaps it is really so—this is just an unexpected consequence of accepting solipsism. It cures all ignorance! However, things are not quite so simple, because questions can be raised about the solipsistic world that we might not be able to answer. For instance, where do I come from, and what is the cause of my states of consciousness? Do I perhaps come from nowhere at all? I might not be able to answer these questions, which means there are facts I don’t know: there are facts about the universe that go beyond what I can know by introspection. So my self and its subjective states don’t exhaust all the facts. If I sprang from a deity’s act of will, then clearly there is more to the universe than me and my states of consciousness; but even if I just spontaneously burst into existence, this is a fact that is not contained in my consciousness. I am ignorant on the question of my origins, which means that not all facts can be known to me by mere introspection. 
Maybe there are no objects apart from me, but not every fact is contained within my current consciousness. And if that is so, the solipsist cannot have a complete account of all the facts: being a fact is not identical to being a fact of consciousness. More intuitively, the solipsist cannot consistently accept that ignorance is a fact of life for any knowing consciousness. It is quite true that particular claims to knowledge can easily be mistaken, but claims to ignorance are far more robust—and their truth undermines solipsism. If there are things I don’t know, then there are things other than me—things outside my current state of consciousness. The world cannot be the totality of facts of consciousness.  [1] It is my ignorance of reality that gives the lie to solipsism, not my (purported) knowledge of it. Ignorance marks the place where reality diverges from consciousness of it.

 

  [1] I of course mean here facts that are presented to introspective knowledge, such as the fact that it now consciously seems to me that I am seeing something red, not facts about consciousness, such as the fact that it was brought into existence by a deity or sprang from nowhere. The solipsist’s view is that there are no facts save those that populate the field of consciousness, and this is what is incompatible with the existence of ignorance.


How Things Really Look

 

I wish to defend the legitimacy of a metaphysical concept that I have not seen discussed before: the concept of what might be called objective appearance. We are familiar with the concept of subjective appearance—the ways things appear to specific organisms with specific sensory faculties. These vary from case to case and may involve distortions, errors, and biases; perceptual illusions fall into the class of subjective appearances. Appearances in this sense are supposed to contrast with objective reality: there is only one objective reality, but there are many ways it can subjectively appear to organisms. Some appearances can be wildly inaccurate; others close to fully veridical. The same physical stimulus can elicit widely divergent types of appearance in different creatures, e.g. bats and humans. There is thus a question as to whether appearance matches reality: does reality appear to us as it really is or only as our contingent sensibility paints it (to use Hume’s metaphor)? This can be a matter of degree: appearances can approximate more or less closely to objective reality. It makes sense to say that one appearance is more veridical than another. But does it make sense to speak of how things objectively appear, i.e. how they appear when perceived as they objectively are? Is there such a thing as ideal appearance—the kind that gets things exactly right? We might picture God as enjoying such appearances: when things appear to God’s mind they appear exactly as they objectively are without any distortion, error, or specific viewpoint. His is a view from nowhere: things appear to God purely in their objective nature. When God sees an object it appears to him as it really is in itself sans any sensory specificity. Thus we arrive at the idea of how things really look: there is the way things look to imperfect terrestrial organisms such as ourselves and the way they look to a being that sees things as they objectively are. 
We can ask what difference there might be between how things actually look and the way they really look (say to a being like God—though we can drop this heuristic). The way an object really looks is the way it looks to a being that sees it purely as it objectively is—the way it looks in its own being, as it were.

            This is a commonsense idea, at least in its origins. If you see something in the dark you can ask what it looks like when properly illuminated. If you see someone in heavy makeup you can wonder what he or she really looks like without makeup. If you are subject to the Müller-Lyer illusion you can form the idea of what the lines would look like to someone not subject to that illusion (whether or not such a being exists). This is the concept of an ideal perceiver, analogous to the concept of an ideal observer in ethics: an ideal perceiver sees things according to how they really look, not how they happen to look in particular circumstances. So there are subjective appearances of the kind ordinary perceivers experience and there are appearances of the kind that an ideal perceiver would experience. It is true that things really look a certain way to given perceivers even when the appearance is subjective, but there is also the notion of how things look to perceivers that see them as they really are.      [1] There is how something looks to me now and there is how it really looks when we exclude all subjective intrusions. This gives rise to philosophical questions such as: “How would the world look if it were seen as it really is?” That is, how would an idealized perceiver experience the world? Would such a perceiver see the world as colored, as Euclidean, as containing discrete objects? How would an ideal perceiver see space? How would such a perceiver see the physical world as described by Einstein? How would the quantum world look? How does the world really look when all subjectivity is subtracted, leaving only the naked object? We even have questions like, “What would people look like if we could see into their souls, à la Dorian Gray?” That is, we have the idea of how things would look if we could see them just as they are in themselves. 
Maybe we never have such experiences, but we can conceive of them—we can apply our concept of vision in such a way as to allow for their possibility. We can conceive, that is, of objective visual appearances: how things would look if the mask were removed, so to speak.      [2]

            We can even ask this kind of question about colors: what does red really look like when you remove its specific appearance to humans? Maybe it looks the way it looks when the perceiver has taken LSD—brighter, deeper, sharper, and more resonant. We ordinarily see colors through our limited visual system (rods and cones, the optic nerve, the occipital cortex), but maybe other perceivers would see them differently, more accurately. An ideal perceiver of color confronted with our perception of color might assure us that there is a lot more to red than we think given our limited perspective on it. This would be the analogue of a dog telling us that there is a lot more to the scent of freshly mown grass than we suppose. The way things really smell far exceeds our contingent olfactory resources, as the way red really looks transcends our impoverished sense of sight. Similarly, the way the world of physical objects really looks is far removed from the way it looks to us, given the truth of relativity and quantum theory. If you could see the world as it really is, you might see far more dimensions to space—that’s what the world really looks like if you have the sense to see it right. There is the way pond water looks to the human eye and there is the way it really looks when you have eyes that can see the microorganisms swimming in it. If there were people with eyes this acute, they would assure us that the way we see things is not the way they really look—any more than actors on a movie screen really look the way they do when so presented.

            The metaphysical point of all this is that we need to replace the appearance-reality distinction with a threefold distinction between subjective appearance, objective appearance, and reality. Subjective appearances are not the only kind; we need the category of objective appearances, whether there are any such items in actual reality or not. But we also need to distinguish objective appearances from reality itself: although objective appearances represent nothing but reality, they are not the same thing as reality. Objects and their properties are never the same as conscious representations of them: here as elsewhere we need to respect the act-object distinction. So we need a robust division between these three items; total reality has a place for all three, none reducible to the others. Centrally, we need to acknowledge a double level of appearance—actual and ideal, subjective and objective, relative and absolute. This bears on the question of idealism: is reality to be identified with subjective appearance or objective appearance? In effect, Berkeley’s idealism is of the latter kind, since reality for him consists in ideas in the mind of God, which are not to be regarded as in any way biased, erroneous, limited, or subjectively tinged. Or it might be maintained that reality consists in a realm of ideal appearances in the mind of no actual conscious being but existing as potentialities (a kind of Platonic idealism). Reality, according to this conception of idealism, isn’t a motley collection of all the subjective appearances enjoyed by actual biological creatures but a far more streamlined and unified set of ideal appearances. We might call this “ideal idealism” or “objective idealism”. It certainly has advantages over the subjective type of idealism, which is wide open to charges of excessive plurality or stipulated favoritism. In any case, we have a new metaphysical option once we accept the category of objective appearances. 
We also have a new imaginative option: we can try to imagine how the world would appear if it appeared as it is really constituted. We already do this, to some extent, but we could adopt it as an intellectual project: don’t just tell us how things really are; tell us how they would look if we could see them as they really are. We could call this “real phenomenology”: the phenomenology of reality as such—how reality would present itself to an ideal consciousness. This is the study of objective appearance: not a form of empirical psychology but a philosophical study of a certain ideal subject matter. It is an investigation of how things really look—rarefied, no doubt, difficult, certainly, but not outside the realm of possibility.      [3] I would like to read a work of physical phenomenology that describes what electrons look like, or fields of force, or curved space-time: that is, what their objective appearance consists in under idealized conditions. This would enable me to link these things with the world I normally perceive—or explain what kind of alien sense perception would be necessary to perceive them. Think of it as a kind of conceptual empiricism: finding a link between reality and sense perception—though this is not empiricism of the classical type. Some enterprising theorist might even try to resurrect the empiricist view of concepts by proposing that concepts are equivalent to ideal sensory appearances, not actual ones. Like possible worlds, objective appearances give us new theoretical entities to play with, opening up new theoretical options. We might even manage to elicit some of those “incredulous stares” of which David Lewis was so fond (I prefer to call them “stupefied frowns”). Do we dare to quantify over these entities? 
Sure, go right ahead and quantify over them, there’s no harm in that, quantification being an enjoyable diversion; but more seriously we should suppose that reality contains not just objective things and their subjective appearances to sentient beings but also objective appearances that are written into the very nature of things. Reality itself consists of how things are and how things (ideally) look. Even before perceiving beings came along things looked a certain way—ideally, objectively. Eyes just picked up on that fact. Reality can’t avoid appearing a certain way, even if there is no one to appear to. If that sounds paradoxical, consider the fact that square things have a square-like appearance no matter how, or whether, actual perceivers perceive them. That is just how the concept of appearance works.      [4]

 

      [1] Color blindness provides a good example: we say that the color blind fail to see things as they really appear, since they really appear to have colors. Of course, compared to other perceivers normal human color vision might be similarly limited with respect to how things really appear. We might not be sensitive to the full appearance of things (obviously that is true of the reality of things).

      [2] When things look blurry that is not part of how things really look, since things in themselves are not (generally) blurry, this being a feature of a specific sensory apparatus. Objective appearances would not be blurry appearances.

      [3] I don’t rule out the possibility that we can’t know what certain things look like, even though they do look a certain way; we might be imaginatively limited in this regard. Still, we can make a concerted effort to come to know how things look to ideal perceivers.

      [4] I am well aware that this is an especially thorny area, conceptually speaking. We need to recognize and then hang onto certain basic distinctions, and fight against the propensity of language to confuse and bamboozle us. The concept of appearance is by no means simple and straightforward.


Papers and Posts

Let me clarify something: the essays I post here are not really intended as conventional blog posts. They are papers I write as part of my ongoing research. I publish them here because it is easier and quicker than going through the usual publication channels, which for various reasons is not feasible for me now. I publish them for serious philosophical readers across the world and for posterity. I am not interested in commenting on “the profession”, save glancingly. 


Pathological Belief

 

 

There is something funny about belief: belief isn’t quite right in the head. The human belief system leaves a lot to be desired. Philosophers have been onto this for a while, noting the peculiarities of belief. Early on it was noticed that belief reports are referentially opaque: you can’t substitute co-denoting terms and be guaranteed to preserve truth-value. Someone can believe that Hesperus is the moon’s best friend and not believe that Phosphorus is. So belief reports don’t have the logical form of a predicate applied to a subject; they are logically anomalous. This discovery provoked a lot of handwringing and even skepticism regarding the notion of belief. Perhaps there is no such thing as belief—that supposition would certainly remove the logical puzzlement belief occasions. And when are two beliefs the same? What makes one belief differ from another? Criteria of identity are sorely lacking. A properly scientific psychology might wish to eschew or otherwise scorn this element of folk psychology (the folk are a primitive and superstitious crowd). We don’t even know whether beliefs are “in the head”, and if they are not their causal powers look distinctly iffy. Then Kripke delivered his puzzle: belief is not only logically problematic; it is positively paradoxical. Pierre believes both that London is pretty and that London is not pretty—and yet he is a perfectly reasonable man. It is belief that is at fault in allowing such contradictory beliefs, not our friend Pierre (he reasons impeccably). If contradiction is not grounds for banishment, then what is? Perhaps we should simply stop believing things, since belief is so fraught with logical and conceptual problems. We have stopped believing in specific propositions as human thought has progressed; maybe we should stop believing altogether. Why court paradox and conceptual incoherence? Belief just isn’t a very wholesome commodity, logically speaking; we would be better off without it. 
True, we would then have no means of assenting to a proposition, but that is a mixed blessing at best. Animals seem to get on quite well without full-blown belief (except those similarly afflicted), so maybe we should take a leaf out of their book. It is human belief that is problematic; other animals have different ways of negotiating the world (without indulging in referential opacity and contradiction-generating assent behavior). Time to refashion the human cognitive system and let belief quietly expire.

            That utopian hope is reinforced by a feature of belief that is less well explored by analytical philosophers, namely its irrationality. Not only is belief opaque and paradoxical; it is prone to the worst excesses of irrationality. People believe the strangest things on the slenderest of grounds: they positively leap at belief without pondering its reasonableness or possibly errant causes. This wouldn’t matter so much if it weren’t for another feature of belief—its connection to action. People act on these wacky beliefs: hard to believe, I know, given their wonky foundations, but lamentably true. Just consider the out-there ideologies that have permeated human culture and the horrific actions they have prompted. History would not be the same if belief were more responsible and controlled. Irrational belief is the cause of most of the atrocities that have marred human history. It is our capacity to believe crazy things (inter alia) that has led to massacres, pogroms, prejudice, religious wars, genocide, and all the other grotesqueries that bring such shame on the human race. Granted, we have some pretty nasty emotions too, and plenty of evil intentions, but it is our ability to believe garbage that really sets us splendidly apart. Our belief system is sorely lacking in proper regulation and rational self-criticism: people will believe anything if you say it enough times, and if it suits them so to believe. Belief is just too malleable, easily manipulated, prone to fantasy, emotion-driven, and just plain bonkers.  [1] It is a biological adaptation riddled with design flaws, faulty wiring, and damaging malfunction—a real lemon. It’s a wonder natural selection let it pass at all! It should have been eliminated long ago—and maybe it will be in due course.

            To get a sense of belief’s failings, imagine if its proneness to error resulted in something like visual illusion or hallucination: whenever you have a false belief about something it looks to you as if reality is that way. You believe that someone is an animal or a devil and lo and behold that’s what they look like—fur, four legs, no clothes, or horns, hooves, a demonic countenance. That is, your belief system intrudes on your visual system so as to make things appear as they are believed to be. This would result in massive visual illusion, a malfunctioning perceptual system, and a potential for accidents on a grand scale. Suppose you believe in ghosts: then ghosts would appear before you all night long. Or you falsely believe your husband is unfaithful and are promptly visited by vivid scenes of marital infidelity. Surely you would want to consult a doctor and get your eyes examined. But in the case of false belief we have a similar level of delusion that fortunately doesn’t commandeer our senses. Still, we might want to consult a belief specialist who can rid us of these wild suppositions and preposterous opinions. The whole problem is that people find their beliefs perfectly reasonable just because they have them, no matter how groundless and absurd they may be. The illusory nature of belief is not written on its surface, so falsehood can survive undetected and uncorrected. This is a dangerous way for a belief system to be. It leads to belief perseverance that is very difficult to curb.

            The problem, evidently, is that beliefs are just too unencapsulated (in roughly Jerry Fodor’s sense): they are far too prone to elicitation by factors quite irrelevant to their truth. Notoriously, beliefs are influenced by wishes and desires: people have a tendency to believe what they want to believe. This is a disastrous way to build a belief system—the very antithesis of what belief production ought to be. Why on earth did the genes ever construct brains that have this grievous flaw? Surely a minimal requirement on a good belief system is that it should not allow desires to influence the course of belief formation—that’s the last thing that should happen! But that is exactly what the human belief system permits with giddy abandon (I see no evidence that animals are prone to such misfiring). Hopeless! Beliefs should only be formed by processes involving strict adherence to rationality, but in fact they come into existence for the strangest of reasons, or for no reason at all. This is a grievous fault in the whole system—like having teeth that break whenever you bite into something nutritious, or a tail that whips you in the face whenever you wag it. I can’t emphasize this point strongly enough: the human belief apparatus is appallingly designed, a complete mess, an utter balls-up. It needs to be totally overhauled, or simply consigned to the rubbish heap. It is true that it is possible by diligent effort and proper training to avoid the worst excesses of this defective contraption, but why should our brains present us with such a daunting task—which most people decline to undertake anyway? Animals don’t need rigorous drilling in critical thought and rational belief formation, so why are we so lacking? If there were a little white pill that could put an end to our chronic doxastic disease, wouldn’t we swallow it without hesitation? 
Surely we want to have healthy beliefs, like healthy teeth, and it is clear enough that our beliefs are all too often rotten misshapen embarrassments. I am not exaggerating: take a look at the average person’s belief system—it’s a complete mess in there (a hot mess, as they say). Who among us is sure that his belief teeth are as sound as they should be? Who can be certain that his desires are not exerting undue influence on his beliefs—after all, nothing in the brain is set up to prevent such a thing from occurring?  [2] We are saddled with a deeply flawed psychological apparatus that we are powerless to regulate with any guarantee of success. What we might call “Descartes’ nightmare” haunts us all: that our much-cherished beliefs are riddled with error and are products of irrational forces. For nothing about belief as it exists in humans can preclude large-scale lapses in veracity—beliefs are just too labile, too susceptible to manipulation. Shakespeare’s Othello can be read as a lamentation over the dire state of the human belief system: the title character has his beliefs manipulated and toyed with by a skilled exploiter of the weaknesses inherent in the human belief system. Othello is not a particularly dull or gullible man, but his beliefs are susceptible to influence from other parts of his psyche that have no place in rational belief formation. He represents us all: we are all the victims of a pathetically vulnerable psychological set-up that leaves us at the mercy of hucksters, tricksters, and our own weaknesses. The entire apparatus needs to be radically redesigned, or removed if the problems are too deep-seated. That was certainly the view of Mr. Spock as he bore witness to the frailty of human belief: he exemplifies the proposition that human belief can only be fixed by excising all emotion—an extreme position, no doubt, but one whose force is not lost on us. 
Humans diverged from other animals psychologically by developing a belief system with no precursor in the animal cognitive system; the result was something with an enormous downside, to put it mildly. Perhaps human language abetted this regrettable development by enabling excessive flexibility in the belief apparatus, in which case language has a lot to answer for. In any case, what we have to live with now is light years away from ideal. We could be forgiven for supposing that human belief is intentionally irrational—and hence intentionally harmful. I repeat: irrational belief is responsible for the worst excesses of human history. Just consider the ill effects of the belief that the white European races are naturally superior to all other races. Case closed. This is all possible only because belief in humans is so prone to error (motivated error, no doubt). If only we could stop Believing!

            This is why I speak of pathological belief: the problem lies in the nature of belief itself, or at least in the way that belief is embedded in the human psyche. It badly needs to be encapsulated, i.e. insulated from outside interference from other parts of the psyche (it needs to be more modular). We could simply cut the fibers linking the belief centers of the brain to the emotion centers (Chief Science Officer Spock would favor simply removing the emotion centers altogether), but one imagines ethical and other footling objections to such an evidently sound plan. Short of that I can only urge greater awareness of the architectural catastrophe that is human belief. We should regard our beliefs with extreme caution, as if they were dangerous animals, being conscious of their deceptive and credulous tendencies: they love to do stupid things and then conceal the fact under a mantle of apparent rectitude. They are not our friends; we should not trust them; we should question them at every turn. Wasn’t that Plato’s main message and Socrates’s constant plaint? We should regard our beliefs as potentially dangerous viruses, not as cuddly little pets that will never let us down. There is definitely something funny about belief, and it isn’t funny.  [3]

 

  [1] In this respect it resembles fear, which is also highly labile. We easily acquire phobias that are hard to shed. There should be an analogue notion for belief: types of belief that are wildly excessive and out of sync with reality. This seems to be the state of most political belief.

  [2] It would be nice if there were something analogous to homeostasis in relation to belief—a mechanism that would automatically cool beliefs down when they get too hot. As it is we have something like a positive feedback loop, as beliefs feed off each other to create ever more furnace-like conditions.

  [3] I am hoping that my rhetorical excesses here will be forgiven: it is hard not to get worked up about the perils of belief when one surveys the course of human history (including today). People are just far too in love with their beliefs.

Why Does Consciousness Exist?

I mean this question to be a question of biology: what adaptive purpose was served by the evolution of consciousness? Consciousness, like other biological traits, evolved because it contributed to the survival of the organisms that possess it, so there must be an answer to the question of what its survival value is. That is, consciousness has various distinctive properties and among them are properties that aid the survival of the organism (ultimately the genes): what are these properties? We might mention subjectivity and privacy—how do these contribute to survival? No answer suggests itself: would consciousness be less adaptive if it were objective and public? Why does it help a conscious state to perform a vital function that it can’t be known by others or can only be grasped from a particular point of view? So there must be another property or set of properties (perhaps less salient) that perform the necessary adaptive work.  There must be something about consciousness that makes it a worthwhile addition to an organism’s survival equipment.

            One feature of consciousness is familiar from the tradition: it is known about in a peculiarly intimate way. Consciousness is both self-intimating and infallibly known: it reliably informs us of what is going on within it, and when we form beliefs about it we are invariably correct.  [1] Let us say that it possesses the property of epistemic privilege—it is closely hooked into our epistemic faculties. For example, if you feel hungry, you know you feel hungry; and if you think you feel hungry, you do feel hungry. As it is sometimes put, consciousness is transparent—it is available to knowledge in a special way. Is this a happy accident of no biological significance or is it something that plays a vital role in the life of the organism? It may seem like a pointless luxury: the organism gets to know about its states of consciousness quickly and easily—nice for the self-centered organism, perhaps, but where is the biological payoff? The organism feels hunger pangs and immediately knows it—hunger qualia convey their existence swiftly to the organism’s cognitive system. It knows it feels hungry, that its consciousness is active in the hunger department: but this is not all it knows—for if it feels hungry, there is a very good chance that it actually needs food. There is a lawful connection between needing food and feeling hungry: the latter strongly indicates the former. So the organism knows it needs food by knowing that it feels hungry. Knowing this it sets about getting food (other things being equal). It’s obviously good for the organism to know it needs food when it does, and the conscious state of hunger clues the organism in to when its food resources are depleted. So there is a two-stage process here: lack of food triggers conscious feelings of hunger, and feelings of hunger trigger knowledge of lack of food via knowledge of the accompanying feelings.
If you were designing a functioning organism that is constantly faced with food shortages, it would be sensible to build in a mechanism that generates the necessary knowledge in a reliable manner. The feeling itself does not have the same functional characteristics as the knowledge that results from it, so the knowledge adds something to the organism’s biological resources. Given that such knowledge is desirable, and given that consciousness plays a role in producing it, we begin to see why consciousness might perform a useful biological function. It helps in the task of preserving the organism by informing it of what is going on in its body in respect of food. Obviously the same story could be told about feelings of thirst and dehydration, or about feelings of pain and bodily damage. These kinds of consciousness play the role of somatic monitors, transmitting information to the cognitive centers of the organism. We can see why it would be a good thing to possess the trait in question: knowledge of possibly life-threatening states of the body is clearly useful knowledge to possess. You would want to know when you are about to starve to death or are dying of thirst or are being bitten by a tiger. Here consciousness functions as the body’s guardian and protector—and the body is all the organism has as a vehicle of survival and reproduction.

            But what should we say about other kinds of consciousness, particularly sensory consciousness? These convey information about things other than the body, being directed to the organism’s environment: how can this be explained in terms of bodily preservation via knowledge of the body’s internal states? It can’t, not directly anyway. But the theory we are considering (the “somatic monitor” theory) is not without resources in replying to this natural question. An extreme view, not unheard of, is that perception never acquaints us with external objects; it is only ever directed towards the inner states of the organism. Thus every conscious perceptual act really conveys information about the body, given that there are bodily correlates to every such act. The organism lives in its own world and its sole concern is what is going on inside it. The only thing organisms ever really know about, then, is their own body—and that is no hindrance to survival (why be distracted by what goes on outside?). A second view, less radical and more plausible, is that all perception of the external environment is at the same time self-perception. This is evidently true for touch: when you touch an object you also sense your own body—touch informs you of external objects and of your own physical body (e.g. that your hand is grasping something). Likewise taste and smell bring awareness of bodily sense organs, as when food enters the mouth or aromas enter the nose. Hearing locates sounds in relation to the head and ears—possibly inducing such bodily reactions as blocking the ears from loud noises. The body is never out of the conscious picture, never entirely absent from what consciousness presents to the perceiving organism. Even in the case of vision we see things in relation to the body, this involving awareness of ocular motion, head orientation, and the state of one’s eyelids. I see things in relation to me, i.e. in relation to my body. 
Thus I may protect my body from seen objects that seem to threaten it—I don’t stare at the Sun, for example. So all the senses are bound up with the body and its parlous condition. Consciousness is always telling us about where our body stands and what might endanger its wellbeing. It evolved with this aim in mind—to keep the body in existence. Consciousness is the (mental) organ that enables the organism to manage its body’s affairs in a hostile world. Lastly, we might adopt an extended phenotype conception of the body, including in its extent the external environment: the organism is aware of its extended body when it is aware of what lies yonder, as when a spider is aware of its web or a beaver its dam. These entities need preserving too if the organism is to pass on its genes successfully; functionally, they are part of the organism’s phenotype. The body doesn’t end at the epidermis.

            Accordingly, the model of hunger and thirst is not wide of the mark generally: these are the primordial forms that consciousness took back in evolutionary history, ultimately in creatures of the deep. Consciousness evolved in fish so as to keep the organism informed about its bodily state, both its subcutaneous organs and its surface features (fins, eyes, gills). It is ideally suited to this job because it has the property of epistemic privilege: it is exceptionally, indeed voluptuously, well known to its possessor. And conscious states are lawfully correlated with bodily conditions, thus yielding their existence and status to the organism’s epistemic faculties.  [2] In consequence the organism knows what is going on within its body, this enabling it to act so as to preserve that body. It knows when its body needs food and water and when it is being dangerously impinged upon. If consciousness were not so closely connected to the epistemic faculties (and we mustn’t be too intellectualist about this), it would not have evolved: for the theory is precisely that consciousness evolved because of its epistemic privileges, combined with the ability of conscious states to indicate the condition of the body. This is the property of consciousness that explains its evolutionary emergence—its ability to pass the test of natural selection by reliably transmitting information about the body to the organism’s cognitive centers.  Consciousness is like a messenger whose message we cannot miss or misunderstand, and whose central subject of communication is news about the body and its perils. It’s what the genes hit upon as a method of keeping track of what is going on internally. Consciousness is part of a biological mechanism designed to enable the organism to manage its body’s survival needs—for example, by monitoring its nutritive state. 
It is as if the genes decided to solve the problem of monitoring food intake by inventing the sensation of hunger, knowing that this is not likely to be missed by the hard-pressed organism (unlike, say, its actual physiological state of tissue nutrition). It’s a quick and easy way to keep informed about how your body is doing at any given time. It’s not the only logically conceivable way this vital task could be performed, as other biological adaptations are also not logically unique (birds could in principle have flown like helicopters or missiles); but it is the method that arose in the cut and thrust of practical biological evolution—the solution that cropped up at the time. Before that organisms did without consciousness, relying upon more mechanical methods of detecting and regulating their bodily requirements. But consciousness catapulted organisms to a new level of self-monitoring expertise, exploiting the epistemic privilege property of consciousness. This remarkable property is what gave it the advantage compared to non-conscious ways of ensuring that the body has what it needs to survive. So consciousness is really all about the body from an evolutionary point of view, a device for keeping the body safe and well and primed to reproduce. If there were no vulnerable biological bodies to take care of, consciousness would be unnecessary. Consciousness is one of the ways that evolution discovered to keep the body in passable shape; many organisms do without it entirely and get on just fine (consider bacteria or jellyfish).

            A nice thing about this theory is that it brings consciousness resoundingly down to earth: from an evolutionary perspective, it is just another mechanism for ensuring the wellbeing of the body—a piece of machinery for keeping the body well regulated and free of damage. This is a good way to think about all biological adaptations—their raison d’être is rigorously utilitarian. No trait takes root and is passed on unless it passes the stern test of natural selection—certainly not a trait as widespread as consciousness (sentience, awareness). It is truly biological. What exists within us, making us the thinking and feeling beings we are, is ultimately the result of a blind process that selects for bodily continuity. Preserving the body is what it’s all about, and consciousness does its bit in helping that to happen. It is body-centered, body-obsessed—just like the lungs, the heart, and the kidneys. Its interest in the world beyond the body is distinctly marginal, distinctly derivative. The external world matters only because it impinges on the body—the place where the genes live and have their being. Ultimately, indeed, the focus of the evolutionary process is on the reproductive organs of the body, these being the conduit through which the genes are passed on to future generations. We might even be so bold as to suggest that consciousness is genital-centric, since the genitals are the part of the body whose health is most vital to gene propagation. All the other organs of the body are dedicated to making sure that the reproductive organs get to perform their solemn duty at the highest possible level. If consciousness has anything to do with the soul, that is strictly a superfluous accretion, not part of the basic biological story—which begins with the fish, the insect, and the reptile.
Consciousness is fundamentally all about ensuring that the body makes it through another day; and it achieves this aim by deploying its uncanny ability to be known with special intensity.  [3]

 

  [1] Of course, some people dispute this, and some qualifications need to be appended, but I take it the basic idea is not in serious doubt.

  [2] Proprioception is the prime example of this: the organism knows from the inside what the disposition of its body is. I wouldn’t be surprised if this was the very first sense to evolve, vital as it is.

  [3] I could characterize this essay as Descartes meets Darwin: Descartes stressed the epistemic uniqueness of consciousness—its connection to certainty—while Darwin insisted that everything in the biological world is a product of survival-driven evolution. My proposal is that Cartesian certainty regarding consciousness plays a role in enabling the organism to keep its body in good physiological shape. In one good sense this is a type of naturalism about consciousness. Certainly we need to pay special attention to forms of consciousness other than the human if we are to understand its biological roots. Of course, nothing in what I say is intended to rule out all sorts of elaborations and convolutions in the human case.

Mind and Behavior

Philosophical behaviorism has a curious reputation: on the one hand, it can seem eminently reasonable, on the other, completely wrong. Thus it is natural to be uncomfortably ambivalent about it, veering from acceptance to rejection as the mood strikes. On the side of rejection we have the inverted spectrum, behaving zombies, paralyzed conscious subjects, and basic repugnance at the idea that phenomenology could be reducible to bodily movement. On the side of acceptance we have clear connections between mind and behavior, the problem of other minds, the functional necessity for bodily deeds, and the evident plausibility of functional definitions. Behaviorism doesn’t strike us as just completely wrongheaded: mind and behavior are intimately related. Watson, Carnap, Ryle and Wittgenstein don’t seem to be barking preposterously up the wrong tree. Isn’t it simply true that pain is a state that mediates between harmful stimuli and adaptive responses? Isn’t belief precisely a state that is typically caused by perceptual stimuli and leads to utterance and other action? Isn’t desire what inclines organisms to behave in certain ways, as when an organism drinks when thirsty? Isn’t bravery a trait that leads to courageous action? Maybe it is true that we can envisage color inversion combined with behavioral equivalence, but surely it is not true that mental states can be freely combined with just any behavioral profile—you can’t have pains that are functionally equivalent to beliefs, or beliefs that function just like desires. It seems to be part of the essence of a given mental state that it operates in a certain way; the connection is not just contingent and adventitious. Mental states can’t just swap functional roles ad libitum. Mind and body are not stuck arbitrarily together; they are made for each other. Isn’t behavior the point of having a mind? Isn’t behavior how we know what someone else thinks and feels?

            So we have conflicting intuitions: viewed from the inside, it can seem that behavior is just a dispensable extra; viewed from the outside, behavior looks like the whole story. We seem to be forced into accepting either that phenomenology and behavior are completely separable or that the former reduces to the latter. Neither alternative is attractive. Both sides in the debate seem to have a point, but pain (say) can’t be both behavioral and non-behavioral—behaviorism can’t be both true and false! Or can it? Aren’t we making an assumption here, namely that the ontology of the mental is essentially simple? That is, we are assuming that pain is a simple property, a one-dimensional state, a unitary phenomenon. But what about the idea that pain is actually a composite state, a combination of two (or more) components? What if mental states have a dual nature? They are combinations of a felt quality (a phenomenology) and a behavioral disposition (a bodily expression). When we respond to the first component we see the point of asserting the non-behavioral character of the mind; when we focus on the second component we respect the evident link to bodily behavior. Pain really is behavioral—partially; and it is also non-behavioral—partially. Pain isn’t a simple one-dimensional affair; it is a kind of compound or assembly. Compare meaning: we have grown accustomed to thinking of meaning multi-dimensionally (sense, reference, force, tone); and now we are pondering whether the same might be true of the mind generally. Is the ontology of the mind inherently plural, composite, and componential? Every mental state is really a complex of constitutive elements—a construction from distinct components.  [1] To put it simply, pain is made of a phenomenological component and a behavioral component. Thus we are given space to accept that mental states are partially behavioral: behaviorism is partially true. It is completely true of part of the mind.
But the part it is true of doesn’t exhaust the whole nature of the mind; there is a part of the mind that is not behaviorally definable. So philosophical behaviorism is both true and false—but not of the same simple unanalyzable property. It is true of one component of the mind, but false of another component. The components can be pulled apart, at least to some extent–as with the inverted spectrum, zombies, and paralysis–but that only shows how compound mental states are: it doesn’t show that the mind is wholly non-behavioral. This is why these kinds of thought experiment affect us strangely: we are invited to detach a component from our ordinary concept while leaving the other component intact, but we find ourselves unsure whether we have enough left to ground that concept–hence the ambivalence. It’s like detaching reference and leaving sense, or detaching sense and leaving reference: sure, something semantic is left, but it seems to fall short of the genuine article—as if meaning has been dismantled and dismembered. The solution to these quandaries is to recognize that mental states are a congeries, a juxtaposition of elements, a duality. Accordingly, it is possible to be a behaviorist about one aspect of the mind—that is, to accept that an aspect of the mind is essentially and intrinsically bound up with behavior. Pain really is (in part) a disposition or tendency to respond in a certain way (viz. avoidance) to harmful stimuli, as belief really is a state that prompts certain kinds of behavior. Or perhaps we do better to say that functionalism is partly true—allowing that functionalism improves on classical behaviorism in familiar ways. Mental states essentially interact with other mental states in concert with external inputs to generate behavioral outputs. But they also have qualities that transcend such functional features, being hybrid entities. There is nothing materialist about this conception of the mind; there is no such metaphysical agenda. 
We are simply seeking descriptive adequacy. We are trying to do justice to the range of intuitions that cluster around this topic. We are explaining how it is possible to be a card-carrying behaviorist (functionalist) without eliminating the essential nature of the mind. The mind is a phenomenological-behavioral compound. The mistake was to presuppose an ontology of simple properties capable of only a single analysis—the analogue of pre-Fregean views of meaning. We need to acknowledge a more fine-grained and variegated ontological structure to the mind.  [2]

            Having distinguished the two components of mental states we can ask which component preponderates in a given case. It seems intuitively correct to report that sensations have a larger phenomenological component than beliefs (and certainly traits of character): thus we can envisage inverted spectrum cases with relative ease, but we can’t easily envisage exchanging beliefs and preserving functional role. So we should leave open the possibility that some types of mental state are more behavioral than others: some are more a matter of qualia (e.g. color sensations) and some more a matter of abilities to act, dispositions to behave, and competences to perform (e.g. belief, linguistic understanding, and character traits). You can in principle be very behaviorist about some things and only slightly behaviorist about others, according to the magnitude of the behavioral component. Why there should be such variations of magnitude is no doubt an interesting question, and one that could profitably engage the attentions of a researcher who has seen the merits of the dual component conception. I rather think it has to do with the fact that experience is low on the behavioral component but knowledge is high on it: in knowing the deed dominates, but in experiencing the feeling does. In the beginning was the deed, as some like to say, but the deed is only part of the story; the feeling is also a leading character, sometimes eclipsing the deed. At any rate, the mind is a feeling-deed combo.  [3]

 

  [1] I leave aside the question of whether there is a third component corresponding to the neural correlates of the mental state, but this possibility is certainly worth exploring: see my “A Triple Aspect Theory”.

  [2] Motto: things are often more complicated than we initially suppose (duh).

  [3] As to the problem of other minds, we can venture the following: given that mental states are partly constituted by behavioral facts, we are in a good position to know that other people (and animals) have part of a mental state—for example, we can know that an organism has one component of pain (the part that consists in behavioral facts). Thus we have partial knowledge of other minds even if we don’t have full knowledge. This might explain our sense that the mind is not quite as elusive as some philosophers suppose—those that identify the mind exclusively with the inner phenomenological component. There is something to Wittgenstein’s insistence that the mind is visible in behavior, even though it is not plausible to think that all of mind is so visible. The mind is not entirely private, but it is not entirely public either: it has a foot in both camps. The concept of mind is the concept of a pairing of elements, neither exclusively inner nor exclusively outer. By rough analogy, it is like the concept of knowledge—a pairing of both belief and truth; or wide content and narrow content, or character and content, or connotation and denotation—all cases of duality within apparent unity. 

Religion as Science

I wish to put forward an unfashionable and provokingly simple point of view: religion is just outmoded science. Religion is the science of an earlier age, yet still clinging on in some places. I don’t just mean the cosmological parts; I also include ethics. That is, religion consists of natural science and moral science—a set of theories about the natural world and a theory of morality. According to religious science, God is the creator of the natural world and he provides the foundation of morality (as in divine command theory). He created the universe of stars and planets as well as creating all the animals on earth; he also brought right and wrong into existence. The moral laws are God’s laws, as are the natural laws. Above all, he designed and created human beings, with their characteristic nature and moral sense. In the early days it was supposed that many gods ruled the universe, each responsible for a certain part of it (the sea, the wind, love, etc.). The essential point here is that these were postulates offered to answer explanatory questions. At some point humans evolved the ability to ask questions (other animals don’t seem to do this)—of each other and of nature. We asked why the sun rises, what it is, where animals come from, etc. The realm of gods, spirits, angels, demons, and so on, was the human attempt to answer such explanatory questions. It wasn’t such a bad attempt: it provided some explanatory insight into things, a semblance of understanding. The attempt went beyond immediate observation, postulating entities (“theoretical” entities) that might be causes of what we observe. Likewise, religious ethics provided a foundation for moral truth: right and wrong are seen as divine edicts. Prior to these postulations, in the dim dark prehistory of human existence, before the ability to ask questions had evolved, these theories had occurred to no one.
But once this way of thinking came into existence it took hold of the mind of man: it was the best science available at the time. It was taught and inculcated; it became orthodox. A profession of experts (“priests”) grew up to promulgate this scientific worldview: they became the authorities on the science of the day. Their methods could be obscure, but they were the only show in town—and they had a coherent story to tell (with some fancy vocabulary thrown in). They were humanity’s first theoreticians. 

            What happened later is that a new approach to science was developed, emphasizing systematic observation and experiment. This superseded the old science. It was still trying to answer explanatory questions—the hallmark of science—but it employed a new method. The priest scientists began to seem shabby by comparison, their theories weak and unfounded. Their science just wasn’t very good.  [1] Socrates exploded their view of morality (in the Euthyphro argument), and a succession of natural philosophers undermined their cosmological opinions. Their centers of learning began to seem like centers of ignorance, and new institutions called universities began to appear. There had been a revolution in science—a massive paradigm shift—away from religious conceptions and towards modern secular science. The science accepted for thousands of years gave way to a new science using a new method—though still cleaving to the questions-and-answers model. It wasn’t that science (“natural philosophy”) began with the Enlightenment—it had been around for millennia—but it made a decisive step forward; and the old science went by the wayside (or is still in the process of going by the wayside). Greek polytheism went by the wayside after a long period of hegemony; now Christian monotheism (and other religions like it) is going by the wayside too.  [2] Their theories were falsified, or at least cast into serious doubt: what once seemed scientifically solid was exposed as so much superstitious thinking. It may have seemed reasonable then, but it was reasonable no longer. The entire religious approach to science had to be discarded, and it largely was, in relatively short order. Copernicus, Galileo, Descartes, Newton, and Darwin—they all showed that the old science was wrong science. It wasn’t doing something else: it was doing the same thing, but not doing it very well.

            The question that interests me in all this is whether the same thing could happen to science as we know it today. Could the general orientation of modern science be fundamentally in error, as old-style religious science was (though not disgracefully so)? Might there come a time when it will be superseded in favor of something better? This might have seemed inconceivable to the old religious scientists, and it no doubt seems hard to believe for us now, but is it a real possibility? Is there a viewpoint from which our current science might seem not only partial but also deeply misguided? Surely the question is not to be dismissed a priori. Ironically, it is the very emphasis on observation that might prove its downfall, at least as a comprehensive account of the universe. That is, it is the empiricism of science that may undermine its ability to give a complete account of things—as it was the supernaturalism of religious science that undermined it. Why do I say that—isn’t the empiricism of modern science its chief engine of success? Yes, but any method, however successful, is apt to have a downside. The obvious point of weakness is that the human senses are partial, finite, biased, and subjective: how can they be the basis of objective knowledge of the whole universe—all of Being? At least under the religious approach the supposed basis of knowledge—divine revelation–is not thus limited, since God is not epistemically bounded. In principle, access to God’s knowledge will give humans complete knowledge of what there is, unlike the puny human senses. In practice, of course, science has moved further and further away from the deliverances of the senses, erecting a magnificent inferential superstructure, which dilutes its vaunted empiricism considerably. Only thus has modern empiricist science succeeded in establishing the system of knowledge that now constitutes it. 
But there is still the possibility that some areas of reality are closed off from this mode of knowledge acquisition, and always will be. The list of such possibilities is long and familiar: the origin of the universe, the origin of life, the quantum world, deep space, the nature of time, consciousness, human freedom, the ultimate nature of matter, etc. These questions have not yet succumbed to the empirical method, the method that defines modern science. Ethics is not susceptible to the empirical method at all.  [3] Our science-forming faculty, as Chomsky styles it, construed as the method of empirical science, seems not cut out for certain problems and questions. Like all methods it has its limitations and blind spots. Maybe the future holds the prospect of a synthesis of human and artificial knowledge systems, which might make our current scientific method look feeble by comparison.  [4] Who knows? Just as modern science superseded religious science, despite centuries of dominance, so some future science might supersede what we have now—admirable as that has been during the relevant phase of human evolution. We need to take a wider perspective, seeing our current science as just one phase of a much longer history. It’s hard to see how our current science could come to be seen as completely wrong, but likewise not everything about religious science was completely wrong—just the main postulates. A lot of good science was done under religious assumptions, and the search for systematic theoretical principles traces back to religious conceptions. We shouldn’t be too caught up in our particular epoch with its relatively local disagreements; from a loftier perspective religious and secular science lie on a single trajectory, with our current methods a possibly replaceable temporary phase. We tend to think that religious science was bad science because it was replaced by secular science, but future science might make our empirical science look bad in retrospect. 
And if we never actually reach that lofty perspective, that won’t necessarily be because it doesn’t exist. If we suppose that science came into existence when the human ability to ask questions evolved, maybe when human language evolved (about 200,000 years ago), beginning with religious science, then we might still be in an early phase of its development, with untold possibilities ahead of us. At any rate, the history of science is best seen as a continuous thread tracing back to supernatural conceptions; the scientific spirit did not suddenly begin in the seventeenth century in Western Europe. That is a parochial position: science is the systematic attempt to understand the universe; its methods are just means to that end. Empirical observation is one small part of this story.  [5]

 

 

  [1] Berkeley is an interesting transitional figure because his cosmology is resolutely theistic yet he incorporates modern science. He doesn’t endorse out-of-date theories of nature but accepts modern ones, placing them in a religious context: his is a form of theism without religious science to encumber it. Yet from a wider point of view his cosmological science is firmly religious: God plays the role of a theoretical entity responsible for all existence. This is theistic science without its usual dogmas.

  [2] I can’t help noticing that polytheism and monotheism have analogues in the natural sciences: from an ontology of several basic elements (earth, fire, water, and air) we move to a monistic ontology comprising only uniform atoms in the void, i.e. matter in general. We go from many gods to a single God, and from many forms of material substance to a single material substance.

  [3] When science adopted an empiricist conception of itself ethics became split off from mainstream science, which it had not been hitherto, thus precipitating all sorts of deformations in moral thought. Before that religious science was not so sharply distinguished from ethics: a natural philosopher would naturally also be a natural ethicist (not a supernatural one).

  [4] The contribution of scientific instruments to scientific knowledge is, of course, massive—the microscope, the telescope—and without this enhancement of the senses modern science would scarcely be possible. The advent of further brain augmentation, possibly by direct implants, is not to be underestimated. Here too we can discern a continuous path in the history of science. The brain awaits the installation of its game-changing inner instruments.

  [5] Rationalism has always opposed the alleged ability of empiricism to provide a complete account of human knowledge, though there doubtless must be an empirical component. The intellect must make a substantial contribution to knowledge (whatever exactly the intellect is).
