Intelligence Assessment

There is something I used to do routinely that I don’t do anymore: assess other people’s intelligence. As a professor, it’s part of the job—forming opinions about other people’s intellectual abilities. Sometimes it seems like the main part of the job. It comes in many forms: grading, admissions, letters of recommendation, job interviews, promotions, tenure decisions, book reviews, refereeing, replies to critics, question periods, random conversations. I did it all the time; it was part of my daily life. Now I don’t do it anymore. And I feel better for it. I don’t think it’s good for a person—all that judging, scrutinizing, criticizing. It’s a lot of responsibility and it casts a pall over the proceedings: fear, suspicion, the constant need to impress. It also inevitably makes you think too much about your own intelligence—am I good enough, do people approve of me? Admittedly, I haven’t done much of that in the last thirty years, but as a young man it was hard to avoid. It’s a relief not to have to bother with it anymore. It isn’t that I don’t evaluate people’s minds anymore (you wouldn’t believe what I actually think of people), but my judgments no longer carry the same moral weight—in the job, they can make or break people. And it’s not easy; it’s dirty work, but some poor sod has to do it. Intelligence is hard to judge, which is why psychologists have such a hard time with the subject. I’ve seen a lot of bad judgment in my time—and confident bad judgment. It creates a nasty atmosphere. A philosophy department is like a cauldron of insecurity, paranoia, and outright terror. No wonder philosophers are so awful! And yet it is unavoidable. Still, it’s good to be aware of it, so that its excesses can be detected and curbed. Curb your lack of enthusiasm. Don’t be overwhelmed by being underwhelmed. Take it easy, for pity’s sake. I have had to do far more criticizing than I would have liked, and I’m glad to be rid of it. It’s not fun, it’s not healthy. Constant criticism is a downer. It would be so much nicer to have nothing but praise for everyone.

I remember, in particular, job interviews. I had a carefully crafted methodology for them, which perplexed my colleagues. I would ask difficult questions, obnoxious questions, and stupid questions—on purpose. The stupid questions were intended to simulate what the interviewee might encounter from students or know-nothings—to see how they would handle that kind of situation. These questions could be very telling. Obnoxious questions were intended to reveal how the candidate would deal with a common type of objector—would they be fazed and flabbergasted, or calm and cool? Of course, my colleagues thought I was being obnoxious just to derail the candidate, rather than to give him or her the chance to shine. But my difficult questions were the most cunning: how would the candidate react to a telling criticism or a deep problem? Here again, the idea was to give the person a chance to show their real philosophical worth, as opposed to trotting out routine answers. When they struggled, intelligently struggled, that was to their credit; they could see the problem and refused to give a facile answer. My colleagues thought I was trying to nail the person, discredit them, eliminate them. The exact opposite was my intention, and again this kind of exchange often brought out the best (or the worst) in them. It’s not simple, this intelligence assessment lark: bland and routine is not going to cut it.

It takes a lot of intelligence to be an intelligence assessor. It takes experience and practice, self-awareness, sympathy, a readiness to use the scalpel or bludgeon. Nor is it easy to hold up under such assessment—I felt for these people, I really did. I was there myself once upon a time, trembling, dry-mouthed, trying to keep calm. It’s the dirty little secret of academic life, seldom spoken of, frequently felt. I think a workshop on intelligence assessment would not be a bad idea, especially for people new to the job.[1]

[1] The other smutty secret is the horror of grading. Is there anything worse than trying to decide between a B and a B minus? Don’t you just hate to compile a grade sheet? It’s bad enough having to say that someone is just not that bright, but to have to put a number on it! The whole system of grades is a table of torture for all concerned.

On the Origin of Heavenly Bodies

The main thesis of Charles Darwin’s On the Origin of Species is that species evolve from other species. They arise from natural variations found within a given species that are selected for or against. They do not arise spontaneously and independently, or by dint of divine or extraterrestrial intervention. One species derives from another species, going back indefinitely (but finitely). The process is mindless and purposeless, a matter of natural law and physical mechanisms (mutation and natural selection). The intermediate forms may be missing, but they once existed; the transition was gradual, not abrupt, and certainly not magical. The existence of a range of animal species is thus scientifically intelligible, not inexplicable. But Darwin does not apply this perspective to the existence of heavenly bodies—planets, moons, stars, meteors, galaxies, galaxy clusters. Asked what their origin is, Darwin ventures no answer, no doubt because not much was known about it in those days. Yet the questions are remarkably similar: what is the origin of these enigmatic entities, these astronomical natural kinds, these celestial species? Do they arise spontaneously and independently of each other, by divine action or an alien super-scientist, or do they derive from earlier entities of the same basic type by intelligible means? The answer, as we now know, is that they arise in the latter way: moons arise (or can arise) from planets, planets arise out of free-floating debris from stars and other objects, stars arise from condensed dust clouds, galaxies arise from stars, galaxy clusters arise from galaxies. The mechanism is the law of gravity, condensing bits of matter into other bits of matter, sometimes producing extremely hot and dense objects. These things are dependent for their existence on other similar things and develop from them over time; they were not created separately. For example, the earth was caused to exist by antecedent chunks of matter swirling around in space, not by God ab initio. No intelligence went into its creation, as no intelligence went into the creation of animal species. It would therefore be possible to write a book about heavenly bodies just like Darwin’s book about animal species; indeed, a single book could naturally cover both topics. The questions and answers are much the same (though clearly not identical). The book could be called On the Origin of Natural Kinds, and it could include animals and plants, stars and planets, rocks and minerals, chemical elements and geological formations. It would be a book of cosmic history, its main thesis being that stuff comes from other stuff of the same basic kind by explicable natural processes. The universe obeys a law of homogeneous gestation—like from like, going back in time to some primordial event (the big bang, the origin of all life). We might even say that Darwin discovered this mode of explanation, applicable to a wide variety of natural objects, though he didn’t himself extend it beyond the biological sphere. He worked out the Special Theory of Generativity, applying it to the specific case of life forms; but he didn’t propose a General Theory of Generativity, applying it to the universe as a whole. But he could have, he could have. He could have offered a theory of all species (kinds, sorts): animal, celestial, geological, chemical, physical, even mental—every existing natural kind. Each of these would no doubt introduce different mechanisms of progression (e.g., nuclear fusion), but the general form of the theory would be common to multiple areas. Then he would truly be a towering figure in the history of science—he would have explained the lot.

Or would he? For surely there are some existing entities that are not explicable in the manner suggested—those that are created by intelligent minds. These include works of art, items of technology, buildings, and social systems. In each of these areas there is an indispensable role for intentional creation by intelligent agents—artists, scientists, architects, political theorists. In these cases, it would be quite wrong to postulate mindless creation—we can all see that such entities are brought into being with the aid of an intelligent mind. Here we all believe in “Intelligent Design” and “Creationism”. We all think that the Mona Lisa was created by a certain mind at a certain time—not by mere physical laws (as if the paint just happened to come together in these ways). But this is not necessarily so—the means of gestation might not be so obvious. Imagine a planet on which all manner of artifacts abound but their creators have all disappeared for some reason (a pandemic, say). You might observe these objects and wonder how they came to be (you just beamed down to the planet’s surface). Some of the more rigidly Darwinian members of the landing party might insist that it must be by some sort of non-intelligent process, because no one has ever seen the putative creators; but of course, this is simply because they have all disappeared since they did their creative work. Here the Darwinian style of explanation would be completely mistaken—some watches are made by watchmakers! The correct theory (enunciated by a spindly character named Spock) is precisely that the erstwhile creators have been wiped off the face of the planet without leaving a trace: that is the only logical explanation, given the similarity to our own artifacts. A book called On the Origin of Works of Art contending that all such works result from mere physical laws, with no intelligent creator in sight, would not meet with much acceptance. In point of fact, Darwin’s own title is somewhat misleading, since some species are the result of Intelligent Design—e.g., those dog breeds we see around us all the time. Species differences can and do arise by virtue of choice and forethought; in fact, a whole planet could be so populated. It is just that most species do not actually arise in this way on planet earth. There is nothing necessary or a priori about any of this. It’s just empirical science. Darwin’s theory could have been wrong, but actually it isn’t.

Minds present an interesting case. What is their origin? The answer is that minds also come from other minds, as things actually are, though not comprehensively so. For there is such a thing as learning. Animal minds, like animal bodies, result from mutation and natural selection—the minds that survive are the minds that serve the genes best. The human mind derives from the ape mind, going back to the first minds on Earth; it isn’t as if each species has a mind that exists without reliance on prior minds. The basic structure of the human mind derives from the structure of earlier minds, modified in the usual ways. A book called On the Origin of Minds would be very similar to Darwin’s book: later minds descend from earlier minds; they don’t arise spontaneously and independently of other minds. With this exception: minds can be changed in the course of an individual life by the process we call learning. Not all knowledge arises by genetically copying ancestors’ minds; some of it arises by intentional action, e.g., by the scientific method. But it is equally true that not all aspects of the body derive from earlier bodies, since a given species might have characteristics not found in any earlier species—such as geographical location or freedom from certain diseases. Species don’t always stay in the same place as their progenitors, or always suffer from the same diseases. Not everything about a species reflects its origin in the species it came from; some comes from the currently obtaining environment—like learning. In any case, none of this requires any relaxation of the basic principle that minds (and species) owe their origin to other minds (species) and not to creative acts by supposed deities or super-scientists. Intelligence is not created by intelligence in the style of Creationism, but by the same processes that produce bodies.[1]

The steady state theory of the universe gave way to the dynamic big-bang theory. The immutable species theory gave way to the dynamic evolutionary theory. The universe started life as a cloud of dust and developed into a differentiated assembly of natural kinds of celestial object in the fullness of time; the former seems an unpromising foundation for the latter, but gravitational attraction is a powerful force. Life on earth started as a sea of uniform bacteria and developed into a differentiated assembly of animal species in the fullness of time; the former seems like an unpromising foundation for the latter, but natural selection is a powerful force. The history of planets is written into their structure. The history of species is written into their structure. At no point do we need to introduce a form of guiding intelligence to explain these transitions and end-points. The analogies between the two areas are clear and instructive. Astronomy did what biology had already done: replace a static world-view with a historical one. It is curious that these links are not explicitly drawn: Darwinian biology is a special case of evolutionary cosmology, viewed broadly. Gradual evolution from one thing to another, not sudden creation from nothing—lawfully related causal sequence, not non-natural fixity. Natural history, not supernatural non-history. Darwin in effect anticipated modern cosmology, but didn’t draw the connection. No discredit in that, but historically interesting. He also said nothing about the destiny of species—what their future will be as opposed to their past. But we can easily fill that gap (with due allowance made for the uncertainties of the future): the future will resemble the past, though it will not go on forever. Species will keep arising from other species by the mechanisms that have operated hitherto (God will not suddenly pop into the picture, smiling and winking). Extinctions will continue to happen. We can anticipate that more species will result from intelligent intervention (or stupid intervention), as we seek to improve the human condition—e.g., meaty-tasting plant life. Who knows what will happen with AI. Stars will continue to be minted, then fizzle out and die. Entropy will have its way with the universe. The Sun will eventually grow cold. Life and the physical universe are ever-changing things, not static givens. That is one of the great lessons of Darwin’s great book: nature is not a timeless immutable; things come and go (dinosaurs, stars). The physical universe is more like life than we thought, more changeable, less carved in stone; and life is more like the physical universe than we thought, more mechanical, less anthropic. Their origin stories have much the same plot.[2]

[1] That is not to say that we know how this is done (we don’t). I find it a rather chastening thought that my mind owes its existence to countless ancestor minds, some none too brilliant. My mind genes carry the trace of mind genes stretching back to our aquatic days.

[2] A problem that particularly exercised Darwin is what might be called the “dispersal problem”: if species derive from an ancestor species, how come they are often found at a considerable distance from their origins? The answer, he suggested, is that forces of nature carry organisms far and wide—the wind, tides, birds. A similar problem arises in cosmology: if packets of matter come from other packets of matter, why are they so widely dispersed? Shouldn’t the packets be closer together? The answer is that forces of nature drive them apart, notably the explosive force of the big bang—hence cosmic expansion. The universe is a natural wanderer, as are animals. Things are born from other things, then they wander, then they die—that’s the basic story.

Art Miami

I went to the annual Art Miami festival yesterday with my friend and tennis partner Eddy. The people were as interesting to look at as the art; indeed, they were works of art themselves. Obviously from the art world, they knew how to put on a good sartorial show. What particularly caught my attention were the shoes: virtually everyone was wearing carefully chosen, artistic, expensive shoes. Clearly, arty people pay a lot of attention to their foot clothing, rightly so in my opinion. I myself was wearing impeccable vintage Puma sneakers, precisely because of where I was going (Eddy wasn’t so fastidious). There was a lot of checking each other out—the women were especially aesthetic, self-consciously so. Art begins at home, on the body. These visually gifted people knew that Eddy and I were not of their world—they probably thought, correctly, we were a couple of tennis players. Heaven forbid they thought I looked like a philosopher! Anyway, I got an education in contemporary shoe art—along with the stuff on the walls. It really should be called Shoe Miami.

A New Law of Biology

I believe I have discovered a new law of biology. I call it “the law of differential adaptation”. It is fairly easily derivable from established principles, but I have not seen it enunciated before. It strikes me as illuminating. We begin by making a distinction: between the animate environment and the inanimate environment. A given organism is adapted to both, and there is a history to these adaptations. The animate environment consists of all the impinging life forms that an organism is subject to, particularly predators, rivals, and diseases. The organism has defenses against these life-threatening factors—ways of surviving in their presence. These ways include things like legs, antlers, and immune systems—for running, fighting, and disease avoidance. The inanimate environment consists of all the non-biological factors that govern an organism’s life: space, time, matter, gravity, climate, oceans, mountains, volcanos, rocks. This is the physical environment as opposed to the biological (and psychological) environment. Now, these two environments show a marked difference: one evolves slowly, if at all, while the other evolves quickly (relatively speaking). The physical make-up of the planet is pretty stable over time, give or take the odd ice age, meteor impact, or volcanic eruption. Once a species is adapted to it, its work is done: the environment doesn’t change, so the organism doesn’t need to. When there is a large change, extinction is likely, because the species is not adapted to the new conditions. But generally speaking, the rate of adaptive change is slow and equilibrium is readily achieved (judged by evolutionary time). For all intents and purposes, when a species becomes adapted to space, time, and solid objects it has done all the adapting it needs to do—because these things don’t change. They are physical universals (consider the Sun). But it is different with the biological environment—it keeps changing. It evolves rapidly and unpredictably: the biological world evolves differently from the physical world. It does so because of the action of mutation and natural selection. Consider the arms race between predators and prey: each evolves in response to the other—the faster the prey, the faster the predator needs to be, and vice versa. Thus, there has been a lot of change in the biological world since organic evolution began—species come and go, natural selection never ceases, there is a continual biological interaction spurring change. It should now be clear, then, what the new law says: every organism is a locus of slow and fast adaptation—to the fixed physical environment and the changing biological environment. Evolution is double track—a slow track and a fast track. Some characteristics of the environment have been constant over evolutionary time while others have varied. Adaptation to solidity has remained the same but adaptation to predators and pathogens has varied. There has been trait stability and trait plasticity. Two different evolutionary processes have been at work, according as the organism interacts with a constant physical environment and a constantly changing biological environment. For example, lung design hasn’t changed much because the atmosphere is a physical constant, relatively speaking, but escape strategies have evolved quite a bit in response to predator prowess. Just to be concrete, let’s say the rate of adaptation to these two categories is a hundred times faster for the one than the other. Little evolutionary adaptation to the physical environment and much evolutionary adaptation to the biological environment. Natural selection by the biological environment is a hundred times greater than natural selection by the physical environment. The former triggers a lot of adaptive change, the latter not so much. Hence, differential adaptation.[1]
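The law is quantitative in spirit, so a toy simulation may help fix ideas. The sketch below is only an illustration of the logic, not anything the law itself specifies: the parameter values are invented, and nothing hangs on them. Two traits obey the very same selection rule, each closing a fixed fraction of the gap to its optimum every generation; the sole difference is that the physical optimum stays put while the biological optimum wanders, standing in for evolving predators and pathogens.

```python
import random

# Toy model of differential adaptation (illustrative parameters only).
# One trait tracks a fixed physical optimum; the other tracks a biological
# optimum that keeps moving, a crude stand-in for coevolving enemies.

GENERATIONS = 10_000
SELECTION = 0.05  # fraction of the gap to the optimum closed per generation
DRIFT = 0.5       # maximum movement of the biological optimum per generation

random.seed(1)
phys_opt = 0.0                      # physical environment: a fixed target
bio_opt = 0.0                       # biological environment: a moving target
phys_trait, bio_trait = 10.0, 10.0  # both start equally far from their targets
phys_change = bio_change = 0.0      # cumulative adaptive change on each track

for _ in range(GENERATIONS):
    bio_opt += random.uniform(-DRIFT, DRIFT)    # the enemy evolves
    step = SELECTION * (phys_opt - phys_trait)  # same selection rule...
    phys_trait += step
    phys_change += abs(step)
    step = SELECTION * (bio_opt - bio_trait)    # ...but the target moves
    bio_trait += step
    bio_change += abs(step)

print(f"cumulative change, physical track:   {phys_change:.1f}")
print(f"cumulative change, biological track: {bio_change:.1f}")
```

On a typical run the physical track simply closes its initial gap and then rests, while the biological track keeps accumulating change generation after generation, ending up dozens of times higher; tune DRIFT and the hundred-fold disparity is easy to reproduce. The exact ratio is an artifact of made-up numbers, but the asymmetry is the point: a fixed target is adapted to once, a moving target forever.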

This distinction helps deal with a puzzle: why is it that organisms die a lot from predators and pathogens but not much from purely physical accidents? Have they lagged behind in respect of the former but not the latter, adaptively speaking? How often do fish die from lack of water as opposed to the predatory actions of other organisms? How often do birds die from midair collisions as opposed to wily predators? How often do land-dwelling creatures die from falling off a cliff as opposed to catching a fatal disease? They all seem very good at avoiding physical accidents but not so good at avoiding their biological enemies—rather maladaptive at this. Why can’t they do better? The answer is that the enemy keeps getting one step ahead of them by means of natural biological change, while they struggle to keep up (e.g., viruses). The lion gets faster and more agile just when the antelope begins to outrun it. An animal can be extremely well adapted to avoiding an earlier iteration of a predator, but now it is faced with a new and improved model. It seems as if it has been lazy in the adaptive department, but really it has been working hard to keep up. The appearance of ineptitude is deceptive: actually, the animal is more adapted (fine-tuned) to the biological environment than the physical environment—it is just that the physical environment is relatively static. Adaptations to it are more primitive than in the biological case. The most advanced technology is deployed in the case of adaptation to the biological world; it just looks as if the animal is ineffective in this domain. Thus, animals die of disease more often than from falling over, because the laws of physics stay fixed while microbes keep changing.[2] The physical world is a steady target, but the biological world is a constantly moving target. The pace of evolutionary change would be much less if organisms only had to adapt to the physical environment. But the biological environment is much more dynamic, changeable, challenging to old ways. It is misleading to talk of adaptive change in relation to “the environment” because really the mechanisms at work are quite different for the two cases: there is physically driven adaptation and biologically driven adaptation. The genes for the former have been around a lot longer than the genes for the latter. It is the difference between the tried and true and the continually updated.

Consider the genetic book of the dead, or the phenotypic movie of the past—that is, the records within the organism of its evolutionary history. Some of these recordings depict an unvarying static history but some depict a frantic history full of rapid change. Four limbs have been around forever, but not the peacock’s tail or the owl’s eyesight. Breathing is as old as the hills, but not antler-locking. The movie would be monotonous when it comes to moving on four legs, but it would become action-packed when dealing with predator-prey interactions. Peace is less eventful than arms races. Animals have more to fear from other animals than from rocks and cliffs. The battle for survival is fought more against other animals than chunks of the inanimate world. Thus, there is a deep difference between coexisting with the physical world and coexisting with the biological world—a quantitative difference. The quantity of adaptation to the biological world is much greater (a hundred times greater, let’s suppose) than the quantity of adaptation to the physical world. The biological world acts to accelerate the rate of adaptive change; the physical world tends to produce a constant state of motion (rest). Once the organism is up and running in a physical environment, it doesn’t have any real motive for upgrading its abilities; but an organism that exists in a world of other organisms (predators, rivals, pathogens, parasites) is playing a different kind of game altogether—it has to keep changing or else it is in danger of being done in by other organisms.[3] Hence the law of differential adaptation.

Permit me to talk briefly about the not-so-brief history of the universe. Cosmologists speak of the evolution of the universe, and the word is not out of place. This evolution is slow by any standards, driven by the unvarying laws of physics (mainly gravity). But in one tiny corner of the universe (as far as we know) it has undergone a remarkably rapid evolution—I mean life on earth. Suddenly life began and developed rapidly, by means of mutation and natural selection. New types of entity came into existence overnight (i.e., within several million years). Then evolution began to pick up the pace big-time: living things started to interact with each other in myriad ways. In a fraction of a second whole species evolved, only to lapse as rapidly into extinction. In the blink of an eye dinosaurs came and went. So, there were three stages of cosmic evolution: first, the physical evolution of the universe; then, its evolution into life on earth; and then, the co-evolution of living things. Evolution accelerated over this time period (14 billion years), achieving speed-of-light evolution in the last billion or so years. There have been three epochs of cosmic evolution, two of them concerning life. I have been suggesting that we carve up organic evolution into two periods or types, corresponding to organism-inanimate evolution and organism-animate evolution. So, there ought to be a further law: the law of differential evolution. This law states that the rate of evolution varies according to whether it is purely physical, organic-physical, or organic-organic. Physical evolution is slow, organic-physical evolution is fast, and organic-organic evolution is super-fast. Evolution in general thus falls into three phases that can be usefully distinguished; it is more fine-grained than we might have supposed. There are varieties of evolution. In fact, evolution evolves, in that its nature changes over time. The early post-big-bang phase was relatively primitive and sluggish; the initial life-on-earth phase was more advanced and a lot slicker; the most recent phase really got into its stride and produced cosmic marvels not seen before (lions, humans, etc.). These are the natural kinds of process into which the overall cosmic evolutionary process divides—the evolution of the entire universe, from the very large to the very small. Call it three-fold evolution. We might be in for a fourth phase before too long, as our machines start to evolve on their own account. Then we might get within-a-lifetime revolutions, as machines beget machines.[4]

[1] Consider so-called artificial selection, say of dog breeds. The biological environment of dogs (initially wolves) includes human dog breeders; they have caused an enormous amount of variation in dogs. The changes, genetic and phenotypic, have been extremely rapid. But the changes wrought by the physical environment of dogs have been minimal to non-existent, because that environment hasn’t changed, or only minimally. The difference between the two sorts of adaptation is conspicuous.

[2] Imagine if gravity were to change its force from time to time: animals would have a devil of a time adapting to its fluctuations and would no doubt die from it (and hence not reproduce) with greater frequency than now. That would be the physical equivalent of predator transformation: stronger gravity, faster predator.

[3] If the physical world evolved by something analogous to mutation and natural selection, then the difference would disappear, or be reduced. For then, it would embed a mechanism of change that lifts it above physical law, causing it to change its nature over time (of course, this is physically impossible). The biological world is inherently a lot more changeable than the physical world because of this mechanism.

[4] Obviously, AI will be crucial: it might start designing and making machines and organisms hitherto undreamt of, capable of producing yet other machines and organisms. Then we will need a fourth evolutionary category—true artificial selection (machines being not part of the biological world). The pace of this evolution might be measured in seconds not millennia and decades—as machines turn out new machines at a dizzying rate.

Real Americans

Several years ago, I was driving down I-95 with my wife, Cathy. At some point I found it necessary to change lanes and moved into the outermost lane. This caused an approaching car to slow down a bit—it was moving pretty fast. I then went back to my original lane. Nothing very remarkable—happens all the time. As the car passed me, I glanced over: a man and his wife, white, middle-aged, ordinary, both gave me and my wife the middle finger with looks of sheer hatred on their faces. I had, they thought, “cut them off”. This warranted anger of a high degree—I swear they would have been quite happy to kill us both over what had happened. I looked back at them with puzzlement and disbelief—all this over that. American hysteria, nastiness, violence—perfectly normal for these two proud Americans. I think about this episode often: so extreme, so theatrical, so mindless. A man and his wife consciously and collaboratively decided to be as aggressive as they could be because another car caused them to slow down a bit. My own experiences with Americans (often professors) have been not so far away from that, not to mention the state of American politics. It’s insane, psychopathic, terrifying, disgusting. It’s America.

Freud Generalized

Freud’s psychoanalytic system was built around the idea of sexual repression. Sexual taboos expressed as societal pressures lead to the repression of the sexual instinct, resulting in distinctive psychological consequences. These include: neurosis, sexually charged dreams, dirty jokes, the artistic drive, and a general feeling of malaise. The basic mechanism is the repressive act applied to sexual desires. This leads to sublimation and symbolic release. The libido is distorted and thwarted, affecting mental health. So the story goes. The question I want to ask is whether this theory can be applied to other sorts of instinctive desire—specifically, the desire for food. As things stand, there is not much of a taboo about eating: we can eat what we like when we like with no shame attaching. There are exceptions such as dietary prohibitions of various sorts: kosher food, not eating cows, not eating meat in a vegetarian household. But they are not extreme enough to match the kind of sexual repression Freud was talking about, so I will need to invent a thought experiment. This is not difficult: picture a civilization that enforces many kinds of food prohibition, with shame and punishment as deterrents to indulging one’s natural food preferences. Suppose hot food is prohibited, perhaps for religious reasons, along with apples, oranges, and bananas. No butter on bread, nothing spicy. People desire these things, but it is deemed sinful to even think about them or talk about them. Children are brought up to feel shame about such desires and are punished for indulging them (buttered bread is deemed particularly heinous, the sign of a corrupt soul). Religion gets in on the act, predicting hell for violators. It’s a heavy trip, man. Meanwhile, we can suppose, sex gets a free pass: here you can do whatever you want—promiscuity, masturbation, even a bit of incest. Anything goes—you are actually admired for your sexual “impurity”. There is no sexual repression at all. According to Freudian theory, none of the consequences of sexual repression will apply in this society: no sexual neurosis, no sex dreams, no dirty jokes, no artistic sublimation, lots of erotic happiness. However, again according to Freud, food consumption will be surrounded by the untoward effects of repression, because repression attempts to control instincts that strive for free expression. You badly want butter on your bread, or some hot soup, but your society abhors such culinary sins—you were spanked as a child for breaking these rules and would be despised as an adult if you succumbed to said desires. Consequently, you are a food neurotic, plagued by food dreams, always telling “dirty” food jokes, and feeling pretty lousy in the eating department (all that not getting to eat what you want to eat). You have a seething food unconscious, pressing for release. The basic psychological law that Freud discovered (allegedly) is that repression necessarily leads to such symptoms of self-denial. Desire seeks release (hydraulically so) and any attempt to thwart it spills over into untoward psychic perturbations. That is just the way the human mind works: it is a psychological law that repressed desire produces the kinds of effects noted. So, repressing food desires will produce the same kinds of effects as repressing sexual desires, should it occur. It doesn’t occur much with us, but it could occur in a possible society; in such a society there would be a need for psychotherapists to work on people’s repressed culinary desires. And the same mechanism will work on any natural desire that is thwarted and repressed—even the desire to pursue one’s scholarly interests in peace. You might be publicly shamed for working on the mind-body problem, for example (so you only do it in your dreams in a disguised form). Freud’s theory is not limited to sex but applies to any kind of desire that receives the taboo-repression treatment. How could it not?

How should we respond to this point? One response would be to say, “How interesting, Freud’s theory could apply to the case of food, with no loss of plausibility!” It was only contingently about sex, despite appearances. On another planet, it might be about food, or even scholarly interests. A second response would be to say that there must be a difference between the food case and the sex case, because it doesn’t sound right to extend it to the case of food. How could food lead to such drastic psychic deformations? Wouldn’t food prohibitions just lead to a lot of conscious discontent, not the formation of a culinary unconscious with its attendant psychological ramifications? So, there has to be a difference between the two cases—there has to be something special about sex. Is sex perhaps the stronger desire, the more pressing? (Try telling that to someone who has been fasting for three days.) Thirdly, it might be concluded that Freud’s theory has to be wrong about sex precisely because it is clearly wrong about food. The cases are exactly parallel, but surely the Freudian consequences would not obtain in the food case—just a lot of griping and rule violation and black-marketing. I incline to this third view, but we don’t have any solid empirical data, so I must remain agnostic, as a good scientist. Still, I am morally certain that culinary Freudianism stands no chance of being true, though it ought to be true if Freud were right about sex. That’s not how the mind works.[1]

[1] It might be said that there is a deep difference between the food case and the sex case, namely that sex is inherently shameful while eating is not. Sex should be repressed, but not so eating. Extreme puritans would contend as much. Thus, sexual desire is necessarily apt for repression, even without societal pressures. We need to suppress our sexual desires or they will devour us, wrecking civilization. The conscious mind cannot bear the weight and fire of human eros, so the formation of a repressed sexual unconscious is entirely natural. That is not a Freudian view, nor the current opinion on such matters; but if there exists a deep acceptance of it in the human psyche (whether true or false), that would explain our asymmetrical attitudes towards sex and food. For it does seem odd that we are so ready to believe the Freudian story about sex but smile at the idea of a food unconscious, or a scholarly unconscious. Sex seems special, but exactly why is unclear. Is there something intrinsically evil about sex, but not about food? Does violence, say, lurk at its heart, or contempt, or competition?

Perceptual Intuition

Perception and intuition are usually opposed to each other: what is perceived is not intuited and what is intuited is not perceived. The senses perceive and reason (intellect) intuits. We know material objects by perception and abstract objects by intuition. Empiricism declares perception to be the basis of knowledge (and the criterion of existence); rationalism declares reason to be the basis of knowledge (and the measure of reality). Intuition and rationalism go together; perception and empiricism go together. There is a grand dichotomy: intuition doesn’t encroach on perception’s territory, and perception doesn’t encroach on intuition’s territory. It is true that Kant spoke of sensory “intuitions”, but he is an exception—and anyway he didn’t mean to claim that perception is a species of rational intuition in the style of classical rationalists (what he did mean is open to interpretation). The pivotal point is that perception and intuition have been taken as separate and distinct—indeed, as mutually exclusive. I will argue that this is wrong, deeply so.

The operative considerations are not unfamiliar, but their significance has not been fully appreciated. A seen object presents a surface to the eye; it doesn’t present all its surfaces (or its interior). It has a front and a back. The back is not visible. The viewer has no sense-datum of the back side of the object. Yet the hidden side doesn’t go unrecognized; it isn’t omitted from the viewer’s total perceptual experience. He knows it is there, as much as the facing side. We might say that he has a sense of it but not an impression. A being with eyes on stalks that can view the object from all angles would have an impression of all aspects of the object; there would be no need to fill in the gap with…what? What word should we use? Should we say postulation, or imagination, or hypothesis, or inductive reasoning? These all sound too intellectualist, too deliberate, though not without a certain suggestiveness—an extra mental act has to be performed beyond mere retinal responsiveness (the proximal stimulus). The given must be supplemented somehow. I think the best word is intuition, defined as follows: “the ability to understand something immediately, without the need for conscious reasoning”; “instinctive” (OED). We can paraphrase this as, “the ability to know something instinctively without explicit rational thought”. The emphasis is on the pre-rational, automatic, pre-conscious, implicit, unreflective, non-conceptual, taken-for-granted, primitive. Little children can do it, also animals. It is probably largely innate. Clearly, it is vital to survival (you have to know that things have unseen sides). We can add it to Russell’s “knowledge by acquaintance” and “knowledge by description”—this is “knowledge by intuition”. We are not acquainted with the back sides of objects, but neither do we infer them by discursive methods (“the hidden side of the object whose front side I am now acquainted with”). Intuitive knowledge is something different from acquaintance knowledge and descriptive knowledge, as conceived by Russell. It is a step up from mere acquaintance and a step down from conceptualized description. Thus, we can say that ordinary perceptual experience involves intuition as well as sensation. In fact, it would be possible for intuition to exist in the absence of sensation, as in a case of blindsight: someone might intuit an object by means of the eyes and be incapable of receiving visual sensations—seeing would be nothing but intuiting. As things stand, however, seeing is an amalgam of the two—part sense-datum and part intuitive apprehension. It has a kind of dual intentionality. Intuition is integral to perception, pace the empiricists. We might say that they were guilty of an aspect-object confusion: they thought the perceived object was nothing but the aspect presented, forgetting that objects are seen as having hidden aspects too. The perceptual is infused with the intuitive, and to that degree overlaps with other sorts of intuitive apprehension, as with apprehension of numbers. A being with all-inclusive vision (eyes on stalks) might view normal human and animal vision as decidedly in the intuitive category along with other varieties of intuition, and to that degree epistemically suspicious. What is this “intuition” that outstrips good old-fashioned seeing—the kind where you have sensations of the whole object? Now that is the type of seeing you could rest a whole epistemology on! That would be true empiricism, not this semi-intuitive nonsense—what even is intuition? For these beings, there is only pure sensation (acquaintance) and conscious reasoning therefrom, not a peculiar kind of intuition that steps in to take up the perceptual slack. These beings are hyper-empiricists, the genuine article.

How about mathematical intuition? This is a large subject, but let me focus on thinking of the number 2. We don’t normally think of this as a type of sense perception; we think of it as pure intuition—“mathematical intuition”. It could be performed in the complete absence of any sensory materials. But in the human case this is clearly not so: such intuition comes surrounded by sense perception. I think of the number 2 when I see or hear the numeral “2”, or when I see two chickens cross the road. Could I think of that number in the absence of any such experience? It’s hard to say, given that we are deeply sensory beings—though not exclusively so. As a matter of psychological fact, we think of numbers with the aid of sensory material—as we perceive objects with the aid of intuition. Numbers present themselves to our mind under a dual guise—abstractly and concretely, rationally and perceptually. In particular, numerical symbols play a vital role in mathematical thought, which is why advances in mathematical notation were advances in mathematics itself. There is a reason why mathematical formalism is an attractive doctrine and Platonism feels like a stretch—it is easy to commit a sign-object error in mathematics. It is as if numbers come disguised as numerals and we have to see through the disguise. Thus, mathematical intuition is drenched in sense perception; it is partly “empirical”. The end is abstract but the means is (partly) concrete. We don’t apprehend numbers in the complete absence of sensory experience. Pure rationalism is not psychologically realistic. Thus, in the human case intuition needs perception, as perception needs intuition. This may not be the ideal situation, epistemologically, but it is the actual situation.

Accordingly, classical empiricism is not true and classical rationalism is not true. We need a mixed epistemology. What should we call it? Rational empiricism? Empirical rationalism? The trouble is both terms are tainted with the same mistake, i.e., exclusiveness. The point of the view I am suggesting is that traditional epistemology is too dichotomous, so we need a more inclusive unitary label. We could try “intuitionism” but that already has an established use and fails to capture the sensory element. I have toyed with “quasi-intuitionism” and “experiential intuitionism”, but for various reasons don’t like them much. The best I have been able to come up with is “general intuitionism”: it captures the idea of extending intuition into the theory of perception, thus unifying epistemology; and it echoes Einstein’s “General Theory of Relativity”, with its attempt to integrate apparently different domains. The thought is that intuition (in the sense defined) is more ubiquitous than has been supposed, more fundamental; it’s everywhere. Perception is not an intuition-free zone, capable of standing apart from other areas in which intuition has been taken seriously (mathematics, ethics, logical analysis). We don’t need to preach the prevalence of perception in the theory of knowledge; it has had enough propaganda on its behalf already. So, “general intuitionism” it is. The idea is not to claim that the concept of intuition will reduce the field of knowledge to something more natural or better understood; there is plenty about intuition that is obscure and ill-understood. But it is real and theoretically indispensable. It is a biological fact about the human (and animal) mind, akin to creativity and problem-solving. It is obscurely linked to imagination. In epistemology, it serves to overcome a simplistic dichotomy that has plagued the subject—the dichotomy between sense perception, on the one hand, and rational thinking, on the other. These are not as disjoint as has been supposed, though they are clearly different in many ways.[1]

[1] Intuition was not a concept in good standing with classical empiricists and rationalists, because neither theory can find room for it within its official platform. Empiricism finds it an embarrassment on account of its non-sensory character—it seems like an upsurge of rationalism at the heart of perception. Rationalism doesn’t care for its instinctual animal character, its bypassing of conscious calculated reason—an upsurge of the primitive in the rational soul. Intuition makes man an intuiting being as well as a thinking being—a sort of spontaneous leaper in the dark. How can it be rationally defended? Knowing things intuitively seems to the rationalist like not knowing them at all—a kind of guessing. The empiricist, for his part, balks at the foundation of knowledge presupposing resources not derivable from brute sensation. Thus, empiricism and rationalism are constitutionally anti-intuitionist. Intuition represents an epistemological viewpoint alien to them both. To me it seems like a rich field for future investigation, not something to either contemptuously discard or flakily celebrate. I look forward to the Journal of Intuition Studies.

Embarrassed Empiricism

Let empiricism be the doctrine that all reality is observable, in principle if not in practice (that last qualification covers a multitude of sins). There is no reality but observable reality, i.e., what is perceivable by the five human senses, particularly vision. This is surely the main dogma of empiricism. The doctrine can be weakened so as not to claim that all reality is observable, but only certain sectors of reality, such as material reality. Historically, Plato may be viewed as the arch foe of empiricism, since he held that REALITY is never observable: the world of Forms cannot be perceived by the senses. What can be revealed to the senses is not real but merely apparent. For him, reality and the senses are disjoint domains. Aristotle moved away from this in the direction of empiricism, and later philosophy followed suit. Modern empiricism contends that what is real (really real) is what the senses present to us; the more removed from the senses the less real things become (“logical fictions”, “posits”). Knowledge rests on observation, so that in its absence knowledge becomes questionable. Meaning, too, is supposed to depend on sense experience. Almost everyone these days is an empiricist in this sense: reality and observation make contact at some point and generally overlap. No one thinks that reality is never observed; no one believes that the touchstone of reality is imperceptibility. That would be a radical anti-empiricism: to be real is to be not observable. I am going to argue that this doctrine is actually true.

Empiricism has always been in retreat from its main tenet. An exception had to be made for experience itself: it is not observable (the unobservability of observation). We don’t see seeing—in ourselves or others. Yet it is deemed real by the empiricist. How could it not be if it is the test of reality? But it violates the main doctrine: it eludes the senses. Locke accepted that the minute corpuscles that constitute matter are not perceptible yet are entirely real. In Berkeley’s system spirits, finite and infinite, are not objects of perception, but are ontologically fundamental; and ideas are mental entities, and hence not observable by means of the senses. Hume thought we had no sense impression of causal necessity, but he didn’t think this counted against its reality. The world is one thing, sense experience another. The logical positivists made verification the measure of meaning (and hence existence), but allowed for indirect verification of various kinds—the past, the future, the remote, the microscopic, the dispositional, the counterfactual. Many things are real but don’t admit of direct observation, even in principle. Newtonian gravity supplied an instructive case: it was real and scientifically basic but completely unobservable (hence “occult”). We can perceive its effects but not the force itself. Indeed, the concept of force, central to physics, violates the empiricist principle (like the concept of law), because forces are not potential objects of perception—and were therefore viewed with suspicion by empiricist physicists. In fact, all the postulates of physics have slowly moved into the category of the unobservable, notably fundamental particles. How often have we heard that atoms are not little solid extended objects but packets of energy, nodes in a force field, mere potentialities? It is not just the fact that they are tiny that makes them hard to touch and see but rather their inner nature—they are not the kind of thing our senses can latch onto. Yet they constitute the things we can observe (as we naively suppose). Thus, we get Eddington’s two tables: the table of commonsense and the table of physics—the latter being the true reality. The table of physics is held not to be observable at all. Nor is any so-called physical object, even very big ones. In other words, science itself, an empirical discipline, has concluded that the physical world is not observable, except indirectly and misleadingly. It may cause our inner perceptions but it isn’t perceptible—revealed to sense perception, transparently given. The manifest image and the scientific image have fallen apart—the former is not an accurate guide to the latter. Hence, the real is not coterminous with the observable; it is purely “theoretical”. The objective physical world is not, inherently, an observable world. Much the same was held by sense-data theorists: we observe sense-data but we don’t observe their external causes; but these causes constitute the physical world as it really is.

We have now reached a rather startling conclusion: nothing is observable! We knew that mathematical and moral reality are not observable, and we are easily persuaded that minds are not observable,[1] or causation, laws, necessity, time, space, the infinite—but now we are told that nothing physical is perceptible either. This is not conducive to empiricist principles. The real is the opposite of observable; the touchstone of reality is unobservability! It may be replied that this must be an overstatement, because we can surely observe trees, mountains, animals, our own body. But no, these things are all made of unobservable entities, so are not really perceivable as they objectively are. They are illusions of a sort—products of our senses and mind. They arise from the interaction between mind and world; they don’t exist in objective reality but are a kind of projection. There are no colored objects in objective reality, or solid objects, or objects that persist through time whole and entire; there are just collocations of basic unobservable particles. Again, these are familiar reflections; my point is that they fly in the face of what might be called “naïve empiricism”. According to the world-view just outlined, the real is unobserved while the unreal is observed. Taken together, we obtain a picture of reality radically divorced from the human senses—absolutely nothing is observable in the sense that empiricism supposes (though we may allow for a looser use of “observation” to describe our evidence-seeking activities). Perceptions may be signs of real things, but they are not of real things (except in a weakly de re sense). Our senses do not reveal or display or describe or picture reality as it is in itself; they merely provide simulacra, correlates, indications. This is not just the old point that all observation is theory-laden (which is probably false); it is the more radical claim that observation and reality don’t match, dovetail, coincide. Reality is not such that the senses can get a firm grip on it (perhaps a slippery grip is possible). This is a general—indeed, universal—truth about reality: it is never given to the senses, never the direct object of a perceptual act. Our senses, derived from the senses of our animal ancestors, are just not set up to deliver reality as it really is. They are not veridical in the required sense: they don’t disclose things as they are in themselves; they distort and mislead and under-describe. No doubt there are solid evolutionary reasons for this. There are empirical reasons why empiricism is unlikely to have much truth to it. In fact, it is plausible to suppose that empiricism is a remnant of a prescientific religious age in which the human mind is tacitly understood by reference to the mind of a supposed omniscient supernatural being. If empiricism were true, theism would have to be; but as it isn’t, it ain’t (as the saying goes).

It might well be protested that this leaves us bereft. A generalized rationalism cannot replace the lost empiricism, because rationalism can’t cover the region we think of as “empirical” and was never intended to. Our knowledge of the empirical sciences can’t be founded on rational intuition alone; it needs the senses to deliver evidence. It is just that evidence cannot be conceived as veridical revelatory observational episodes. So, we have no satisfactory epistemology to speak of. Evidently, we have to conceive of the relation between experience and fact differently—not as revelation but as correlation (or something like it). Experience is correlated lawfully with reality, permitting us to draw inferences from the one to the other. It isn’t that the objective world is entirely noumenal; it just isn’t perceptible in the way classical empiricism supposed (it could be perceptible in other weaker ways). Appearance and reality correspond, but they don’t coincide. No doubt there is an element of mystery about this correspondence relation, but mystery is better than error (as the prophet said). Empiricism is far too optimistic about the world-experience nexus—far too unmysterious—but we may have to accept that its apparent clarity is delusive. The structure of human knowledge is much like that envisaged by Plato, which is not surprising given the affinity between Plato and Kant (as recognized by Schopenhauer): reality as essentially imperceptible by the senses, the built-in limitations of sensory observation, yet a mysterious correlation between the two. The difference is that Plato’s Forms are replaced by unobservable entities of various kinds—the whole of reality in fact. We might call this “scientific Platonism”, in honor of that towering anti-empiricist. Basically, the senses are stuck in a cave. We have to reason our way out of the cave in order to make epistemic contact with reality. The senses can do nothing without that kind of reasoning to back them up. That is the big picture, the grand vision—with seventeenth century empiricism and its aftermath a mere historical blip. We have to reinvent Plato.[2]

[1] They are not observable in others and not observable in ourselves. Introspection (whatever it may be) is not a form of observation—we don’t apprehend our own mental states with our senses.

[2] It appears to me that philosophy has been retreating from seventeenth century empiricism since its very inception—yet desperately trying to hang on to its core dogma. All of it has been shaped by the legacy of empiricism and the retreat from it—epistemology, metaphysics, philosophy of language, philosophy of mind, ethics. Thus, the work of Russell, Frege, Wittgenstein, Moore, Quine, Strawson, Ayer, Austin, Kripke, Davidson, Dummett, Chomsky, Popper, Husserl, and many others. The time has come to abandon it entirely, give it the boot, stop trying to salvage it, acknowledge its utter bankruptcy. It has exercised far too powerful a hold over the philosophical (and scientific) imagination for too long. We need a post-empiricist philosophy, one not obsessed with that outmoded school of thought devised some three hundred years ago in the British Isles (hence British empiricism—all too British). It isn’t compulsory; it isn’t a religion. Whence does its hold derive? Probably from the primitive feeling that sight and touch are our primary ways of relating to reality, particularly the mother. We can’t let go of the feeling that if we lose empiricism we lose our mother, our ultimate source of security in an alien world. Surely, she is directly observable! Surely, she is the basis of all that is good and wholesome and life-saving! Something like this anyway, because the psychological roots of empiricism clearly run deep. Losing empiricism is uncomfortably close to maternal deprivation. Our brain is naturally set up to accept empiricism, implausible as it may be. We have imprinted on it.
