Facts

In 2012 the University of Miami accused me of failure to report a romantic relationship. It is true that I did not report a romantic relationship, and it is also true that failure so to report is against the rules. But I was not having a romantic relationship under any normal definition, so there was nothing to report. Soon after this I resigned, because I no longer wished to work at that institution.

Epistemic Necessity and the Self

Consider the two statements, “This table necessarily exists” and “I necessarily exist”, where “necessarily” is construed epistemically (“I could not be wrong that”). The former is clearly false, the latter apparently true. Why is the former false? It is false because I could be hallucinating or dreaming or otherwise under a misleading impression. Putting it in the jargon of epistemic contingency, there could be an epistemic counterpart of my present experience as of a table that is not of an actual table. I could be a brain in a vat imagining a table, not really seeing one. We cannot deduce the existence of a table from an impression of a table; the two things can exist independently. There might not be any tables, though there are many instances of seeming to see a table. In normal life, such hallucinations occur, and they undermine claims to certainty (as the skeptic argues). This is all very intuitive and virtually inarguable; it doesn’t violate common sense or invoke recherché possibilities. Beliefs about the external world are epistemically contingent, notoriously and obviously. You don’t need to be a trained analytical philosopher, adept in far-out thought experiments, to see the point; being a nightly dreamer suffices. But the self is another matter, or so it has been thought: this is the domain of the Cogito and its associated convictions (“I know that I exist”). However, it has not been easy to trace out the logic of this intuitive conviction—the classic Cogito has met with a good deal of resistance. In what follows I will attempt to articulate the underlying reasoning behind the belief that the existence of the self is an epistemic necessity—the thought that I necessarily exist. Put more cautiously, I will spell out the difference between material objects and selves in respect of epistemic necessity.

It will be helpful to have a stalking-horse: this is the contention that I may have epistemic counterparts that have no associated self—mental states that are indistinguishable from mine but correspond to no self. These are conceived as “free-floating”, not anchored to any entity that has them. In the case of my actual mental states, we can suppose for the sake of argument that a subject exists, but we can imagine an epistemic counterpart of these states that has no subject—so how do I know that this is not my situation? It would seem to me just as it does now, but there would be no existent subject. It would be subjectively the same but ontologically different: same appearance (phenomenology) but different reality (ontology)—self versus no-self. Couldn’t it be just like the table case? I think not. First, we must ask whether the epistemic counterpart conveys an impression of selfhood: does it seem to itself to have a corresponding self, or is it neutral on the question, or even negative? Well, does it seem to me as I now am that my experiences have a subject distinct from them? I think the answer is yes. It seems to me that my experiences have intentionality—they are as of various states of affairs—and it also seems to me that they have a subject, viz. me. That is, they have an as-of content and an as-by content: as of a certain object and as by a certain subject. That is, an impression of selfhood is present in my experiences, as well as an impression of objecthood (as it might be, a table). Here a neologism will be useful: states of consciousness have both intentionality and “subjectality”. They are intrinsically subject-indicating (referring, representing); they are not subject-denying, or subject-neutral. Subjectality is part of phenomenology. Thus, any epistemic counterpart to my current state of consciousness will also have subjectality, by definition of epistemic counterpart. Put simply, it will seem to itself to have a subject—as my consciousness seems to have a subject. Consciousness is I-referential. But of course, it doesn’t follow from this that the subject referred to actually exists—maybe this is an error on the part of consciousness. Maybe consciousness hallucinates a self—as it can hallucinate a table. At this point things start to get hairy: for what kind of hallucination might that be? Are we familiar with such hallucinations, has anyone ever had one, how might they be produced? In the case of external objects these questions are readily answered: yes, yes, and easily. We have a model, a theory, of sensory hallucination: it happens a lot and is easily brought about. It isn’t just philosophical word-spinning. But the same isn’t true of supposed self-hallucination: have you ever heard of someone being under the illusion that they have a self? Are there patients in psychiatric wards suffering from hallucinations of selves? That is, we know they have no self (their states of consciousness have no subject) and yet they are under the impression that they have a self. The mind boggles: what could this even mean? We are being asked to accept that there could be, or are, cases in which a mind seems to itself to have a subject, an “I”, but doesn’t really. Surely, that is not possible; or if it is, such cases never actually occur and are impossible to comprehend. They are certainly not part of common sense and everyday life. Of course, there might, as a matter of metaphysical possibility, be cases of people with a sense of self that have no body: you can hallucinate having a body.
But there are no cases of people under the illusion that their mental states are had by someone (something). No one ever has the feeling that their consciousness is had but actually it is not had. We cannot make sense of subjectality without a subject. We can say the words, but we can’t provide any examples, or explain how the hallucination works, or suggest how it might be mended. So, the very thing that powers the intuition that this table’s existence is not known with certainty is absent in the case of the self. Hence, we quickly see the epistemic contingency in the table case, but not in the self case—here we are presented with just a jumble (or jungle) of words. The skeptic is limping at this point, but with tables he is off to the races. Maybe he can wheel in extra machinery (the skeptic is nothing if not resourceful), but he cannot rely on commonplace facts and powerful intuitions. He thus has a lot of work to do; he can’t just point to the existence of hallucinations and dreams (do you know of a case in which a person without a self had a dream in which it seemed to him that he had a self?). I am strongly of the opinion that there cannot be errors of selfhood—cases in which subjectality is present but no corresponding subject exists—but I have no direct proof of this. The point I am making is that the model of the table won’t work to derive skepticism about the existence of the self. Hallucinations of external objects are facts of nature; hallucinations of selves are figments of the philosophical imagination—would-be thought experiments, not empirical facts. This is what lies behind our ready acceptance of the epistemic contingency of “This table exists” and our resistance to a like conclusion about “I exist”. The latter strikes us as a lot more necessary than the former (as that is more necessary than “Dark matter exists”). Epistemic necessity comes in degrees, and the self is at the high end of the spectrum (though perhaps slightly less high than “This pain exists”). What is interesting are the reasons for the difference, specifically the absence of demonstrable hallucinations of the self. The thought never occurs to us that our impression of our existence as a conscious self might be a lifelong delusion, possibly not shared by others, precisely because no such cases have ever been recorded. We might become convinced of it by a philosophical or scientific argument against the existence of the self (though I know of none such that really succeed), but we won’t be budged just by pointing to mistakes induced by hallucinations—because there are none. Our position ought to be, “Unless you can prove to me that the self doesn’t exist, I see no reason to abandon my strong (certain) belief that my self exists”. We would be right, however, to refrain from such a pronouncement regarding the table, given what we know about the human nervous system and the powers of certain drugs (you might have hallucinated a table only yesterday).

How do these points bear on the Cogito? Not very directly. That is a different argument altogether, proceeding from the existence of thoughts to the existence of a subject (a substance) that has them. It has been questioned on a number of grounds, persuasively enough. Descartes never argues that his premise about thought includes a thesis about self-indication (subjectality); nor does he invoke considerations about the constitutive conditions of hallucination. However, it may well be that the points I have adduced are subconsciously influencing our response to the Cogito, giving it an appearance of cogency it might not otherwise possess. Reasons for accepting a philosophical claim do not always coincide with the content of that claim (indeed, they often diverge).[1]

[1] Let me make clear, if it is not already, that I am working with a minimal view of the self or subject (as was Descartes). I don’t mean an animal with a certain kind of body (a human being), or a persisting self, or a type of substance, or a unified self, or even a knowable self; I just mean a thing that acts as a bearer (logical subject) of a mental state—something that has it. That could be ever so etiolated, so long as it doesn’t collapse into the mental state it is supposed to bear. The idea, then, is that states of consciousness make it seem as if they are had or possessed by something distinct from themselves (the conscious states don’t have themselves). We know with virtual certainty that this thing exists, however it may be with “thicker” things.

On Serving

In tennis the serve has gone through an evolution. In the early days the serve was not a weapon, just a way to start the point. The players were English aristocrats at country houses, not crack athletes. The service area was designed to allow the server to have enough space to get the ball in and the receiver not to have too much trouble returning it. There was no significant advantage to the server. In the modern game, recreational players are seldom good enough to gain much advantage by serving as opposed to receiving, but professional players have a considerable advantage. So much so that it would make sense to reduce the size of the service area by a couple of feet, so that the server had to slow it down to get the ball in, or to permit only a single serve attempt. Then the balance between server and receiver would be restored. In an ideal world such a change might be implemented (what if players got so good at serving that the receiver never won a point against serve?). The professional game now is too serve-dominated. This would also help the shorter player, because you need height to get the ball in while hitting it hard. Maybe there should be two types of court so that you could choose what kind of serve to expect. Or three, because most amateur players find it too difficult to get the ball in under the present dimensions. The game could be improved for everyone by implementing these changes. The serve has far too much importance as the game stands (pickleball may owe some of its popularity to these serving issues in tennis).

The table tennis serve has its own issues. Here the problem is that even intermediate players enjoy a large serve advantage: the server will generally dominate the point, only to wilt when receiving. The player with the better serve is guaranteed to win overall. (I know this because people I play with usually can’t return my serve.) This is an unsatisfactory state of affairs—there should be rules of serving that don’t create this asymmetry. With this in mind I have invented a new way of serving in table tennis, by copying a feature of the regular tennis serve: alternate side serving. First you serve into the left side of the table, next into the right side (there is always a line down the center of the table). That way the receiver knows which side of the table he will be returning the ball on, which makes his job a lot easier. It would be possible to combine this with another feature of the tennis serve—a service area smaller than the whole side of the table. This would reduce speed and hence make things easier for the receiver, just as in tennis. Short balls are always easier to hit than deep balls. I tried out the alternate side method the other day with another player and we both found it enjoyable and workable: the points were not all about returning the serve. They were longer and more varied. It wasn’t just a matter of whether he could return my serve and I could return his. The serve wasn’t the be-all and end-all. I am going to adopt this rule from now on. I recommend it.

Again, Supervenience

There is a well-known problem with physicalism: the problem of defining it. Briefly: do we mean the mind is reducible to the brain as now understood, or do we mean the mind is reducible to the brain as it may be understood in the future? Do we mean current actual neuroscience or do we mean a future possible neuroscience? The former seems too restrictive and the latter too open-ended. I am not going to go over this problem now; my aim is to extend the argument to supervenience claims. The mind is held to supervene on physical properties of the body and brain, but what is meant by “physical” here? The dilemma is that this will either be too restrictive or too liberal, and hence either false or empty. Is there any version of the claim that can escape this dilemma? Suppose supervenience were propounded two hundred years ago, when very little was known about the brain (nothing, in particular, about the nerve impulse); perhaps only the brain’s gross anatomy was known. Is the mind supervenient on that? Clearly not, because a completely insensate object could share that gross anatomy (e.g., a plastic model of a brain). The details of neurons and neural impulses matter. And it would be premature to suppose that nothing further will be discovered as to the brain’s neural properties. We should not limit the supervenience base to what we currently know of the brain. But then we are left with a lack of clarity about what the “physical” supervenience base consists in.
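
For definiteness, here is the textbook schema being presupposed (a standard gloss, with the set symbols introduced purely for convenience, not part of the argument above): the mental supervenes on a base set of physical properties just in case no two possible beings could agree in every property in that base while differing mentally.

\[
\Box\,\forall x\,\forall y\,\Big[\,\forall P \in \mathcal{B}\,\big(Px \leftrightarrow Py\big)\ \rightarrow\ \forall M \in \mathcal{M}\,\big(Mx \leftrightarrow My\big)\,\Big]
\]

Here \(\mathcal{B}\) is the physical supervenience base and \(\mathcal{M}\) the family of mental properties. The dilemma just stated is that we have no non-trivial way of saying what belongs in \(\mathcal{B}\).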

There are subsidiary questions. Is anything about the mind supervenient on what we now know of the brain? That does not seem out of the question: perhaps dispositional properties of mental states are so supervenient, or some structural and temporal properties. Not qualitative intrinsic properties but behavioral ones: maybe these depend wholly on neural impulses as we now understand them. The mind isn’t then wholly physically supervenient in this sense, but it is partially so supervenient. Or perhaps not. The question is more empirical than a priori. Next: Is the mental supervenient on a subset of the brain’s properties, not on all of them? That is surely very plausible: some properties will be irrelevant to the mind, e.g., the color of the brain or its particular shape or whether it tastes good with spinach. It would be nice to know which of the known properties of the brain form the supervenience base. The bare assertion that the mind depends on the brain is unilluminating because much too coarse—we want to know which physical properties are crucial. We know that some parts of the brain have no mental correlate (e.g., the brain stem), so what is it that makes other parts of the brain capable of generating the mind? What is the physical difference? It can’t be the presence of the neural impulse, because that does not always lead to a mental correlate. Supervenience alone tells us almost nothing of interest, and it is either false or an empty truism. What we need is a theory that specifies precisely what aspects of the brain form the supervenience base. The muscles are supervenient on the body—yes, but what aspects of the body? If we knew the answer to that, we would be close to explaining the supervenience; but then, we wouldn’t be limited to a bare supervenience claim. Such a claim is next to useless unless it permits of conversion to an explanatory theory, but then it isn’t necessary. In short, brute supervenience is a pointless idea—an empty slogan. It should be banned.

Suppose we held that mental properties are physical properties of the brain, in some workable sense of “physical”. Then, trivially, mental properties would be supervenient on physical brain properties. That would be a doctrine of zero interest. No, we must mean by “physical” non-mental. So, the claim must be that the mental is supervenient on the non-mental. Do we really want to say that? We must rigorously exclude any hint of the mental from the putative non-mental supervenience base (so panpsychist properties must be excluded). This would mean the supervenience base would have to be compatible with the absence of the mental—but that contradicts supervenience! For example, if electrical properties were held to constitute the supervenience base, we would face the objection that electricity doesn’t entail mentality, which contradicts supervenience. But electricity is exactly what the brain traffics in, so it can’t be sufficient. How can the mental supervene on the non-mental, i.e., that which doesn’t require a mental correlate? If, however, the mental supervenes on some sort of proto-mental or quasi-mental base, then we don’t have supervenience on the non-mental. Either way psychophysical supervenience doesn’t work. Perhaps we can identify some properties of mental states that supervene on electro-chemical properties of the brain, such as dispositional-behavioral properties; but that is not to say that any perfectly general supervenience claim can be intelligibly formulated. Supervenience on what is known of the brain seems to be either relatively trivial or false. It is really a foggy magical idea captured in a fancy word. All we have is the idea that the mind might be semi-supervenient on the properties of the brain currently recognized (and as currently recognized). For there is really no content to the claim that the mind (as a whole) is supervenient on the physical properties of the brain (present or future). It is certainly not the case that mentality supervenes on electricity and chemistry, or else mentality would be everywhere. As the term “physical” is understood now, it is clearly false that the mind is supervenient on the physical; and no other sense can be plausibly stipulated.[1]

[1] The fact is that physical supervenience inherits all the disadvantages of physicalism in respect of definition but none of the advantages in respect of ontology. It doesn’t tell us the nature of the mental and how it exists in the brain, and it makes no progress with saying what physicalist doctrines amount to. So, why has it been such a popular idea? Because it papers over the difficulties. The analogous doctrine in ethics tells us nothing about the nature of the ethical and also faces the problem of saying what precisely the supervenience base is to include and exclude (what exactly is a “descriptive” property?).

Freedom and Tariffs

Tariffs raise prices on imported goods, as importers pass on costs to consumers. This decreases demand, by the basic laws of economics. It may reduce it to zero. This means that consumers don’t buy what they would have bought if it were not for tariffs. They would prefer to buy what they no longer buy, but the prices have become prohibitive. For example, they would prefer to buy a foreign car, but they can no longer afford one, so they buy a cheaper domestic car—they buy a Ford not a Ferrari, say. They would much rather have the foreign car, but they settle for the homegrown car. They are accordingly less happy than they would have been without tariffs. They are not getting what they want. The same goes for food, clothes, etc. Tariffs reduce quality of life for consumers. They also entail that money doesn’t flow into the country that produces the goods in question, thus reducing its purchasing power. That means that the producing country has less to spend on foreign goods, which reduces demand for them. Thus, supply will drop in that country, because demand has dropped. This will lower the prosperity of the producing country, reducing the quality of life of its inhabitants—they will have less than they want. Tariffs impose reductions in the standard of living of the people imposing the tariffs (as well as those living in the tariffed country). This much is fairly self-evident economic reality: tariffs don’t add to human happiness. But there is also a political dimension to this, not often remarked upon: tariffs reduce freedom. They make a society less free, by curtailing economic choice. You would choose a foreign car if you could, but you can’t because of tariff-induced price rises; so you settle for what you would prefer not to have. You settle for a clunker when you could have had a racer. The situation is uncomfortably similar to communist systems of production and consumption: state-produced goods that you are forced to purchase instead of high-quality goods from abroad. Consumers have had their economic freedom curtailed: they can’t buy what they want in a free market, but are forced to buy what they don’t want. Tariffs are inherently anti-freedom. Free markets are markets in which people are free; tariffed markets are not free. If manufacturing at home is inferior to manufacturing abroad, people end up less well off than they would be under free market conditions. They are living under economic tyranny, in effect. If you value freedom politically, you should be against tariffs (except under special conditions). At best they are a necessary evil, but they are clearly an evil from a libertarian point of view. They do not promote liberty. They are not a form of liberation but of constraint. They are a type of economic incarceration or prohibition.

Due Process

The Fifth Amendment of the United States Constitution states: “No person shall … be deprived of life, liberty, or property, without due process of law.” This condition derives from clause 39 of Magna Carta (1215); the phrase “due process of law” itself comes from a statutory restatement of 1354. Due process of law requires, at a minimum, notice of the alleged offence, a proper hearing, and a neutral judge. It is tantamount to the insistence that punishment must be applied only when the law of the land has been properly consulted and a legal procedure has been completed. It is thus decreed to be unlawful not to undertake and finalize a procedure of due process in cases of potential punishment. This is evidently a basic foundation of a civil (and civilized) society and must not be breached by those with punishment power. Without it, society descends into injustice, chaos, and barbarity. We should all stand behind this vital principle. It is ethically correct and legally binding. Those who violate it should be held accountable and punished accordingly: that is, they too may be deprived of life, liberty, or property. If an official deprives a person of life, liberty, or property, they may themselves be legally deprived of life, liberty, or property, if due process has not been observed. In an extreme case, if you deprive a person of life without due process, that may be considered a case of murder. Summary execution of a person without due process may render the executioner liable to capital punishment, or some other serious form of punishment. To kill without legal justification is a capital crime. The law requires due process, so the law can step in if infringed in this respect. On the other hand, if due process is properly applied, there is no legal liability for the prosecutor. All this is clear and indisputable. It is long enshrined and justifiably enforced. Violation of due process is deemed a criminal act—and it is easy to see why this is so. It prevents unjust harm to innocent individuals. All civilized people adhere to it.

Or do they? Recently we have seen a number of instances of due process violations, carried out by the American government, leading to deportation and imprisonment. According to the US Constitution, these are criminal acts (“No person shall” etc.). Of course, those alleged to have committed these crimes must be afforded due process: they must be allowed to defend themselves in a court of law properly constituted. We don’t want to punish people for violating other people’s due process rights without according them due process. If they are found guilty, they should receive appropriate punishment—some sort of deprivation. If they deported or exiled someone without due process, that person being innocent of the alleged crime, it would appear reasonable to apply the same kind of punishment. It might well be decided that we don’t want violators of due process laws living among us, where they might repeat their crimes—thus they might be deported and exiled. Or we could imprison them and fine them. That is a matter of detail and general policy. But suppose that we did institute deportation laws against people who wrongfully deport others by ignoring their due process rights. Then we would be deporting government officials who illegally deported others. Suppose an official knowingly deported a suspected criminal to a prison in which he would likely be murdered, and he did this in clear violation of the accused’s due process rights—arresting him in the middle of the night and forcing him onto a plane destined for said prison. The prisoner was duly murdered, as the official knew would happen. One might think that such an action should receive severe punishment, perhaps punishment in kind. The official was then tried in a court of law respecting full due process rights and found guilty. Surely, he must be held accountable for his actions. By this standard, government officials engaged in illegal deportations should be treated as criminals: they have broken the law, as enshrined in the US Constitution. A court of law may accordingly find them criminally liable.

I imagine the above reasoning will appeal to many left-leaning people. But these people may not be prepared for a corollary of that reasoning, namely that it applies equally to what has come to be called cancelling. In cancelling, persons are deprived of liberty and property without due process of law (not life, though suicide may result from cancellation). They lose jobs, income, access to opportunities, social status, a happy life. They may become exiled within their society, or feel driven to go elsewhere. The punishments are substantial, and they are intended to be. They are treated as criminals, in effect. But there was no due process. There was no formal hearing, no opportunity to reply to accusations in a legal setting, no impartial judge or jury. The punishment was applied based on factors that would not stand up in a court of law (gossip, progressive politics, prejudice, malice). This is rule by extrajudicial means, and notoriously fallible. Anyone guilty of inflicting harm on a person whose due process rights have been violated is himself guilty of a crime—that is, of an act of injustice. A professional body, say, that deprives a person of normal professional opportunities without any attempt at due process is guilty of due process violation. In effect, it is criminal. The same is true of individuals that actively thwart the normal freedoms of accused people, e.g., the freedom to go to a conference or be considered for a job on their professional merits. Due process is essential to all punitive measures; you can’t proceed without it. If for any reason it cannot be carried out, no one is entitled to assume guilt: if you have not been found guilty of a crime by a judicial process, you cannot be deemed guilty. That is our system, and it is a very good system. But cancelling is rampant in our culture today, which means that violations of due process rights are rampant. The proper conclusion, then, is that the perpetrators of these illegal acts should be held accountable and duly punished. Particularly egregious instances should be punished severely: we might want to consider deportation, prison time, or heavy fines. This implies that large numbers of professors in universities are guilty of a crime, i.e., harming others without due process. In my own subject, philosophy, there are numerous individuals who are guilty of the crime in question: deportation would seem a reasonable option (or am I being too harsh?). Systematic organized efforts to destroy a person’s career, without even a hint of due process, should be treated severely: they violate basic principles of law and morality. I can think of dozens of people who, in a perfect world, should be exiled from polite society for committing the crime in question. But I would not endorse this punishment without due process: punishment for violations of due process should not violate due process. I would insist on this provision in an individual case: you won’t be found guilty of it unless a court of law (or some equivalent) has duly carried out the appropriate procedures. I would recommend firing anybody thus found guilty—it is a serious violation of the law. As things stand, I would say that most of the professors of philosophy in the United States are criminals. They deserve what they have inflicted on others, just as government officials who deport immigrants in violation of due process laws deserve legally sanctioned consequences. Violations of due process are violations of due process, no matter who carries them out. 
We cannot treat people as guilty unless they have been found guilty.[1]

[1] This is not the same as thinking they are guilty, or even being certain of it; they must be declared guilty by a properly appointed judicial body. It is really quite amazing that so many people are ready to abandon this basic principle of justice when it suits them to do so. We should insist on it especially in cases in which we are convinced that the accused party is guilty (compare free speech).

Convergence, Truth, and History

We tend to converge on the truth. Independent investigators often arrive at the same truth because it is the truth. If investigators are not independent, their coinciding beliefs may well be explained by influence not truth: they have the same beliefs because of interpersonal contact. Convergence of belief in independent investigators is apt to be a sign of truth because otherwise it would be a coincidence. The probability of truth rises with the number of independent investigators agreeing in their beliefs. Not so with dependent investigators: influence can easily explain agreement.  There is agreement by way of objective truth and agreement by way of personal influence. If only one person arrives at a given belief, that doesn’t bode well for truth, no matter how much influence he or she may have. Belief by influence is no guarantee of truth; belief under conditions of independence is an indicator of truth—not infallible, but highly suggestive. Convergence goes with fact; lack of convergence goes with fiction, even if there is agreement by influence. This is the difference between a cult and a learned society—whether there is agreement under conditions of independence.
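
The point can be put in rough Bayesian terms (a toy model only, with the likelihoods r and f introduced purely for illustration). Suppose each of n genuinely independent investigators would arrive at the belief with probability r if it were true and with probability f < r if it were false. Then the odds on truth, given that all n agree, are

\[
\frac{P(\text{true} \mid n\ \text{agree})}{P(\text{false} \mid n\ \text{agree})} \;=\; \frac{P(\text{true})}{P(\text{false})} \times \left(\frac{r}{f}\right)^{n},
\]

which grow exponentially with n. If the investigators merely influence one another, their agreement collapses into a single effective data point, and the odds barely move.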

There is an analogue in evolutionary biology: convergent evolution versus inheritance. Some traits evolve multiple times quite independently; there is no inheritance relation between the animals sharing the trait—for example, locomotion and vision. Other traits evolve just once and are then passed on; these tend not to be so widespread—eye color, nose shape. Convergent traits tend to be good traits to have—that’s why they evolve separately. Inherited traits may or may not be good—sometimes they just hang around because they come with the territory. Convergent evolution is a sign of adaptive quality: eyesight would not evolve multiple times if it were not useful, indeed essential. A trait is objectively beneficial or it is not—but if not, it could still be widespread because of inheritance. Eyes are good things to have, but vestigial hairy skin may not be.

This distinction applies to the history of human thought: we have people independently discovering the same thing and people agreeing by virtue of influence. When people come to the same view independently, we tend to suppose that the view in question is likely to be true; when they do so by influence, we think this is less likely (though possible). If millions of people independently arrive at the opinion that the Eiffel Tower is tall, we think this is because it is tall; but if the members of a religious cult all have the same opinion, under conditions of influence, we don’t jump to the conclusion that it must be true. This is obvious and uncontroversial. But if we apply it to the actual history of human thought, we notice some interesting facts. On the one hand, there are many instances of convergence: in physics, astronomy, chemistry, biology, and even philosophy. I won’t rehearse all of this, except to remind you of Descartes, Galileo, and Newton in physics and Darwin and Wallace in biology. The same truths were independently discovered, thus adding credence to these discoveries; it wasn’t just a solitary individual who magically hit upon the truth by unrepeatable genius. In philosophy Russell and Moore converged in their opposition to Hegel, as Russell and Frege converged in their logicism. But in certain cases, this was not so—we had the phenomenon of the solitary genius who exercised massive influence. No one else had the same ideas, such was their singular genius—though they had their followers, disciples, and acolytes. Three such thinkers stand out in recent times: Freud, Einstein, and Wittgenstein. Singlehandedly, they revolutionized their subject—they alone arrived at the truth about their respective domains of interest (allegedly: see below). No one else came close: there was no Wallace lurking in the wings. Hence, they are regarded as true geniuses, as less gifted thinkers are not (including Russell and Frege). There was no convergence, only influence: many people accepted their theories, though no one else came up with them independently. In this they exceeded their intellectual predecessors, such as those listed above. They saw farther than any other man. But shouldn’t this make us suspicious? How come only they had the brain power to make the discoveries they made? Other people came to accept their theories, but no one else anticipated them, came up with them on their own. Hmmm. What made them stand apart from the rest of the human race? How come the truth spoke only to them? Plenty of other people had their level of intelligence and yet did not happen upon the truths they revealed. That is what we have been encouraged to believe—the story of the lone but influential genius.

The trouble is that their theories (two of them anyway) are now discredited, to one degree or another. Here I have to put my cards on the table: I don’t think any of them spoke the truth, the whole truth, and nothing but the truth. That’s why there was no independent convergence. If they were onto the truth, someone else would have been too; but no one else was, so they were not onto the truth. They may have been onto some truths, but the main body of their work was not true. This is why there is something cultlike about their following: it relies on influence, not independent discovery. I think that Freudian psychoanalysis is mainly false or highly dubious, as do many others; the same is true of Wittgenstein’s Tractatus and Investigations; and many have been skeptical of Einstein’s special and general theories of relativity. My aim here is not to defend these opinions (controversial as they are), but merely to point out that the theories in question do not have the support provided by convergent discovery. It isn’t as if other people quite independently came to the same conclusions, which you might expect if those conclusions were true; rather, they were propounded by one man and gained traction and influence. It is notable that they are not easy to grasp (unlike the theory of evolution by natural selection): they feel obscure and arcane, rather startling, contrary to common sense. You are expected to take them on trust, though bits of evidence are dutifully provided. They are creeds, dogmas, pronouncements. Thus, a cult surrounds their progenitors—they are seen as sacred figures. Their images appear on T-shirts. We can’t hope to understand them in their full profundity. Wittgenstein, indeed, inspired two cultlike followings. I could say much the same about certain proponents of quantum theory residing in Copenhagen, as well as certain philosophers. Did anyone else ever come up with Quine’s distinctive doctrines, or David Lewis’s? No, they were singular figures: the indeterminacy of translation, possible worlds realism. Both doctrines elicited “incredulous stares” (as Lewis famously remarked)—that is, no one else thought one day, “You know, meaning is a myth” or “Actually, actuality is no more real than non-actuality”. We don’t find independent thinkers happily converging on these startling conclusions; indeed, Quine and Lewis were virtually alone in confidently propounding them. Nor did colleagues say to themselves, upon hearing these doctrines, “Good heavens, he’s right, I see it now!”. It wasn’t like Darwin, Wallace, and their scientific followers (Huxley, Haldane). In these cases, we don’t have the comfort of convergence; instead, we have an instance of influence emanating from a single source.

We can ask our question about other areas of human life: ethics, politics, art, music, clothes, religion, literature. When do we have convergence and when do we have influence?  Is utilitarianism something that ethicists independently converge on or is it a matter of influence? I would say we have a good deal of convergence, combined with some degree of influence. Isn’t this something that people are likely to converge on, given its evident correctness (at least as a large part of morality)? Democracy in politics is the same—it’s obviously a pretty sound idea that any reflective person could come up with. Art and music are different: here there is no obvious truth for them to converge on—they are more a matter of free creation. The idea of the solitary artistic or musical genius is not merely a piece of airy romanticism (though influence also plays a role). Clothes are a mixed bag, being both practical and aesthetic: trousers, convergence; flared trousers, influence. Utility and fashion operate by different rules. Religions may converge from different places onto the same core ideas—a divine being or beings, priests, collective worship—but they may also propagate by causal influence. The former are more likely to be solidly based than the latter, being more reflective of general human nature. In the case of literature, influence will dominate, but literary forms may be independently arrived at in virtue of their inherent properties (the novel form is clearly good for telling long involved stories). None of these cases will be simple and straightforward, and it may be difficult to discern what is what, but the distinction holds in these areas too. The more objective and universal something is the more likely it will be independently converged upon; the more limited and local the more subject to influence. Memes are more local than facts of nature. Independent convergence will be a sign of veracity or lasting value, though not an infallible sign.

The general drift of these reflections is to throw cold water on the idea of the solitary singular genius, especially in scientific pursuits, including philosophy. If something is true, it is going to be discovered by several individuals, as has frequently happened in the history of human thought. In cases in which someone is hailed as a solitary genius, singlehandedly producing a new idea, we should be on the lookout for error; people just don’t differ that greatly intellectually (Einstein’s IQ was not higher than that of other physicists). When an idea’s time is ripe, it is likely that several minds will latch onto it; if only one mind does, we are likely to be in the land of creative fiction. It may be genius, but it is not true genius.[1]

[1] One might formulate a law of discovery: All true ideas are independently discoverable. No truth is such that only this individual could discover it. If Freud were right, he would have rivals in discovery—in other parts of the world, on other planets. If Wittgenstein were right, there would be a twin Wittgenstein somewhere saying the same thing. If we are resistant to these possibilities, that can only be because we sense they were not right. Darwin is not essential to Darwinism, but Freud strikes us as essential to Freudianism; and Wittgenstein to the doctrines of the Tractatus and Investigations—as no one but James Joyce could have written Ulysses. Berkeley is an interesting case: why does he have no co-discoverers? Would someone else have come up with the same ideas eventually? Is his work really a work of fiction? It does seem like an inherently singular vision. Could anyone else have arrived at the achievements of Shakespeare? By contrast, scientific truth always admits of independent discovery by a plurality of individuals. If Einstein had never lived, would someone else have come up with his theories? I rather doubt it.

Jim and Me

I was over at the tennis wall at the Biltmore yesterday, as I frequently am. My pal Jim was there, a retired tennis pro. He told me he had just turned 78 and was working on hitting his forehand from shoulder height; he liked to learn new things. He demonstrated the technique and indeed was hitting it quite nicely that way. He then showed me the same stroke on the backhand side, which is appreciably more difficult. It’s an important skill to have because the ball does not always obligingly come in at waist height, but it takes a lot of practice and strength building. I said I was still working on my left hand, as I have been for the last two years. I told him I’d just turned 75. Actually, my left hand has steadily improved, to the point that I can now hit forehands left-handed fairly well. I then remarked that I had some pain in my right heel that was hampering my movement; this was caused by my skateboard hitting some vegetation on the road, which stopped the wheels rotating and pitched me forward. My feet came down quite heavily and bruised the heel (it’s much better today). He gave me some advice on what to do about it. We carried on hitting.
