Due Process

The Fifth Amendment of the United States Constitution states: “No person shall … be deprived of life, liberty, or property, without due process of law.” This condition derives from clause 39 of Magna Carta (1215); the phrase “due process of law” itself first appears in an English statute of 1354. Due process of law requires, at a minimum, notice of the alleged offence, a proper hearing, and a neutral judge. It is tantamount to the insistence that punishment may be applied only when the law of the land has been properly consulted and a legal procedure has been completed. It is thus unlawful to impose punishment without undertaking and completing a procedure of due process. This is evidently a basic foundation of a civil (and civilized) society and must not be breached by those with the power to punish. Without it, society descends into injustice, chaos, and barbarity. We should all stand behind this vital principle. It is ethically correct and legally binding. Those who violate it should be held accountable and punished accordingly: that is, they too may be deprived of life, liberty, or property. If an official deprives a person of life, liberty, or property without due process, the official may in turn be legally deprived of life, liberty, or property. In an extreme case, depriving a person of life without due process may be considered murder. Whoever summarily executes a person without due process is liable to capital punishment, or some other serious form of punishment. To kill without legal justification is a capital crime. The law requires due process, so the law can step in when due process is infringed. On the other hand, if due process is properly applied, there is no legal liability for the prosecutor. All this is clear and indisputable. It is long enshrined and justifiably enforced. Violation of due process is deemed a criminal act—and it is easy to see why: it prevents unjust harm to innocent individuals. All civilized people adhere to it.

Or do they? Recently we have seen a number of instances of due process violations, carried out by the American government, leading to deportation and imprisonment. According to the US constitution, these are criminal acts (“No person shall” etc.). Of course, those alleged to have committed these crimes must be afforded due process: they must be allowed to defend themselves in a court of law properly constituted. We don’t want to punish people for violating other people’s due process rights without according them due process. If they are found guilty, they should receive appropriate punishment—some sort of deprivation. If they deported or exiled someone without due process, that person being innocent of the alleged crime, it would appear reasonable to apply the same kind of punishment. It might well be decided that we don’t want violators of due process laws living among us, where they might repeat their crimes—thus they might be deported and exiled. Or we could imprison them and fine them. That is a matter of detail and general policy. But suppose that we did institute deportation laws against people who wrongfully deport others by ignoring their due process rights. Then we would be deporting government officials who illegally deported others. Suppose an official knowingly deported a suspected criminal to a prison in which he would likely be murdered, and he did this in clear violation of the accused’s due process rights—arresting him in the middle of the night and forcing him onto a plane destined for said prison. The prisoner is duly murdered, as the official knew would happen. One might think that such an action should receive severe punishment, perhaps punishment in kind. The official is tried in a court of law respecting full due process rights and found guilty. Surely, he must be held accountable for his actions. By this standard, government officials engaged in illegal deportations should be treated as criminals: they have broken the law, as enshrined in the US constitution. A court of law may accordingly find them criminally liable.

I imagine the above reasoning will appeal to many left-leaning people. But these people may not be prepared for a corollary of that reasoning, namely that it applies equally to what has come to be called cancelling. In cancelling, persons are deprived of liberty and property without due process of law (not life, though suicide may result from cancellation). They lose jobs, income, access to opportunities, social status, a happy life. They may become exiled within their society, or feel driven to go elsewhere. The punishments are substantial, and they are intended to be. They are treated as criminals, in effect. But there was no due process. There was no formal hearing, no opportunity to reply to accusations in a legal setting, no impartial judge or jury. The punishment was applied based on factors that would not stand up in a court of law (gossip, progressive politics, prejudice, malice). This is rule by extrajudicial means, and notoriously fallible. Anyone guilty of inflicting harm on a person whose due process rights have been violated is himself guilty of a crime—that is, of an act of injustice. A professional body, say, that deprives a person of normal professional opportunities without any attempt at due process is guilty of due process violation. In effect, it is criminal. The same is true of individuals who actively thwart the normal freedoms of accused people, e.g., the freedom to go to a conference or be considered for a job on their professional merits. Due process is essential to all punitive measures; you can’t proceed without it. If for any reason it cannot be carried out, no one is entitled to assume guilt: if you have not been found guilty of a crime by a judicial process, you cannot be deemed guilty. That is our system, and it is a very good system. But cancelling is rampant in our culture today, which means that violations of due process rights are rampant. The proper conclusion, then, is that the perpetrators of these illegal acts should be held accountable and duly punished. Particularly egregious instances should be punished severely: we might want to consider deportation, prison time, or heavy fines. This implies that large numbers of professors in universities are guilty of a crime, i.e., harming others without due process. In my own subject, philosophy, there are numerous individuals who are guilty of the crime in question: deportation would seem a reasonable option (or am I being too harsh?). Systematic, organized efforts to destroy a person’s career, without even a hint of due process, should be treated severely: they violate basic principles of law and morality. I can think of dozens of people who, in a perfect world, should be exiled from polite society for committing the crime in question. But I would not endorse this punishment without due process: punishment for violations of due process should not violate due process. I would insist on this provision in any individual case: you won’t be found guilty of it unless a court of law (or some equivalent) has duly carried out the appropriate procedures. I would recommend firing anybody thus found guilty—it is a serious violation of the law. As things stand, I would say that most of the professors of philosophy in the United States are criminals. They deserve what they have inflicted on others, just as government officials who deport immigrants in violation of due process laws deserve legally sanctioned consequences. Violations of due process are violations of due process, no matter who carries them out.
We cannot treat people as guilty unless they have been found guilty.[1]

[1] This is not the same as thinking they are guilty, or even being certain of it; they must be declared guilty by a properly appointed judicial body. It is really quite amazing that so many people are ready to abandon this basic principle of justice when it suits them to do so. We should insist on it especially in cases in which we are convinced that the accused party is guilty (compare free speech).


Convergence, Truth, and History

We tend to converge on the truth. Independent investigators often arrive at the same truth because it is the truth. If investigators are not independent, their coinciding beliefs may well be explained by influence not truth: they have the same beliefs because of interpersonal contact. Convergence of belief in independent investigators is apt to be a sign of truth because otherwise the agreement would be a coincidence. The probability of truth rises with the number of independent investigators agreeing in their beliefs. Not so with dependent investigators: influence can easily explain agreement. There is agreement by way of objective truth and agreement by way of personal influence. If only one person arrives at a given belief, that doesn’t bode well for truth, no matter how much influence he or she may have. Belief by influence is no guarantee of truth; belief under conditions of independence is an indicator of truth—not infallible, but highly suggestive. Convergence goes with fact; lack of convergence goes with fiction, even if there is agreement by influence. This is the difference between a cult and a learned society—whether there is agreement under conditions of independence.
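The point can be given a simple Bayesian gloss (a minimal sketch; the likelihoods $p$ and $q$ are illustrative assumptions, not part of the argument). Suppose each of $n$ independent investigators comes to believe a hypothesis $T$ with probability $p$ if $T$ is true and probability $q < p$ if it is false. Then the probability of truth given unanimous agreement is

$$P(T \mid n \text{ agree}) = \frac{P(T)\,p^{n}}{P(T)\,p^{n} + \bigl(1 - P(T)\bigr)\,q^{n}},$$

which tends to 1 as $n$ grows, since $(q/p)^{n}$ shrinks toward zero. If the investigators are not independent, their $n$ reports do not constitute $n$ separate pieces of evidence, and the posterior scarcely rises above the single-source case.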

There is an analogue in evolutionary biology: convergent evolution versus inheritance. Some traits evolve multiple times quite independently; there is no inheritance relation between the animals sharing the trait—for example, locomotion and vision. Other traits evolve just once and are then passed on; these tend not to be so widespread—eye color, nose shape. Convergent traits tend to be good traits to have—that’s why they evolve separately. Inherited traits may or may not be good—sometimes they just hang around because they come with the territory. Convergent evolution is a sign of adaptive quality: eyesight would not evolve multiple times if it were not useful, indeed essential. A trait is objectively beneficial or it is not—but if not, it could still be widespread because of inheritance. Eyes are good things to have, but vestigial hairy skin may not be.

This distinction applies to the history of human thought: we have people independently discovering the same thing and people agreeing by virtue of influence. When people come to the same view independently, we tend to suppose that the view in question is likely to be true; when they do so by influence, we think this is less likely (though possible). If millions of people independently arrive at the opinion that the Eiffel Tower is tall, we think this is because it is tall; but if the members of a religious cult all have the same opinion, under conditions of influence, we don’t jump to the conclusion that it must be true. This is obvious and uncontroversial. But if we apply it to the actual history of human thought, we notice some interesting facts. On the one hand, there are many instances of convergence: in physics, astronomy, chemistry, biology, and even philosophy. I won’t rehearse all of this, except to remind you of Descartes, Galileo, and Newton in physics and Darwin and Wallace in biology. The same truths were independently discovered, thus adding credence to these discoveries; it wasn’t just a solitary individual who magically hit upon the truth by unrepeatable genius. In philosophy, Russell and Moore converged in their opposition to Hegel, as Russell and Frege converged in their logicism. But in certain cases, this was not so—we had the phenomenon of the solitary genius who exercised massive influence. No one else had the same ideas, such was their singular genius—though they had their followers, disciples, and acolytes. Three such thinkers stand out in recent times: Freud, Einstein, and Wittgenstein. Singlehandedly, they revolutionized their subject—they alone arrived at the truth about their respective domains of interest (allegedly: see below). No one else came close: there was no Wallace lurking in the wings. Hence, they are regarded as true geniuses, as less gifted thinkers are not (including Russell and Frege). There was no convergence, only influence: many people accepted their theories, though no one else came up with them independently. In this they exceeded their intellectual predecessors, such as those listed above. They saw farther than any other man. But shouldn’t this make us suspicious? How come only they had the brain power to make the discoveries they made? Other people came to accept their theories, but no one else anticipated them, came up with them on their own. Hmmm. What made them stand apart from the rest of the human race? How come the truth spoke only to them? Plenty of other people had their level of intelligence and yet did not happen upon the truths they revealed. That is what we have been encouraged to believe—the story of the lone but influential genius.

The trouble is that their theories (two of them anyway) are now discredited, to one degree or another. Here I have to put my cards on the table: I don’t think any of them spoke the truth, the whole truth, and nothing but the truth. That’s why there was no independent convergence. If they were onto the truth, someone else would have been too; but no one else was, so they were not onto the truth. They may have been onto some truths, but the main body of their work was not true. This is why there is something cultlike about their following: it relies on influence not independent discovery. I think that Freudian psychoanalysis is mainly false or highly dubious, as do many others; the same is true of Wittgenstein’s Tractatus and Investigations; and many have been skeptical of Einstein’s special and general theories of relativity. My aim here is not to defend these opinions (controversial as they are), but merely to point out that the theories in question do not have the support provided by convergent discovery. It isn’t as if other people quite independently came to the same conclusions, which you might expect if those conclusions were true; rather, they were propounded by one man and gained traction and influence. It is notable that they are not easy to grasp (unlike the theory of evolution by natural selection): they feel obscure and arcane, rather startling, contrary to common sense. You are expected to take them on trust, though bits of evidence are dutifully provided. They are creeds, dogmas, pronouncements. Thus, a cult surrounds their progenitors—they are seen as sacred figures. Their images appear on T-shirts. We can’t hope to understand them in their full profundity. Wittgenstein, indeed, inspired two cultlike followings. I could say much the same about certain proponents of quantum theory residing in Copenhagen, as well as certain philosophers. Did anyone else ever come up with Quine’s distinctive doctrines, or David Lewis’s? No, they were singular figures: the indeterminacy of translation, possible worlds realism. Both doctrines elicited “incredulous stares” (as Lewis famously remarked)—that is, no one else thought one day, “You know, meaning is a myth” or “Actually, actuality is no more real than non-actuality”. We don’t find independent thinkers happily converging on these startling conclusions; indeed, Quine and Lewis were virtually alone in confidently propounding them. Nor did colleagues say to themselves, upon hearing these doctrines, “Good heavens, he’s right, I see it now!” It wasn’t like Darwin, Wallace, and their scientific followers (Huxley, Haldane). In these cases, we don’t have the comfort of convergence; instead, we have an instance of influence emanating from a single source.

We can ask our question about other areas of human life: ethics, politics, art, music, clothes, religion, literature. When do we have convergence and when do we have influence? Is utilitarianism something that ethicists independently converge on, or is it a matter of influence? I would say we have a good deal of convergence, combined with some degree of influence. Isn’t this something that people are likely to converge on, given its evident correctness (at least as a large part of morality)? Democracy in politics is the same—it’s obviously a pretty sound idea that any reflective person could come up with. Art and music are different: here there is no obvious truth for them to converge on—they are more a matter of free creation. The idea of the solitary artistic or musical genius is not merely a piece of airy romanticism (though influence also plays a role). Clothes are a mixed bag, being both practical and aesthetic: trousers, convergence; flared trousers, influence. Utility and fashion operate by different rules. Religions may converge from different places onto the same core ideas—a divine being or beings, priests, collective worship—but they may also propagate by causal influence. The former are more likely to be solidly based than the latter, being more reflective of general human nature. In the case of literature, influence will dominate, but literary forms may be independently arrived at in virtue of their inherent properties (the novel form is clearly good for telling long involved stories). None of these cases will be simple and straightforward, and it may be difficult to discern what is what, but the distinction holds in these areas too. The more objective and universal something is, the more likely it is to be independently converged upon; the more limited and local, the more subject to influence. Memes are more local than facts of nature. Independent convergence will be a sign of veracity or lasting value, though not an infallible sign.

The general drift of these reflections is to throw cold water on the idea of the solitary singular genius, especially in scientific pursuits, including philosophy. If something is true, it is going to be discovered by several individuals, as has frequently happened in the history of human thought. In cases in which someone is hailed as a solitary genius, singlehandedly producing a new idea, we should be on the lookout for error; people just don’t differ that greatly intellectually (Einstein’s IQ was not higher than that of other physicists). When an idea’s time is ripe, it is likely that several minds will latch onto it; if only one mind does, we are likely to be in the land of creative fiction. It may be genius, but it is not true genius.[1]

[1] One might formulate a law of discovery: All true ideas are independently discoverable. No truth is such that only this individual could discover it. If Freud were right, he would have rivals in discovery—in other parts of the world, on other planets. If Wittgenstein were right, there would be a twin Wittgenstein somewhere saying the same thing. If we are resistant to these possibilities, that can only be because we sense they were not right. Darwin is not essential to Darwinism, but Freud strikes us as essential to Freudianism; and Wittgenstein to the doctrines of the Tractatus and Investigations—as no one but James Joyce could have written Ulysses. Berkeley is an interesting case: why does he have no co-discoverers? Would someone else have come up with the same ideas eventually? Is his work really a work of fiction? It does seem like an inherently singular vision. Could anyone else have arrived at the achievements of Shakespeare? By contrast, scientific truth always admits of independent discovery by a plurality of individuals. If Einstein had never lived, would someone else have come up with his theories? I rather doubt it.


Jim and Me

I was over at the tennis wall at the Biltmore yesterday, as I frequently am. My pal Jim was there, a retired tennis pro. He told me he had just turned 78 and was working on hitting his forehand from shoulder height; he liked to learn new things. He demonstrated the technique and indeed was hitting it quite nicely that way. He then showed me the same stroke on the backhand side, which is appreciably more difficult. It’s an important skill to have because the ball does not always obligingly come in at waist height, but it takes a lot of practice and strength building. I said I was still working on my left hand, as I have been for the last two years. I told him I’d just turned 75. Actually, my left hand has steadily improved, to the point that I can now hit forehands left-handed fairly well. I then remarked that I had some pain in my right heel that was hampering my movement; this was caused by my skateboard hitting some vegetation on the road, which stopped the wheels rotating and pitched me forward. My feet came down quite heavily and bruised the heel (it’s much better today). He gave me some advice on what to do about it. We carried on hitting.


Kings and Queens

The President was in his bathroom fantasizing about torturing his political enemies. They clearly deserved it. It was a habit of his; he meant no harm by it (or not much). He stood up and his ample buttocks were reflected in the gold plate of his Presidential toilet. He gazed at his pudgy polluted face in the mirror and told himself he looked very handsome today—forceful, masculine, yet oddly pretty. Many people said so and they should know, despite the losers and haters that found him repulsive. He flashed his high-priced implants and smoothed his carefully arranged hair. He had to look good today. He had a press conference at noon: it was sure to draw a big crowd, it always did. He felt a mixture of dread and jubilance: dread because of all the obvious lies he would have to tell, jubilance because he prided himself on his lying prowess (no one ever in the history of the world was a better liar than the President). Journalists were all dumb anyway, especially the intelligent ones. That young girl reporter from CNN was always trying to trip him up by asking him factual questions, but he knew how to deal with her—just pivot to how bad her network was. He must use that word more often, pivot, because it sounded smart and decisive—he was one of the great pivoters in history. His pivoting was sorely underrated.

The President walked into his large kitchen area and saw his wife sipping tea and reading a women’s magazine. She looked up briefly and then down again. As so often, he reminded himself that she was reckoned one of the world’s top women; not many women could compete with her regarding top woman. She had won several beauty contests in Central Europe and every man wanted to sleep with her (even his own sons, it was sad). He didn’t know what to say so he said, “Have you seen the polls today?” In a heavy accent she replied, “I never look polls, I don’t trust them”. “We are way up in Indiana”, he noted with satisfaction, noticing her eye makeup. He thought she was looking too skinny, as if she wasn’t given enough to eat. “You should eat more,” he advised, but she wasn’t listening to him.

In his Presidential office he was surrounded by his sir-men and yes-women. These were all good people, great people, they love our country. But he was suspicious of all of them: they were all out to get him, rob him, undermine his authority. Some of the men were aggressively taller than him, and not all the women (scandalously) wanted to sleep with him. Still, he could always just get rid of them. He was incredibly good at getting rid of people, famous for it—firing them, deporting them, imprisoning them. They should really be grateful he wasn’t having them shot. Shooting people was one of the things he regularly fantasized about. Criminals should be shot, also treasonous generals, and people who don’t love our country. He could shoot anyone and no one would object—he was that popular. But he was a good man, a decent man, and wouldn’t shoot someone for no reason. There had to be a reason, like shoplifting or supporting terrorist organizations. He had been shot at himself, probably by his political opponents, so it was only fair that he should shoot back. It was common sense: they shoot at you so you shoot back—except you don’t miss, because you are a great shot. That was how the world worked. He smiled to himself, if you can call it that. He ordered a Coke and wondered who he would call on the Presidential phone. Perhaps he should catch up on cable news, which he dominated like no one in history.

The Presidential press conference was like a rock concert, with him as the star turn. All he had to do was scowl and wave his hands. Occasionally he made sounds with his tight little circle of a mouth. His voice was his main weapon (he actually didn’t know how to shoot): it went from a nasty rasp to a buttery bleat. And he had the best words: the best insults and put-downs, the classiest vocabulary. Sure, he could cuss like a world-class cusser, but he also knew how to talk civilized. He was from Queens after all, where people are people, not so-called elites and low-ratings losers. With regard to the speaking, he was in a class of his own—sort of a Shakespeare, you could say, but without the long words that nobody knows anyway. He opened the proceedings by congratulating one of his great people, the Secretary of Offence, who had recently succeeded in deporting dozens of people who don’t love our country, several of them minors. The girl from CNN, though, wanted to make trouble, asking her nasty questions. She asked if any of the deported people had broken any laws. The President had a ready answer to this unpatriotic question: “I am the law,” he stated. “The law is my decision, because I was elected by the people, not you and your failing network”. This was a decisive put-down and the President moved on to talk about dress codes in the halls of government. Another reporter asked him about his policy towards Europe, to which he loftily replied, “I’ve never heard of it”, eliciting a loud guffaw of support from his allies and enablers. He also deftly parried questions about breaches of national security caused by top secret information being revealed on the Jeff Regan show, observing that the information hadn’t yet led to any actual deaths of civilians. He felt he had turned in a characteristically stellar performance, marred only by failing to insult the lead reporter from the Times, whom he particularly hated.

The President had received a beautiful invitation from the president of England, whose name he couldn’t quite remember, to come and visit the king of England. This idea appealed to him because he felt it was only right for him to hang out with the top royals in the world. He actually despised the king of England: he couldn’t understand a word he said, the king was pro the environment, and his women were not among the top women of the world. Still, he could lord it over this so-called king (wasn’t he once just a prince?) as the most powerful person in the most powerful country of the entire universe. Plus, he had a couple of inches on the king and he drew bigger crowds (the king of England didn’t even have rallies). It was in this frame of mind, if we may call it that, that he turned up at the palace, motorcaded to the hilt. The king was drably dressed but wearing a crown. This immediately put the President on edge—where was his crown! He felt that the king was trying to insult him, diminish him in the eyes of the world. He asked the king how much he would take for the crown, while squeezing his hand, but the king affably replied that the crown wasn’t for sale (it was a hair loom or something). These Americans, with their odd sense of humor, the king mused. Anyway, with that gaffe smoothed over, they went on to a royal banquet in which the President tried to eat food that his gut wasn’t remotely familiar with. He didn’t actually vomit, but his belly let him know it wasn’t happy. He suavely remarked to the king that he was more of a burger and fries man himself, chuckling urbanely. The king nodded, smiled, and carried on talking to the person next to him. The man from Queens was hobnobbing with an actual King—how cool was that? The President felt he had made a great impression, which was no doubt true.

It was only discovered later that the President had arranged with his secret service agents to steal the crown from the king of England. It was surprisingly easy, the king not suspecting that the President might have designs on his crown. It ended up in the Presidential bathroom, where the President could happily gaze at it and revel in his Presidential power. Naturally, the theft caused a diplomatic incident, in which the President was correctly accused of stealing the king of England’s crown. The President tried to brush it off by saying the king should be more careful and anyway he was thinking of annexing Great Britain. He seemed genuinely bemused when this didn’t go down well, at home or abroad. All his bluster about royal losers, sleazeballs, and crooks didn’t turn the tide of public opinion; he was held accountable for the act of Robbery of a Royal Appendage. It didn’t help matters when he loudly asserted that the king of England was the worst king England had ever had, ever. Even his most loyal supporters found this a little hard to stomach and could see in what direction the wind was blowing. He had to make a plea bargain to step down instead of serving hard time. He thereupon sank into oblivion and was never heard from again. He never did understand quite what had happened.


Am I an Analytical Philosopher?

The question is not easy to answer. On the one hand, I have written extensively on topics not usually covered in the analytic tradition typified by Frege, Russell, Wittgenstein, Moore, and their successors—disgust, good and evil in literature, sport, mind manipulation, Shakespeare, dreams, movies, the hand. I happen to have studied Husserl and Sartre, I have a background in psychology, and I have written two novels. My prose style is more literary than that of your typical analytical type, as well as more demotic (I also write pop songs, which I don’t think Frege ever did). I range outside the analytical canon and I do so in a more congenial style—you might think of me as assertively non-analytical (even anti-analytical). And you would not be wrong: I am self-consciously rebelling against the norms of respectable analytical academic philosophy. This entire blog is in many ways a rejection of the prevailing norms and practices. I am thus seen as an intellectual nonconformist, a “maverick”, a free-range performer. And yet, on the other hand, I am a staunch advocate of conceptual analysis as the proper method in philosophy—a throwback, a traditionalist.[1] I also believe that philosophy is a science in the strict sense; it is not one of the “humanities”.[2] I am all in favor of necessary and sufficient conditions, argumentative clarity, rigor, refutation. So, I appear to be a shape-shifter, a mongrel, a divided self.

But there is really no tension in me. I am just a wide-ranging philosopher who believes philosophy is analytical. I don’t believe that philosophy is all about language, or even about concepts. I think that the philosopher analyzes things, not concepts of things for their own sake: he or she analyzes things conceptually, i.e., by examining concepts. Concepts are the method, not the subject. Philosophy is unlike empirical science in that it doesn’t do experiments in the lab or make observations in the field: it isn’t based on the five senses, particularly vision (you can have bad eyesight and be a good philosopher). There are really two senses to the phrase “analytical philosopher”: a certain tradition and a certain method. You can belong in the tradition founded by the likes of Frege, Russell, Wittgenstein, and Moore (among others) and be interested in what interested them (and only in that); or you can adopt and advocate a particular method of doing philosophy, which may be called “conceptual analysis”. This latter has nothing to do with avoiding science: nothing prevents you from studying science avidly and analyzing it. I myself am a scientist by training and inclination; I read a lot more science than philosophy (biology, physics, economics). I write about this stuff. I even think that philosophy is a branch of biology. I am amused by scientifically illiterate (or inexperienced) philosophers bowing down to the latest piece of methodologically dubious psychology; they should spend some time in the lab doing experiments themselves! In any case, I am an analytical philosopher in the method sense but not the tradition sense: I don’t endorse the kind of exclusivity of traditional analytical philosophy as to subject matter, but I do endorse carving out a place for philosophy distinct from empirical science (or literature for that matter). Philosophy is uniquely itself. I just cast the analytical net more widely. So, am I an analytical philosopher? In one sense no, in another yes.

Where do other professional philosophers fall in this respect? I think they are nearly all analytical in the second sense, even though they might repudiate the label. Wittgenstein is, early and late; so is Quine; so is Rorty; ditto Davidson, Kripke, Lewis, Strawson, Dummett, Fodor, Nagel, Rawls, and a great many others. All these people give conceptual arguments for their positions, and they attempt to tell us what certain things are without making empirical observations. Fodor, for example, gives a priori arguments for the language of thought (he is not a practicing scientist). Has Quine ever offered any experimental results proving that to be is to be the value of a variable? Good for them, I say. In fact, I can think of only one philosopher in my lifetime who was clearly not an analytical philosopher, and hence not really a philosopher at all—Richard Wollheim. Being one would have been far too conventional for Richard. Why do I say this? First, because his only real philosophical interest was in pictorial art; I never saw him take a serious interest in anything else (we were friends and colleagues for many years). Second, he approached everything via psychoanalysis (which he pronounced with a hard “p”), never conceptual analysis (he believed in psychoanalytical philosophy). I never heard him produce a counterexample or offer a set of necessary and sufficient conditions. And third, he was a deeply obscure, indeed obscurantist, writer, though a writer of great elegance. No one else writes like Richard Wollheim. Philosophically, he is unclassifiable—which is just the way he liked it. He alone is no analytical philosopher. His first book was on Bradley, an exceedingly unlikely choice for an Oxford-educated philosopher of his generation. He was not what you would call mainstream.[3]

[1] See my Truth by Analysis (2012).

[2] See my “The Science of Philosophy” in Metaphilosophy (2015).

[3] I cannot resist offering one Richard anecdote illustrating his deviations from the norm. He once called me on some departmental matter and I remarked ruefully that I was just going to the launderette. He immediately ejaculated, “Oh, I love the laundromat!” When we talked philosophy I hardly ever understood a word of what he was saying.


An Audacious Solution to the Mind-Math Problem

The mind-math problem is the problem of explaining how the mind and mathematical reality manage to come together: how do numbers and geometric figures get to be apprehended by the mind? Suppose we adopt a Platonic view of mathematical reality—it consists of abstract objects, existing outside space and time, independent of the physical and mental worlds. Suppose too that we regard the mind as existing in space and time, concretely, embodied in the physical brain. How, then, can the mind make contact with the mathematical realm? There can be no interaction, no contact, no common ground. If the mathematical were mental (or physical), then the mind would have a chance of becoming acquainted with it; and if the mental were mathematical, some sort of communion would be possible. But the two belong to different worlds, almost different universes. We would not be surprised, on metaphysical grounds, to learn that the twain never meet, so ontologically remote are they; and yet they manifestly do meet, intimately so. For we have mathematical knowledge, mathematical perception (apprehension), and mathematical beliefs (and even desires). It seems easy for them to meet. But how is that possible, given that there is no causal connection? Physical objects cause our knowledge of them (we are told), but abstract objects can’t do that—they have no causal powers (we are told). This problem then leads to attempts to close the gap: we can reduce the mathematical objects to something closer to the mind (ideas, notation), or we can equip the mind with special non-causal faculties that permit quasi-mystical communion with the Platonic world (intangible telescopes, etc.). Neither approach meets with universal acceptance; indeed, both are generally acknowledged to be far-fetched and revisionary. We clearly have such knowledge, but we can’t fit it into our preconceived assumptions—Platonism about mathematical truth, naturalistic causalism about knowledge in general (propositional and objectual).[1]

What notion of causality is in play in these reflections? The kind derived from modern science (allegedly) and the kind derived from common sense (allegedly). We call this “mechanism”—causation by proximal contact, impact, bodies in motion touching each other. That kind of causation is clearly inapplicable to the math-mind relation. But mechanism has long been out of favor and now looks like common sense gone awry, ever since gravitational action at a distance became accepted as real. However, gravity is not the right model for mathematics either, because numbers and geometric forms have no mass and, according to Platonism, do not exist in space. Still, might there not be a broader notion of causality that applies to the relation between math and the mind? In earlier papers[2], I have suggested as much: logical relations, particularly entailment, can be viewed as a species of causal relation. I won’t repeat the arguments here, but their relevance to the present issue is immediate: mathematical reality, construed Platonically, causes mathematical belief, in the extended sense of “cause”. Moreover, it causes the brain to be configured in a certain way—that is, it is (part of) the causal explanation of the brain’s structure.[3] It is because numbers and figures are a certain way that people have the mental and brain states that they have. This is a far cry from mechanistic causation by proximate interaction; it is a sui generis type of causation or causal explanation. We can say that mathematical truth gives rise to mathematical knowledge, has it as a consequence. It is, indeed, hard to see how this could not be so: for it is scarcely conceivable that mathematical truth plays no role in the etiology of mathematical knowledge, as if it had nothing to do with what people believe mathematically. It is because 2 + 2 = 4 that people believe that proposition, to put it crudely. How could they come to know it by some other means, such as sensory perception of material objects? There must be some sort of causal generative connection here. The numbers must be exerting some sort of “force” that produces beliefs about them, though not any physical force with which we are familiar. We might call it the “mathematical force” just to have a name (or “mathemity” to mimic “gravity”). It is defined as whatever it is about numbers that makes them able to command belief—their propensity to invite belief. Once we apprehend them, they induce us to form certain beliefs about them and not others.

It may be said that this is all very mysterious and should not be entertained for that reason. But this is a bad argument: even mechanical causation is mysterious, as we have known since Hume. All causation is mysterious, but it doesn’t follow that it doesn’t exist. Thus, the way is open to accepting mysterious causal Platonism (we already accept mysterious Cartesian causal mechanism). This theory enables us to respond simply to the initial problem: there is no incompatibility between mathematical Platonism and a broadly causal conception of knowledge. We just need to jettison old-fashioned ideas about what causation can be. True, the result is pretty mysterious, but no more so than causation in general; and isn’t it really quite commonsensical, given that we have no trouble with the proposition that we believe what we do mathematically because of how things are mathematically? It certainly isn’t because of anything else (sensations of color, aches and pains, the sound of number words). I will even venture to suggest that the ability of this view of causation to solve the mind-math problem, which has hitherto proved intractable, puts the underlying metaphysics of causation in a stronger light.[4]

[1] Paul Benacerraf’s well-known paper “Mathematical Truth” is the locus classicus here, but the problem is as old as Plato.

[2] See my “A New Metaphysics”, “Causal and Logical Relations”, and “Because”.

[3] This causal explanation may trace back to genetic selection: the genes make the brain they do because of certain mathematical truths, thus installing innate configurations. That is, we have basic mathematical knowledge innately in virtue of mathematical facts; similarly for basic physical knowledge.

[4] We could take a similar view of ethical knowledge: ethical facts cause ethical belief, though not in the mechanistic sense but the “giving rise to” sense. We have the ethical beliefs we do because of the ethical facts; these are the origin of the causal chains that lead up to ethical belief (at least some of the time). It is the badness of pain that makes me think that pain is bad, not (say) the emotion that pain produces in me or what people tell me. The explanation of ethical belief involves ethical truth (though other factors can come into it)—sometimes, if not always.


Impersonating Trump

Impersonating Trump has become a growth industry. I’d like to see a Trump AI robot. But have you noticed that Trump officials and followers are beginning to impersonate him—his bluster, his insults, his nastiness? I wonder how much his personality and style have seeped into the culture—here and abroad. One Donald Trump is bad enough, but millions?


Action and Trying

Davidson once memorably said, “We never do more than move our bodies; the rest is up to nature”. This aphorism has the sound of an illuminating truism, but does it stand up to critical examination? Suppose you are suffering from paralysis, total or partial, following an accident. Your physiotherapist asks you to try to move your arm and you find you can’t move it: wouldn’t you think, “I’m trying, but I’m not succeeding; nature won’t let me”? Your body is part of nature and it is not cooperating, so you can’t act; isn’t this just like trying to lift a weight that is too heavy for you? Your act of trying can’t overcome the dictates of nature, whether your own body or the world outside it. Wouldn’t it be more accurate to say, “We never do more than try to do things; the rest is up to nature”? There is our will on the one hand and nature on the other, i.e., the world outside our will. When you lift your arm, do you move the bone in your arm but not the clothes on it—is the latter part of nature but not the former? What if your arm is partly prosthetic? What if you always carry a gun in your hand? The distinction between body and nature is artificial, but the distinction between will and nature is not (of course, the will is also part of nature). The correct aphorism is: We never do more than try to do things; the rest is beyond our control. That is, we only have direct control over our will; the rest is a matter of whether the world beyond cooperates. Adopting the terminology of basic actions, we can say that our basic actions are acts of trying (willing); anything else is non-basic, i.e., consequences of acts of trying. The body thus has no privileged position in the philosophy of action. What indeed is the body: does it include the hair on the body, sweat, clothing, tools, machines, other people? Nature and the body merge, with the will attempting to manipulate them. All we really, basically, do is will things; the rest is out of our control—that’s a matter of whether nature chooses to go along with our will. It’s not the body-nature division that matters (not that this is a real distinction); it’s the will-world division. So it would appear.

Is it true that whenever we act we try to move our bodies? And is that really all we try to do? Neither proposition is correct. Surely, we try to do many things beyond moving our bodies—we try to fix things, go places, have careers, find love. The intentionality of trying is wide in scope and not limited to the body’s movements. And do we always try to move our bodies when we try to do these other things? Do we try to contract our muscles, or activate our efferent nerves? We do not—we may not even think about these things. Our mind is not concentrated on our body: you might be trying to score a goal, but you don’t think about your leg at all—you take that for granted. Your brain causes your leg to move thus-and-so, but you don’t give it a second thought—you have your mind set on the goal and goalie. Trying to move your leg in a certain way may hinder what you are trying to achieve—there are only so many things you can think about at the same time. Your attention is on the goal not the leg. So, it isn’t that trying to move the body is basic and essential to acting; trying is far too protean and plastic for that to be true. Trying goes with intention and desire, which generally concern ends not means. In principle, you could try without even having a body: you could be a disembodied mind that is suitably causally connected to the external world (isn’t that what God is supposed to be?). What is essential to action is the will (capacity to try) and a causal link to external reality; the body is just one means for getting things done, not a sine qua non. We never do more than will; the rest is up to causality. The basic actions are acts of trying. The body is not at the center of the philosophy of action—it is not even an essential component of the subject. So it would appear.

It might be objected, however, that we are underestimating the role of the brain. Doesn’t the brain (in terrestrial animals) cause and control the body, so that it must be occupied with the body, even if the person (or other agent) need not pay the body much attention? This must be conceded: the brain has to have the wherewithal to initiate movements of the body, and this must be detailed and representational. The brain needs a “body image” in order to go about its business. And it must act with that body image “in mind”; it can’t ignore the body as the conscious agent can. Isn’t the brain then part of the philosophy of action of embodied creatures such as ourselves? If so, a philosophy of action that focuses exclusively on what the agent tries to do is incomplete; we need to add what the brain is up to as well. A human action (as well as the actions of other animals) consists of a mental act of trying and a physical act of brain initiation. The combination is the true nature of action as we find it: agent trying plus brain stimulation. In creatures lacking mind (think worms) action is just brain stimulation—efferent nerves and muscles—and such creatures can equally be said to act. Conscious trying is superimposed on this basis, and is presumably a later evolutionary development. Both aspects need to be acknowledged in an adequate philosophy of action—we thus have a double aspect theory of action. Indeed, we have a double ontology theory, since the act of trying is not to be identified with the brain’s action of causing bodily movements. The action of raising your arm consists of two actions—your act of trying to raise your arm and your brain’s action of innervating the relevant muscles. Both need to be described and explained, and integrated. A good philosophy of action has a mental component and a physical component, because two things are involved; an action is both of them together not one separately. The motor part of the brain confines itself to the body, down to the fine details; the conscious agent is more concerned with ends and results and has little time for the physiological machinery. What we call action straddles these two domains and it would be a distortion to limit it to one of them. The agent and the brain are both centrally implicated.[1]

[1] The work of Davidson on action and O’Shaughnessy on the will form the background to this paper. I am adding the brain as an essential component to the story. Both philosophers were too monistic, though in possession of important truths. Human action is Janus-faced. (Let me add that the philosophy of action is a remarkably tricky subject.)
