Trumps

How should we cope with Trump’s looming presidency? I think we need a conceptual switch: stop thinking of Trump as a man and start thinking of him instead as a disease. Trump is an epidemic that is sweeping the land. This disease has infected the minds of a great many people: they are suffering from trumps (like mumps). There is no vaccine for it; it spreads through the air (and airwaves); there is no known cure. It is borne by memes—all those tics and slogans emanating from the engorged Trump virus. The Republican party is completely sick with it. Ordinary people are coughing and sneezing with it. The media are awash in it, staggering along, bedridden with the disease. It affects every demographic, not just old people. So far children are immune, but doctors are warning of the spread to them too. The disease is fatal to good sense and decent morals—they wither in its onslaught. Women are as vulnerable as men. Bodies are snatched by it, minds captured. Even those not infected are afflicted with the disease: they see it all around them and wonder what damage it will wreak. It evokes fear as well as insanity. They could make a horror movie out of this malady. Covid has nothing on Trump. Variants are springing up already, mutations, new pathogens. The disease is on the march. Modern medicine is no match for it. Politics as we used to know it is the wrong model; this is politics by disease transmission not persuasion. It turns out that the human immune system cannot handle trumps. We can only hope that we develop a natural resistance to it in time. Scientists are working feverishly to produce an effective antiviral. So far, only humor and ridicule have shown any clinical promise. Some researchers are recommending a change of diet—no more bingeing on social media and cable news junk food. Maybe the whole educational system has to be reinvented.

Academic Freedom and Sex

I apologize for discussing such a sordid subject. I don’t mean the subject of sex; I mean the threats to academic freedom this subject invites—from the right and the left. Nominally, we are all in favor of academic freedom, but that tolerance is apt to waver when sex is the topic. Let’s consider a hypothetical case. A professor finds himself interested in the relationship between sexual fantasy and creativity: he wonders whether there might be a correlation, even a causal connection. Sexual fantasy can be creative and this might be connected to non-sexual creativity—might it be the origin of creativity in humans? He needs data. He convenes a seminar in which the participants keep a diary of their sexual fantasies to be rated for creativity; then he tries to correlate this with other measures of creativity, say poetic inventiveness. It doesn’t matter for our purposes whether this is a wacky idea; suppose it isn’t—suppose it has some truth to it. Accordingly, members of the seminar, male and female, present their sexual fantasies for group discussion and evaluation. This is all voluntary; everyone is over the age of twenty-one; nothing untoward happens. Data is gathered; the hypothesis is tested; it turns out there is a strong correlation. Do you think the university administration is going to be happy with all this? Do you think the local feminists, hot for cases of “sexual harassment” etc., will raise no objection? I doubt it. The press will be eager to cover it, questions will be asked, fingers will be pointed. The professor might find himself in a heap of trouble. But isn’t this a classic case of infringement of academic freedom? What if a participant complains to the chairman, upset that her fantasies have been deemed uncreative? She feels humiliated and put down. She doesn’t feel “safe”. What should we say about all this?

The point I want to make is that the seminar needs a special kind of protection, because it is a special kind of proceeding. It is an academic proceeding. It requires a specific kind of mind-set, which we might describe as disinterested, detached, scientific, objective, unemotional, impersonal, pure, intellectual. This mind-set views sex and sexual fantasy as a natural fact like any other natural fact. It seeks to understand that fact–its structure, its causes and effects. It is the opposite of the pornographic mind-set. It may be a mind-set quite alien to the majority of people—those who are not academics. These people may therefore not understand it, suspect it, seek to curtail it. Thus, it needs to be protected—because it can easily come under threat. The atmosphere in a room like my hypothetical seminar room is unusual; it tends to be dry, analytical, humorless, serious. If you read a transcript of it, you might come away wondering what the hell these people were up to, the professor in particular. It would read a lot like literary porn, especially when taken out of context. The words alone might condemn it in your eyes. But you would be wrong: it is an academic exercise, a scientific inquiry. It needs special protection because it is easily misunderstood, unfamiliar to many people. It isn’t barroom chatter, or therapy speak, or outrageous speech for its own sake. It is a unique kind of discourse, calling for a distinctive type of mental attitude. It isn’t for everyone. The context and purpose make all the difference in the world. It can’t be judged by snippets of dialogue, words employed, acts described. Academic freedom is the freedom to engage in this kind of mind-set with other consenting adults.[1]

[1] I have some personal experience of the phenomenon described here, having written Mindfucking (2008), a book on disgust (The Meaning of Disgust, 2011), a treatise on the hand (Prehension, 2017) in which sex is briefly mentioned, and a pair of novels with sex scenes in them (not easy to write). It’s amazing the reaction such works can evoke. And let’s not forget famous novels by James Joyce, D.H. Lawrence, and Vladimir Nabokov (not to mention Sigmund Freud, Bertrand Russell, Masters and Johnson, et al).

Does Ethics Have a History?

It might be thought obvious that it does: hasn’t ethical thought changed over time? What we used to find morally acceptable we now find abhorrent. But be careful: are you distinguishing right and wrong from our thought about right and wrong? Ethical thought certainly has a history, but does it follow that ethical truth does? A great many things have a history—the things themselves not just our knowledge of them: human beings, animal species, planets, stars, galaxies, towns, countries, etc. Thought about these things has a history, and so do the things thought about: they came into existence at a certain time, and changed over time, sometimes perishing in due course. They were caught up in the causal web we call history. But is that true of everything—what about logic, mathematics, natural laws, space, time, the empty set, Platonic universals (if there are any)? Apparently not. Does the law of noncontradiction have a history, or the law of gravity, or the number 2, or time itself? If they did have a history, it would be sensible to ask when they came to exist, how they have changed, when they might perish, what caused them—but it isn’t. They have no history—how could time have come into existence at a particular time? They are ahistorical.[1] Thought about them has a history, but they don’t have a history. Evidently, we can have historically situated knowledge about ahistorical things, as well as historical things. Might the same be true of ethics?

We think it is true that happiness is better than misery, that we should keep our promises, that murder is wrong, that pain is undesirable; but when did these things become true? Not when we discovered them to be true, or accepted them as true, because we were recognizing an antecedent fact. It would be absurd to say that promise-keeping became obligatory on a certain date; that would preclude us from saying that an earlier act of promise-breaking was morally wrong. Nor would it be appropriate to ask what caused promise-keeping to become obligatory, as we might ask what caused a certain war. Did pain become undesirable only when people first formulated that thought? Pain has always been undesirable, a bad thing, something to avoid causing if at all possible. It would be absurd to ask whether pain became bad gradually or suddenly, before the holidays or after. Justice was always good; it didn’t become good at a certain period of history—though it may well have been recognized as good at a certain period. Ethical truth, like logical truth, has no history: there was no time at which the law of noncontradiction didn’t hold, only to come to hold later; and there was no time at which murder was not wrong, only later coming to be wrong. Wrongness is not something that can come and go with the vicissitudes of history, like prosperity and the plague. In a slogan: moral values have no history. We can be wrong about them, to be sure, but that doesn’t imply that they change with our attitudes; there is no alteration in their nature. They don’t grow old or fall apart or get bigger. They are not subject to the law of causation. So, ethics has no history, if we mean ethical value itself; of course, ethical thought and practice have a history, a checkered one. Logic has no history either, if by logic we mean logical laws themselves; of course, logical thought has a history, part of the history of the human species. 
When people write books called things like A History of Ethics, we must bear in mind that the title is misleading; the author means A History of Ethical Thought. The word “ethics” is systematically ambiguous. Keeping the two meanings apart aids clarity.[2]

[1] Suppose we ask whether identity has a history—that relation in which Leibniz was so interested. The question is bizarre: are we eager to be told where it lived and when? Do we expect to learn the date on which identity became subject to Leibniz’s law (sometime in the seventeenth century, or possibly at the time of Plato?). Identity does not mature, suffer setbacks, and then finally assert its dominion over all of reality. There cannot be a history of identity. But there can be a history of human thought about identity, taking in Leibniz, Frege, and Kripke. I am asking whether goodness is similar to identity: are the truths about goodness and identity truths of history or are they essentially ahistorical?

[2] If you are one of those benighted souls who thinks it’s smart to define right and wrong in terms of beliefs about right and wrong, then it will follow trivially that ethics (the subject-matter) has a history—at which point you might want to consider contraposing. This paper is intended for those who see the distinction and are confused by the phrase “history of ethics”. Ethical reality itself has no history; it undergoes no change. To put it differently, the only history that ethical values have is a history of human (possibly also animal) psychology—when and where ethical truths came to be perceived, accepted, and taught.

Confidence

I came to America with a very positive attitude towards its people (naively optimistic, you might say). It wasn’t long before that confidence was shaken by personal experience. My confidence steadily eroded over the years (I name no names) with professional philosophers the main culprits. It culminated in my experiences of a decade ago. I watched American philosophy try to destroy itself, and do a pretty good job. The first election of Trump dealt it (my confidence) a severe further blow. His recent triumph has dented it beyond repair. To me it has been a growing disillusionment. I see it all as a pattern, a predictable decline, enthusiastically executed. All that remains is for me to watch from afar as America proceeds to destroy itself—for no reason whatever. I will gain grim satisfaction from this, and some gallows amusement. This is one immigrant’s story.

Consciousness and Logical Form

Consider the sentence “It is a necessary truth that for all conscious beings there is something it is like to be that being”. If we render this in standard logical notation, we have two quantifiers and a modal operator. These generate scope distinctions and hence alternative readings of the sentence—nine in all. I will be concerned mainly with two of these, corresponding to the quantifiers: we can put the universal quantifier before the existential quantifier or vice versa. We thus obtain quantified sentences analogous to “Everyone loves someone” and “Someone is loved by everyone”: the former allows for different people to be loved, while the latter asserts that a single person is uniquely lucky in love. In the case of my sentence of interest, we might be saying either that every conscious being has some type of what-it’s-likeness (not necessarily the same type) or that there is one type of what-it’s-likeness that every conscious being has. Does the “likeness” property vary from case to case or is it constant? As commonly interpreted, the sentence is taken to mean the former thing: bats and humans have different ways that it’s like for them. That’s why we don’t know what it’s like to be a bat—because bats are different from us in this respect. Consciousness varies in its form from case to case, and what is true is that every such form exemplifies a variation in the likeness property. There is nothing in common between all the different forms of consciousness, except that there is something it is like to be conscious—not necessarily the same thing. It might indeed be maintained that there is nothing it is like that is shared by all forms of consciousness. In the jargon, there is no universally shared qualia (or quale), though there are many types of qualia. The situation, it may be said, resembles the concept of a game—there is a family resemblance between the two cases of family resemblance. 
Just as there is no one thing that all games have in common, so there is no one thing that all beings there is something it is like to be have in common—save that they are all games or all cases of what-it’s-likeness (in effect, this makes the concept of consciousness itself a family resemblance concept). There are many ways of being conscious but no single property common to them all—no single property that they all share. That is, there is no single feature of what-it’s-likeness that all conscious beings possess—no single subjective fact, just a motley of different types of subjective fact. There is, for example, no subjective fact in common between bat echolocation experience and human visual experience—compare chess and rugby.  More generally, there is no property of consciousness that applies universally to all cases of consciousness and that constitutes what it is like to have consciousness in all cases. There is just an irreducible plurality of ways of being conscious.
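The contrast between the two orderings can be made explicit in notation. In a minimal formalization (the predicate letters here are my own illustrative choices, not standard notation): let Cx abbreviate “x is conscious” and W(x, w) abbreviate “w is a way it is like to be x”. The two readings then come out as:

```latex
% Reading 1 (universal quantifier first): every conscious being has
% some subjective character or other -- possibly a different one in each case.
\Box\,\forall x\,\bigl(Cx \rightarrow \exists w\, W(x,w)\bigr)

% Reading 2 (existential quantifier first): there is one subjective
% character that every conscious being shares.
\Box\,\exists w\,\forall x\,\bigl(Cx \rightarrow W(x,w)\bigr)
```

As with “Everyone loves someone” and “Someone is loved by everyone”, the second reading entails the first but not conversely: if a single character w is shared by all conscious beings, each being trivially has some character; the converse fails whenever the characters vary from being to being.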

Is this true? I doubt it. Certainly, many philosophers have supposed otherwise: Brentano, Husserl, and Sartre to name three. It has been supposed that intentionality is the common thread, or subject-dependence (all consciousness has a subject of consciousness), or nothingness. These properties are held to be phenomenological properties and they are supposed necessary and sufficient for consciousness. There is something it is like to possess intentionality, or to have a subject, or to be nothingness—something that all conscious beings share. The property is introspectable, evident from inside. So it is thought. I think there is something to each of these views, but I don’t think they capture the full reality. There is a common something that all cases of what-it’s-likeness share—sentience as such—but it is markedly elusive. If we acquired bat experience, we would notice it immediately—we would not hesitate to call the new experience a form of consciousness. We would notice the intentionality, to be sure, and the involvement of the subject (“this is mine”), and maybe the nothingness; but we would also recognize a shared dimension of phenomenological reality that is not quite captured by these descriptions. It wouldn’t be like seeing a game of chess one day, having hitherto seen only rugby and football. It would strike us that this is the same as what we have experienced before (though also different). But what is the name of this common factor? That’s where the question becomes difficult: we just want to blurt out, “But can’t you see, it’s the same!”—not alien, not unrecognizable, not belonging to another family of things altogether. We might want to call it this-ness or feeling-ness or my-ness or in-me-ness—while acknowledging the inadequacy of these terms. There is something it is like to be any subject of what-it’s-likeness: there is something it’s like for there to be something (or other) it’s like. It’s like…this.
It is as if we have only demonstrative knowledge but not descriptive knowledge—acquaintance not description. Feeling pain is like seeing red, smelling vinegar is like hearing a symphony—though the experiences are also extremely different. They are all readily classifiable as falling under the same concept by noticing a shared feature; though that feature is hard to pin down, like a fast-flitting butterfly that refuses to be caught. I know that all of my consciousness belongs together under that concept, despite its enormous variety, and I know it by being aware of a common feature. However, I find it difficult to say what that feature is—though I could show you by letting you into my consciousness. It isn’t cognitively closed to me—far from it—but it does resist descriptive encapsulation. It is obvious but inarticulable. It has no name like “seeing red” or “feeling pain”; it hovers at the edge of consciousness, just out of verbal reach. It can’t even be called “nothingness”.[1]

I think, then, that there are two things it is like to have any particular conscious experience: the specific nature of that experience (e.g., seeing red) and the general property common to all conscious experience (the qualia with no name). So, both orderings of the quantifiers yield a truth—a necessary truth indeed. Every possible experience has a specific what-it’s-likeness and a general what-it’s-likeness. Both problematically relate to the brain. Both are phenomenological. Both are essential properties of any given experience. That is the architecture of consciousness as such—the specific intertwined with the general. We do know what it is like for the bat to have the general property (for we have it too), though we don’t know the specific property. We know the general nature of any form of consciousness, no matter how alien, though not the nature of specific variations on it. We know what it is like to be a bat—it’s like this (pointing to my own consciousness)—but we also don’t know (because we have no specific form of consciousness that resembles the bat’s echolocation experience). What-it’s-likeness is a two-level affair (somewhat like species and genus). The logical form of consciousness is the specific subsumed under the general. Both are given, but one is more cognitively accessible than the other.[2]

[1] An idea it might be worth trying to develop is that consciousness is always attributive. This notion is akin to intentionality but it emphasizes the property of attributing qualities to things; consciousness isn’t just of things but also ascriptive to things. Pain is ascribed to a part of the body; color experience ascribes color qualities to external objects; thought ascribes properties to individuated objects. Consciousness is ceaselessly ascribing things to things. The bat’s echolocation experience attributes spatial properties to identified particulars, as our auditory experience attributes sounds to objects located in the surrounding world. Modes of attribution are the what-it’s-like features of conscious experience. And we are aware that this is what is going on whenever we are conscious—we are consciously attributive beings.

[2] If we assume that cognitive and linguistic capacities exist only when it is useful to them to exist, it becomes intelligible that we have no concept or word for the general property, but an ample supply of concepts and words for the specific properties. For there isn’t much point in a concept or word that represents a feature of consciousness that has no use in communication or thought: how would we set about using a word for the general property in question? It is useful to tell someone you are in pain, but not to express that abstract property instantiated by all conscious states—the higher-order property of what-it’s-likeness. That property exists at a level of generality that transcends our practical purposes, even our scientific purposes. We know it, but it’s not something we are naturally equipped to talk about.

Language and Politics

I remember it as if it were yesterday—the day I first encountered pronoun mania. It was in London, the late Seventies, at a student party in the philosophy department of University College London, where I used to teach. A female (girl, woman) student told me she had enjoyed our tutorial on the analysis of knowledge but had one question for me: why did I keep saying “he” when I discussed analyses of knowledge? Was I forgetting that women (girls) often know things too? This struck me as a bizarre criticism, but I addressed myself to it, making what I thought were rather obvious points. No, I wasn’t forgetting that, but merely availing myself of the conventional means of expressing generality in the English language. To my surprise I made little headway with this newly minted zealot, who had no doubt heard this “criticism” from a female professor, or possibly read it in a feminist tract. I had been a feminist since the Sixties and needed no conversion, but this business with pronouns struck me as extreme, unhelpful, and perverse. Little did I know it would become feminist orthodoxy across the globe, taken as self-evidently correct.[1] I never imagined it would become a political touchstone, an article of faith and righteousness, a model for other politically motivated linguistic measures. I could not have foreseen that this seemingly mild piece of speech reform would become the seed of the ascendancy of right-wing politics—the reason left-leaning politicians fail to be elected, the reason the Democratic party in America lost the support of non-college-educated voters. Allow me to explain.

I will be blunt: the thinking behind the student’s question stems from a misguided and wacky theory of the relation between thought and language. We are familiar with this theory under the name “the Sapir-Whorf hypothesis”, but it is actually a general theory to the effect that language shapes (determines, constitutes) thought (knowledge, emotion, intention). Your mind is held to be a product of the language you speak; it has no other form. Conceptual schemes are composed of words. And words limit what your mind can grasp, biasing it, distorting it. Thus, when you say “he” to express generality you are tacitly assuming that everyone you are talking about is male. When I formulated a Gettier case using “he” I was actually thinking or assuming that all knowers are male. I was excluding female knowers from my thinking. The pronoun “he” is a masculine pronoun, so my meaning was masculine, so I must be thinking that knowers are always male, or at least making that assumption. If I retort that I was not thinking that and did not mean that, I will be sternly reminded that I am violating the Sapir-Whorf hypothesis, or some variant of it. But this is a terrible theory of the relation between conventional linguistic meaning and speaker meaning—between what words mean in themselves and what speakers mean in using them to perform speech acts. The word “he” has a masculine meaning, but when I used that word, I was expressing my speaker meaning, which was not marked masculine—I meant “he or she”, in effect. More generally, it is not the case that spoken language shapes or determines thought, even what is meant by speakers when they speak. I won’t go into the reasons for saying this; my point is that the objection raised by the student depends on a contestable theory of the relation between thought and language. 
I would argue that this false theory reflects behaviorist assumptions that deny the reality of the inner: language is what is outer and observable, so it must constitute whatever is real in our mental talk. This theory in turn happens to suit a capitalist ideology that views the human being as essentially a machine with no inner life, a useful tool in the production process—so that no ethical implications arise for the exploitation of workers in factories and the like. But that is another question; again, my point is that the theory has complex relations to economic and political positions. It is not a self-evident truth. In fact, it is pretty wild and implausible when impartially considered. It is characteristic of a range of far-out theories developed and popularized by social scientists (sic) working in universities and passed on to the general population. It is a philosophical theory, in the broad sense—speculative, controversial, almost certainly false. Think Freud and Skinner, Armstrong and Ryle. All the contortions engaged in by the grammar police involving “he” and “she” ultimately rely on a dubious theory invented by academics. This theory floats around universities and infects the minds of the educationally impressionable, eventually transmogrifying into politics and policy. But—and this is the practical political point—it is not absorbed by people who didn’t go to college and are never exposed to the theories prevalent there (and which change with the seasons). These theories are the province of the semi-educated—those with college degrees but not advanced degrees, roughly. Freud and Skinner were eventually demolished, but not before entering the minds of people unable to critically evaluate them for themselves. Similarly for the theories behind the pronoun mania that has swept campuses, boardrooms, and public services. 
Theoretically speaking, the pronouns of natural language are devices proper to language as a formal system; they are not determinative of the very structure of human thought. There is thus no need to reform ordinary grammar in order to protect thought from malign influences; we just need to recognize the distinction between linguistic meaning and speaker meaning. Words are our tools; we are not their puppets. We don’t need to police language in order to make our minds politically perfect. The concept of an “ideal language” is misguided and unnecessary. Sociologically, people who have never been exposed to these false theories are not influenced by them; and they regard them, correctly, as strange and unconvincing. They will therefore be disinclined to vote for politicians who espouse them. They will think, in short, that they are bullshit, and they are not wrong to think that. If a political party becomes strongly associated with such theories, or their practical applications, it will lose support among the non-college-educated population. The pronoun mania will drive them away from a party that indulges in it. And this is likely to carry over to good theories too, because they will be tarred with the same brush by the skeptical electorate. The party that opposes all that shaky theory-mongering will gain ascendancy—it will become the party of “common sense”. Does any of this sound familiar?

The linguistic theory of thought is not the only academic theory that has captured the minds of impressionable people, usually young people. But it is a particularly powerful example of the general phenomenon: pronoun reform has led many zealots to believe that they have here an undeniable victory against the old guard. This set the tone for further theoretical incursions into politics. The patriarchy had seeped into our native language and needed to be rooted out if political reforms were to be achieved (often laudable enough in their own right). An academic theory thus led to policy recommendations. Are there other examples that mimic the pronoun paradigm? They are not far to seek. Consider the push for “diversity”. Suppose you believe (as many do) that truth is relative; there is no such thing as “absolute” or “objective” truth. You have had this drummed into you by your professors, and anyway it sounds vaguely egalitarian to you. Then you will be primed to accept that diversity is a good thing: if there is no such thing as a single objective truth, shouldn’t we encourage a plurality of viewpoints on “the truth”? Let’s gather the many truths that people accept: these will be best found by assembling a diverse group of people. Thus, “diversity hires”. The trouble with this line of reasoning is that the motivating theory is terrible: truth is not relative. Again, I’m not going to argue the matter here—I am making a political point. Those who accept the underlying theory will find it self-evident that diversity is a value to be promoted, while those who have never got the relativist memo will be perplexed by the urge towards diversity. The latter don’t think truth is relative, so there is no need to hire a diverse group of people to teach a bunch of relative truths. All this will seem like mumbo-jumbo to them—and I think they are right so to think. They will not be inclined to vote for a political party that advocates such mumbo of the jumbo.
That party will become, in their minds, the party of the wacky, the nonsensical, the phony. College graduates may be more tolerant of such fantasies, given their educational background, but the rest of the electorate will not be taken in. Maybe there are counterintuitive theories that should be accepted (I am thinking of sound economic theories), but the tendency will be to suspect any political party that traffics in silly-sounding theories. The same is true of all the recent talk of “power imbalances”. Again, this comes from theoreticians in the weaker areas of the humanities (e.g., Foucault)—the idea that “power imbalances” imperil “agency”. There cannot be voluntary liaisons where there are asymmetries of power, it is held. This is utter rubbish, but it can be made to seem plausible by choosing certain kinds of example and keeping the language abstract and abstruse. Impressionable minds are easily manipulated by this kind of sophistry. But to those who have never been subjected to such “teaching” it will seem preposterous, pretentious, and plain stupid. How can people who mouth this kind of crap be trusted with running the economy? That is what people will think; and they will desert the party that promotes it as superior virtue. Nonsense does not win elections, especially the higher type of nonsense, the type found in universities. Not that everything taught in universities is nonsense, but some of it is—and it leaks into political policy and rhetoric. Plain language, plainly spoken, is what is needed to win elections.

What are the implications for democracy of these reflections? If the college-educated part of the electorate supports a particular party, that party is likely to champion intellectual ideas and theories originating in universities. These ideas will almost certainly include dubious theories, often absurd theories, which are then applied to practical issues. This will alienate the non-college-educated part of the electorate, leading them to fall into the hands of the more untheoretical and philistine populist party. Thus, depending on the proportions of educated and non-educated voters, the former kind of party will find itself unelectable. It will need to purge itself of these theories, at least as elements of the party platform. Do not defend abortion as justified by women’s “bodily autonomy” (“a woman has the right to control her own body”). Do not insist on strict pronoun rules or punish deviations from the approved norm. Do not speak of “safe spaces” or “power imbalances” or use any jargon invented by humanities professors. It’s asking for trouble. Keep ideological feminism out of it. Never talk of “defunding the police”. Don’t tell people how to pronounce unfamiliar words. Avoid obscure and unmemorable acronyms like “LGBTQ”. Don’t use phrases like “critical race theory” even if the denoted theory is perfectly sound. Above all, never import academic theories of dubious credentials into policy discussions. If in doubt, consult an expert in the field at issue, especially when there is academic controversy, which there nearly always is. The generalizing use of “he” should never have been stigmatized, looked down upon, viewed as a sign of moral illiteracy. This kind of attitude could destroy democracy.[2]

[1] Some years later I wrote an article on the analysis of knowledge for an American publication. I used "he" a good deal. When it came back to me, the copy-editor had rewritten the whole thing to fit the new pronoun orthodoxy, with many a cumbersome paraphrase and lumpy grammar—without my permission. That is how entrenched and taken-as-gospel it had become. I should have made a fuss then, but I let it go. Now I see that it wasn't just harmless pedantry in a good cause; it was the root of the linguistic political correctness that threatens to undermine left-wing politics. Nowadays it's hard for me to declare myself a lefty liberal.

[2] I write this because of all the handwringing occasioned by the recent election of Donald J. Trump. What caused the American electorate to desert the Democratic party? No doubt many things, but I am focusing on one thing that is not usually mentioned, viz. bad ideas born in the academy migrating into practical politics. This is a vice of the party of the college-educated, leaving others cold. The left has clearly succumbed to theories invented by academics, mostly bad theories: postmodernism, social constructionism, linguistic idealism, feminist ideology, relativism of all kinds, bad psychology, etc. In a word, it has become pseudointellectual.


A Political Song

Were You Ever

 

Were you ever right

When you arrived that night

With your books and your guns

With your daughters and sons

 

Were you ever right

 

You landed and looked

You built and you cooked

You cut down and burned

You rampaged and spurned

 

You landed and looked

 

Did you do right

When you used all your might

To bring them in ships

To work in your fields

 

Did you do right

 

You bought and you sold

You made them grow old

You forced them to change

You locked them in chains

 

You bought and you sold

 

Are you sure it was good

When you imported more men

To toil under ground

Until soon they were dead

 

Are you sure it was good

 

Did you ever care

Do you feel it was fair

Did you really mean well

Did you really do well

 

Did you ever care

 

Was it part of the plan

To elect such a man

Is that what you need

To see them bleed

 

Was it part of the plan

 

Were you ever

Were you ever

That beacon of light

Did you ever

Did you ever

Do right

 

Did you ever

Did you ever


Ex-Friends

Ex-Friends

I have many ex-friends. Consider the case of Mark Rowlands: this is a person whom I have known for forty years. I supervised him at Oxford; I asked him to contribute to a series I was editing on ethics; I brought him to Miami; I saw him every day for lunch when I was in the department; my wife was good friends with his wife. Yet I have not set eyes on him in over ten years, though I live close to him in Miami. I have tried many times to arrange a meeting with him. After giving me the runaround for years, he finally made it clear that no meeting was going to happen. No explanation was given. Or Otavio Bueno: I was instrumental in bringing him to Miami as a junior member of the department; I befriended him when he arrived; we were on good terms. That friendship is over. No explanation given. Some nasty emails exchanged. He told me I was not welcome on campus. Or Aimee Thomasson: I literally saved her life when she was in danger of drowning; she came to my house and I to hers; we had a good relationship. I reached out to her to discuss things, but got no response. She publicly took against me. I haven't seen her or heard from her in over ten years. I could mention many others. You might want to ask these people to explain themselves, because I have no explanation from them. Cancellation can get very personal.[1]

[1] I should say that I have many academic friends who have not distanced themselves from me.
