Thumb Fretting

As dedicated readers know, I am all about the hand. Lately, my left hand has been impressing me mightily: it has been throwing knives with confidence and panache; it has really come into its own on the tennis backhand; and its fingers have been performing nimbly on the guitar. Even my left baby (“little”) finger has been out to prove itself and has won several medals. However, I have not come before you today to celebrate my left hand in its entirety, though that would be perfectly apropos (it is doing a handy job, even if it says so itself). Instead, I want to single out my sinistral thumb, and for a very special reason. Drum roll, please: it has learned how to fret all by itself! I mislead you not, Jeeves (and Gussie Fink-Nottle will back me up on this). I don’t mean that boring business of the thumb peeping over the neck of the guitar to hold down the E and A strings that that annoying fellow Jimi Hendrix made canonical. Oh no: I mean playing whole licks with the thumb. I know, you don’t know what I am talking about, but remember the hand has a mind of its own (it commandeers vast areas of the brain). Today it taught me how to play guitar using only the thumb for fretting! An example might help: just as you can use only one finger to play the lick in Day Tripper, so you can employ the thumb to do the same. You just press the strings down using your thumb with the rest of your hand resting behind the neck, cradling. The great thing is that your thumb is big and strong (the Goliath of digits) and does what you tell it to do. It has been dying to demonstrate its mettle on the strings, as opposed to lurking unseen behind the neck. There is very little learning curve (Jeeves, are you still listening?). You will be amazed: your thumb will veritably dance over the strings, or else I am not Link Wray. It’s even better than your index finger! True, it is only a solitary digit, but what a digit! Within ten minutes I was playing like a thumb demon (new band: The Thumb Demons). You know what I did next: I googled the blighter. To my everlasting surprise, the search brought up nothing (except that feeble Hendrix trick). Evidently, I have invented a new way to play the guitar, or my left hand has. Credit where credit is due. I will keep you posted on future developments… Jeeves, to where have you disappeared, old chap?

Psychology of Philosophy

Probably every field of study has its own distinctive type of psychology. A certain type of mind will be drawn to a particular subject. It is not difficult to see how this pairing proceeds. If you are interested in people, you will naturally be drawn to psychology; not so if you are fascinated by numbers. If words fascinate you, linguistics will attract you. It is safe to say that historians are interested in the past, but not so much in general theories. A fondness for animals may lead you into zoology. A desire to travel may bring you to geography. If the stars grab your attention, astronomy may be your calling. If money excites you, economics may be your chosen area of study. Certain talents and abilities will figure into this: what you were good at in school. Extraversion or introversion may take you in one direction or another (e.g., politics). But what psychology orients people towards philosophy? That is not so obvious, given that philosophy has no well-defined subject-matter. Would anybody say they had a childhood fascination with concepts? Is it a love of paradoxes and puzzles? Is it a desire to argue? None of that sounds very plausible. Is it simply masochism? My best explanation is that it is a liking for certain sorts of language—philosophical language. The words resonate in your head; they feel good on the tongue. They seem impressive, profound. The philosophical personality is linguistically primed, smitten with the jargon, enamored of the sound of the sentences. It isn’t that philosophy is about a certain range of objects; it’s the way it talks that engages the passions. The philosophical personality above all wants to speak like a philosopher—to be master of the vocabulary. He or she may also have a weakness for depth and difficulty, and the language of the subject is thought to help with overcoming this weakness; the philosopher is a deep speaker as well as a deep thinker. A philosophical education is largely acquiring a certain kind of verbal skill—in speech and writing. You learn how to talk the talk.

This connects with a certain characteristic of philosophy: the tendency to be enamored of certain words and phrases. These may come and go; they seldom persist forever. Here is a selective list: form, substance, idea, fact, experience, reason, a priori, a posteriori, analytic, synthetic, necessary, contingent, analysis, logical form, truth condition, criterion, identity, family resemblance, speech act, sense-datum, rigid designation, possible world, noncognitive, normative, what it’s like, reductionist, anti-realist. Where would we be without these words? They roll so deliciously off the tongue. They sound so imposing, grand, profound, scholarly. It is a pleasure just to be around them. And yet they can be slippery, poorly defined, and misleading. They go in and out of fashion, one day greeted with an approving smile, the next with a condescending sneer (“Oh, you still believe in that rubbish”). They are, let’s face it, disturbingly meme-like: buzz words, catch-phrases, verbal tics. They are more substitutes for thought than real thought. You must have heard people (typically graduate students) who only half-know how to use them, or who use them obsessively (“epistemological” in every sentence). They aren’t a way to think clearly but to obfuscate and bamboozle. They tend to go unexamined, trotted out not scrutinized. They lend themselves to obscure verbal altercation. This is their psychology (psychopathology)—the psychology of philosophy.

It is hard to know what to do about this situation. We can’t ban them; they perform a useful service (as memes often do). They are not all bad. The best I can suggest is that they should be handled with care, responsibly, and used in moderation. Don’t litter your speech and writing with them. Don’t rely on them to do your thinking for you. Don’t let them dominate your philosophical consciousness. Keep them at arm’s length. Be suspicious of them. I am as guilty as the next man—I use them all the time. But I feel guilty about it, as if they are shaky crutches rather than sturdy limbs. I would like to do without them, I really would; and one day I will (I tell myself). They have grown up (sprouted) over time, at certain periods, for certain purposes, and they have stuck, for good or ill; they are not the result of strict screening and rigorous peer review. So, don’t use them too easily or heavily, and only when you need to. It is not a virtue to use them but a vice (or ingrained habit). The philosopher needs to clean up his psychology: he went into the subject because of his love of the jargon (to put it unkindly) and now he needs to clean house, tidy the place up. He needs to root out the termites of thought—those insidious little memes that eat away at the foundations of reason. Or rid his mind of verbal junk, however superficially appealing (the fast food of philosophical thought). You can keep it in some form, but don’t live or die by it. Don’t let it call the shots.[1]

[1] It was a virtue of ordinary language philosophy to discourage technical jargon (though it may have gone overboard). Some people are certainly worse than others.

Elvis, Paul, and Mick

Some bands achieve considerable success but not mega-success. Elvis and the Beatles created worldwide mania (and hysteria); the Who and the Troggs did not. True, Elvis and the Beatles were supremely talented and enormously productive, but their success exceeds such attributes. Why? The Stones are an intermediate case: large success but not absolute mania and adoration. You might say they were not as musically gifted as the Beatles and Elvis, but the difference in popularity and impact exceeds this gap. The answer is staring us in the face, literally, and it is undeniable. Elvis and the Beatles were extremely handsome—the girls loved them. You might say that Elvis was more handsome than the Beatles, and that would be true, but Paul McCartney rivaled Elvis for good looks. As the other Beatles recognized, Paul was incredibly good-looking; he had the Elvis touch. I suggest, then, that this was the missing ingredient in the popularity of these two entities—Elvis Presley on the one hand and John Lennon, Paul McCartney, George Harrison, and Ringo Starr on the other. They would not have achieved the level of success they enjoyed were it not for the good looks of Elvis and Paul. The girls adored them and the boys envied them. Physical beauty is the key. Once a specimen like that opens his mouth to sing, the floodgates simultaneously open—so long as he has a good voice (and both did). It isn’t musical genius but physical appearance that makes the difference. No one in the Who had that degree of male magnetism (and only Paul had it in the Beatles, though the other lads were also pretty handsome). Elvis and Paul were gorgeous. As to Mick, well, he’s not in that league, but I venture to suggest that Mick’s face is what led to the extreme success enjoyed by the Stones in their heyday (Pete Townshend in his autobiography confesses that he fancied Mick). Mick was undoubtedly a very sexy guy. He wasn’t a god, like Elvis and Paul, but he had it going on. The reason the Stones were massive, and still pull big crowds, is Mick’s physical attractiveness. Even if the Beatles and the Stones had made only their first few records, they would still have been bigger than all the other bands in sheer popularity. Elvis, Paul, and Mick: three incredibly attractive bastards. This is what tipped them into mega-success (Queen, say, stood no chance).

The Part Problem

People talk about the mind-brain problem, but that is strictly inaccurate. The problem isn’t about how the brain as a whole produces consciousness; it’s about how some of it does. It isn’t about how the brain differs from other bodily organs; it’s about how certain parts of it do. The various parts of the brain look very similar and function similarly, but only a subset of brain parts have the magic touch. What makes them so special? What ingredient do they possess that other parts don’t possess? This is what I am calling the part problem. It refocuses the so-called mind-body problem: how do you convert a bit of the brain that doesn’t produce consciousness into a bit that does? It can hardly be a change of location, as if transplanting neurons from the brain stem into the frontal cortex will magically transform them into agents of consciousness (or if it does, we would want to know how). Nor are the consciousness neurons bigger or brighter than the non-conscious neurons. Nor do they fire more rapidly. There seems to be no chemical or anatomical difference. And it can hardly be supposed that God arbitrarily chose these neurons to be consciousness-bearing ones (“I anoint thee guardians of the soul”). The part problem looks like a hard problem—as hard as the mind-body problem, or even harder. For it introduces an element of arbitrariness into the picture: all neurons seem the same yet only some have the power to produce consciousness. Then, in virtue of what do they have this power? Is there just no answer to this—is it just a freak of nature, or evidence of a hidden world? But why this portal instead of that one? It’s like supposing that a given chemical can explode or not explode without changing. Is it the environment of the neuron that makes the difference? But why? The puzzle is almost violently difficult. We might call it the infuriating problem.

Take a part of the brain that has no consciousness associated with it. Now consider a neighboring part that does have a conscious correlate. How does one part grade into the other? Is there some intermediate state of semi-consciousness? Can there be consciousness migration or consciousness spread? Could they change places consciousness-wise? Is the difference exceedingly subtle or quite manifest? Could a tiny variation of shape make all the difference? Neurophysiologists have looked at both types of cells under the microscope and detected no physical difference, and yet the difference could not be more marked. What proportion of brain cells are mentally employed? I have never heard an estimate. Is it mainly unconscious? Is this ratio the same for all animals with a brain? If so, why? When the brain dies consciousness disappears from its precincts, the light goes out, but what physiological difference between the two types of cells accompanies this, if any? If you were trying to build a conscious machine, would you install two types of components, corresponding to the two types of neurons? Is one a necessary condition of the other? When brains evolved did the unconscious neurons appear first and the conscious ones build on this foundation, or were there always the two types? How are the two developmentally scheduled in embryogenesis? Is one more active than the other? There is surely some physical difference, but we are damned if we can see what it is. The whole set-up seems contrary to reason, and yet these are the facts. This division of labor within the brain is a complete mystery. Why are some neurons not conscious? They ought to be if others are. This conundrum appears to favor some sort of substance dualism: are we then to take that possibility more seriously than we have? Materialism seems defeated in the face of it. But then we will be confronted by the problem of how and why the mental substance selects certain neurons as its locus of influence and not others. We thought that neurons contained some special mental ingredient that sets them apart from other cells in the body, but it turns out they don’t, since some neurons have nothing to do with consciousness. So, are neurons really beside the point? Some are correlated with consciousness, but this may not be because they produce consciousness, since some don’t. But then, we are completely in the dark about the basis of consciousness. Maybe we are looking in the wrong place altogether. The part problem makes the mind-brain problem even harder, even more maddening.

Here is a comparison to make the point vivid. Suppose you are investigating water and are interested in explaining its liquid state. You arrive at the theory that liquidity is the loose arrangement of the constituent H2O molecules. Well and good, you think, we have that problem under control. But then you discover, to your surprise, that not all H2O molecules loosely bonded are liquid! How can that be? The molecules are exactly the same as in the liquid water but the water is not liquid! Obviously, your theory of liquidity is wrong, but you can discern no difference between the two cases: here liquidity, there no liquidity—but exactly the same chemical make-up. It seems impossible, but it is an observable fact. Nature has gone mad, or you have (are you hallucinating?). In the one case, the micro properties produce liquidity; in the other, they do not. How can this be? Is there some other source of liquidity that you are somehow missing—a sort of immaterial liquidity?

Searle on Mind and Brain

Searle maintained that the mind is a higher-level property of the brain, not a separate substance. There is only the physical world with higher- and lower-level descriptions. This is his solution to the mind-body problem. He liked to compare the mental to the liquid: there is only a world of H2O molecules (lower-level) with liquidity (higher-level) tacked on. The liquidity follows from the molecular composition; it isn’t another thing. Thinking is to neurons what liquidity is to molecules. Problem solved. But is it? It is a good (and familiar) thought that the mind is an aspect of the brain not another separate entity, but is the relation between brain and mind like the relation between H2O molecules and liquid water? Searle would say we can’t find liquidity in individual molecules—they aren’t liquid—but in the aggregate liquidity is the natural outcome (the molecules slide around each other). Liquidity is an aggregate property (“holistic”) not a component property (“individualistic”). Similarly, consciousness is an aggregate holistic property of individual neurons. There is no mystery here, just the logic of wholes and parts, collections and their members. But the problem with this idea is glaring: consciousness isn’t an aggregate property—it is both more and less than that. Neurons aggregate into ganglia and brain regions (e.g., the hypothalamus), but these aggregates are not states of mind, just more complex chunks of brain tissue. The neurons are not elements in a cerebral soup (the brain isn’t liquid) but parts of a relatively solid object. The solidity of the brain is not the mind. But what other relations between neurons could add up to the mind? Nothing we can discern. The neurons are precisely unlike H2O molecules; their composition does not produce mentality according to basic physics and chemistry. Nor does it seem possible for neurons to aggregate into minds. If we liquify the brain, we don’t produce the mind! So, the analogy is exactly wrong: the mind is nothing like liquidity (or solidity or gaseousness or a wrinkled shape or a chestnut). It rather illustrates the nature of the real mind-body problem: we have no account of how the brain produces the mind—no account at all. It isn’t a matter of mere detail; we have no idea how the brain can have a mental aspect. The only aggregative properties we know of don’t produce it. And it looks as if no amount of aggregating and interrelating will ever lead from neurons to thoughts and sensations. The mind is not a feature of collections qua collections. It is not a macro feature of collections of micro entities. Searle’s favorite analogy thus disproves his own theory; it shows up the glaring lacuna at its heart. All the hard work has to be done at the level of relating (collections of) neurons to mental phenomena; that is the problem. And the threat of dualism still looms over us: maybe the brain doesn’t and can’t produce consciousness from its own limited neural resources. Neurons are like molecules that won’t slide over each other. No amount of insistence that consciousness is a biological phenomenon (true as that is) will overcome this problem. Searle’s theory is at best a place-holder for a theory not a theory.

Animal Induction

Hume argued that induction is based on custom not reason. We believe in induction because we are psychologically built that way by nature not by ratiocination. He could have cited the case of animals: they act according to induction by instinct; they were not taught to do so, nor do they employ a priori reflection. We could say they have an inductive gene; it’s an adaptation brought about by natural selection. Induction is like a thick coat in the Arctic. The environment calls it forth. We evolved from animals, so our inductive dispositions have the same roots. This is why the skeptical argument makes so little impact on us. We are natural-born inductivists. Popper can’t budge us on this. Even if the uniformity of nature breaks down, we are genetically inclined to induction. In this sense we have an innate inductive philosophy. People talk about commonsense philosophy; we also have an instinctive philosophy installed by the genes under natural selection. Our instinctive philosophy is inherited from our animal ancestors going back a long way. We are not a blank philosophical slate. Induction is an “innate idea”. It is like our philosophy of substances—a natural innate cognitive-behavioral scheme. We go by laws instinctively. We have an inductive brain(-stem). Our DNA is inductive.

This doesn’t prove that induction is a valid mode of inference. We cannot use it to prove that the future will be like the past; we might even stop behaving inductively! Just because we have always been inductive in the past doesn’t prove we will continue to be inductive in the future. Nature might also stop being uniform while leaving us still committed to induction. You never know. But we can infer that nature was uniform in the past, i.e., that our ancestors evolved in conditions of uniformity. For there would have been no induction in a world lacking in uniformity, because it would not have been adaptive. Adaptations evolve to suit the world surrounding the organisms in question, so we know our ancestors lived in a predictable uniform world. The inductive gene only gets installed if it is adaptive in the given environment. Hume would have enjoyed the post-Darwinian world of genes and evolution from earlier species.

Etc.

The hostages were released and President Trump may have had something to do with it.

Pam Bondi went full Mr. Hyde.

RFK talked more twaddle.

I detest and despise X, Y and Z.

I managed to play Wipe Out using only my little finger.

After 12 hours of interviews, I have reached my time at Rutgers.

I have started to make my own bread.

My taste buds may finally be back to normal.

My backhand is in full swing.

I am going to Cancun and Cozumel in two weeks with E.

My lizard Ramon is hibernating.

Allegations and Obituaries

Anybody can make any allegation against anyone. It means nothing. Allegations are not evidence. This is obvious, though often forgotten. The OUP gives us: “a claim that someone has done something wrong, typically an unfounded one”. An allegation is a claim not a report, an assertion not a fact. It is as easy to make an unfounded allegation as a founded one. The gap between saying and being is enormous and principled. It is not like the gap between sensory impressions and facts: an impression is evidence (though not conclusive evidence) of fact. An allegation is a voluntary act of speech causally untethered to reality, but a sense impression is an involuntary state of mind typically caused by what it purports to represent. It gives an appearance of reality, while an allegation does no such thing: when you hear an allegation it doesn’t seem to you that the world is as alleged. We should not confuse these two types of relation. An allegation is not a perceptual impression of what is alleged. It is not a sense-datum. A false allegation is nothing like a visual illusion. This is why allegations always stand in need of supporting evidence; you can never be convicted on allegation alone. Allegations are just words, and words are not necessarily connected to facts, nomologically or otherwise. Allegations never by themselves imply truth, even weakly. Just because somebody says it doesn’t mean it’s true. When we acquire knowledge by testimony, our evidential basis is not the fact of assertion alone but the evidence we have of the reporter’s veracity, which must be independent of the assertion currently being made, to the effect that the speaker is reliable. This never consists in further assertions by that speaker; the circle of assertion must be broken out of. No allegation can be supported by other allegations alone; nonlinguistic facts must be adduced. Allegations in themselves are epistemically null. Their existence does not raise the probability of what is alleged. Indeed, they may lower the probability if the speaker is a known liar or unreliable idiot or certifiably insane. Allegation is never demonstration, or even indication. The law is very clear on this and insists upon it, rightly so.
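
To give the probabilistic point just made a concrete shape, here is a minimal Bayesian sketch; the numbers are invented purely for illustration and are not drawn from any real case. Write G for the proposition that the alleged wrongdoing occurred and A for the event that the allegation is made. Bayes' theorem gives:

\[
P(G \mid A) = \frac{P(A \mid G)\,P(G)}{P(A \mid G)\,P(G) + P(A \mid \neg G)\,P(\neg G)}
\]

If the speaker would make the charge as readily against the innocent as against the guilty, say P(A | G) = P(A | ¬G) = 0.5, then with a prior P(G) = 0.1 the posterior remains 0.1: the allegation shifts nothing. If the speaker is a known fabricator who prefers innocent targets, say P(A | G) = 0.3 and P(A | ¬G) = 0.6, the posterior falls to 0.03/0.57, roughly 0.05, which is below the prior: the allegation lowers the probability of guilt, exactly as claimed above.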

I say all this because newspapers (etc.) have taken to mentioning in obituaries that the deceased person has been alleged to have done such-and-such. This is a very bad practice. By all means mention actual findings against the person, if such there be, but don’t mention mere allegations. If the practice were consistently applied, obituaries would be stacked with the lies of the person’s enemies; lies would be manufactured for precisely this purpose. The only reason to mention allegations, often salacious, is to indicate or suggest that the allegations are credible; but no proof of credibility is given, and the facts often contradict the allegations. Unproven allegations should not be cited, because doing so conversationally implies that there is reason to believe them, but there may be no such reason. Innocent people have allegations made against them all the time from a variety of motives, so don’t buy into these allegations by mentioning them. And don’t reply that it is a fact that such allegations were made and you are just reporting facts: if I allege falsely and maliciously that you are a child murderer, should that fact (that I made this allegation) find its way into your obituary? Of course not. Mentioning allegations only gives readers the impression that there must be substance to the allegations, or else you wouldn’t mention them. Report findings of guilt not imputations of guilt: truth not opinion (or mere assertion). Surely, this is obvious.[1]

[1] The case of John Searle is a case in point.
