Epistemic Unity

Epistemic unity among inquirers is an important goal of all inquiry. We strive to arrive at the same opinion on a given subject. We try to eliminate diversity of opinion, divergence of belief. The conscientious inquirer seeks consensus, convergence, homogeneity of belief. To this end we employ methods that reliably lead to identity of belief among inquirers; a method that reliably leads to belief divergence would ipso facto be defective. Why do we do this? Because reality itself is a unity, a single way things are. Truth is exclusive: what is true rules out what is false. If it’s true that snow is white, then it’s false that snow is purple; and if some people believe the latter, then their beliefs are false. Truth is not inclusive of the false. Since we aim at true belief we aim at unity of belief, because truth itself is unitary. Truth and epistemic unity go hand in hand. If there were many conflicting truths (snow is both white and purple), then epistemic unity would not be a value; on the contrary, we would strive for epistemic plurality. It is the same with logical validity: only one conclusion follows from a given set of premises. We can’t infer both p and not-p from the same set of premises. Logic excludes certain inferences while including others. To put it differently, the enterprise of knowledge is inherently exclusive and unitary, not inclusive and diverse. This is why we have refutation and falsification—in order to weed out error and falsehood. The enterprise of acquiring knowledge is inherently rejecting and selective, not accepting and indiscriminate—indeed, it must be discriminative (with respect to belief not people). Inevitably this will lead to distress, disappointment, and discomfort: it’s hard to see your pet theory refuted! But that is the nature of the game: the ideal of epistemic unity guarantees it. 
Ideally, then, everyone will in the end share the same set of beliefs—those that correspond to reality (which is necessarily one way rather than another). Rationality itself prescribes interpersonal identity of belief, a single right way to believe. Differences of opinion are contrary to the dictates of reason, at least when they concern matters of fact.[1] Reason does not welcome all viewpoints; it opposes some viewpoints and favors others. In an ideal world everybody would agree. We might think of human history as a gradual progression towards complete epistemic unity, in which all divergence, disagreement, and diversity have been expunged. We are all one, epistemologically speaking, or should be.

            What I have just said is mere truism—a set of platitudes about truth, reason, belief, and intellectual inquiry. But notice how it conflicts with a rhetoric that emphasizes diversity, inclusiveness, and equality. The enterprise of knowledge is against those values if applied to the methods of rational inquiry: it recommends unity, exclusivity, and inequality (beliefs are not equal; one of them is false). Of course, it may accept such values if intended differently, but it will insist that the opposite values must obtain in the sphere of intellectual endeavor. It will certainly object to any effort to bolster those different aims by appeal to the nature of rational inquiry itself. Aiming at identity of belief is not aiming at other sorts of identity. What is interesting is that the exclusive and homogenizing nature of rational inquiry can (and must) coexist with recognizing other sorts of diversity and inclusiveness. The two must not be muddled or glossed over. It is entirely consistent to insist on stringent adherence to unity and exclusiveness in intellectual matters while accepting that disunity and inclusiveness can operate along other dimensions. The community of inquirers seeks unity of belief but not unity of race, sex, sexual preference, taste in music, height, pulchritude, and so on. Of course, the exclusionary aspects of truth and knowledge may be upsetting to some—they may not cater to their “wellness” or “comfort level” or sense of “safety”. But knowledge is not in the business of therapy: knowledge does not seek to soothe or flatter or condone. These are completely different enterprises: a university may cater to one, a hospital to another. A teacher is not a therapist. And there is no escaping the fact that rational inquiry is inherently “discriminatory”: it seeks to discriminate the true from the false, the justified from the unjustified. It has no “tolerance” for error. 
It sets its face against diversity of belief, if that means allowing false belief to flourish. Education is the dissemination of shared true belief, not the celebration of divergent beliefs. An education in geography, say, is not “open” to divergent geographical beliefs, allowing for the belief that Africa is smaller than England. Education forms an exclusive club—those who have it and those who don’t. This applies as much to electricians and plumbers as to university professors (you don’t want much diversity of electrical knowledge if your power goes out). Students need to understand that they are being initiated into an exclusive fraternity that prizes a specific type of knowledge. They should also understand that cognitive identity is the aim of the education they receive: they are to be brought to be identical to each other with respect to knowledge. Despite other differences, they converge in this one vital respect—what they know. This unites them. So the unity of knowledge brings about social unity—an important value. It should not be forgotten or occluded or demonized: people can be brought together by means of education, provided that it is recognized that epistemic unity is possible and desirable. In a world in which irresoluble diversity of opinion is deemed tolerable and even celebrated, the virtues and benefits of shared knowledge will be lost. We should insist on fostering epistemic unity, not questioning it. Exclusiveness, unity, inequality![2]

 

[1] Of course there can be culinary or aesthetic differences of opinion, and it is often not easy to discern the truth, but this doesn’t undermine the idea that we should seek convergence of belief, i.e. convergence on the objective facts of the matter. Epistemic unity is a “regulative ideal”.

[2] I would put moral education and moral knowledge at the top of the list. Moral belief does and should operate exclusively, ruling out immoral or evil beliefs. Ideally, all moral beings should agree in their moral convictions. And not all moral beliefs are equal: some are better than others. Determining the correct morality is another matter, but moral knowledge seeks epistemic unity, like all knowledge.

Quantifiers and Mass Terms

The usual approach to quantification focuses on quantifier words combined with count nouns, as in “All men are mortal” and “Some sheep are black”. We are told that such sentences require a paraphrase by means of variables ranging over objects (the “domain of quantification”)—“for some object x etc.”. But there are quantified sentences that employ mass terms, not count nouns, as in “All coal is black” or “There is some milk in the fridge”. No object seems to be meant—no x such that… It makes sense to ask how many objects are thus and so in response to a quantification using a count noun, but we can’t ask how many coal are black or what the number of milk is. We can ask how much coal or milk there is in a certain location, but not how many coal or milk there is. Mass term quantifications don’t “range over” a class of countable objects that can be assigned to bound variables. That is not how their semantics works. Yet the logic of such sentences follows the logic of count noun quantifications: from “All coal is black” and “This is coal” we can infer “This is black”. So it looks as if the standard semantics doesn’t capture the logical implications that are involved. This means that predicate logic is inadequate to capture quantifier entailments. Not good!
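The contrast can be set out schematically. The first two formulas are the textbook first-order paraphrases; the third is what forcing the mass-term inference into the same mold would look like (my rendering, a sketch of the standard treatment, not a formula from the text):

```latex
% Count-noun quantification: the standard first-order paraphrases
\forall x\,\bigl(\mathit{Man}(x) \rightarrow \mathit{Mortal}(x)\bigr)
\qquad
\exists x\,\bigl(\mathit{Sheep}(x) \wedge \mathit{Black}(x)\bigr)

% The mass-term inference any semantics must validate:
%   All coal is black; this is coal; therefore this is black.
\forall x\,\bigl(\mathit{Coal}(x) \rightarrow \mathit{Black}(x)\bigr),
\quad \mathit{Coal}(a) \;\vdash\; \mathit{Black}(a)
% -- but here x is supposed to range over countable objects,
% and there is no class of countable objects for "coal" to range over.
```

The derivation goes through formally, which is precisely the puzzle: the logic behaves like count-noun logic even though the object-based semantics gives the bound variable nothing to range over.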

            It might be thought (hoped) that mass term quantification can be analyzed by means of count noun quantification; we just need to bring in an ontology of chunks, lumps, pieces, volumes, morsels, and smidgens. Thus we have “All chunks of coal are black” and “There is a volume of milk in the fridge”. This already sounds stilted, but it also trades on a correspondence that doesn’t amount to a paraphrase. It is true that we can manufacture a truth about objects by employing such dummy sortals, but that doesn’t mean that the two mean the same thing. Likewise we can contrive a mass term quantification for any count noun quantification, but it would be wrong to claim semantic equivalence. Consider “There are many cod fish in the North Sea” and “There is a large amount of codfish in the North Sea”, where “codfish” functions as a mass term like “salmon” or “halibut”. Where there are fishy objects there is fishy stuff, and where there is fishy stuff there are fishy objects. But does anyone think we can analyze “All men are mortal” as “All man stuff is mortal”? One type of sentence speaks of objects of a certain kind; the other type speaks of stuff of a certain kind. Objects and stuff go together, but talking of one is not talking of the other. Also, the paraphrase in terms of dummy sortals doesn’t always get the truth conditions right: it may be true that all coal is black but not that all pieces of coal are black, because some pieces might be too small to have color; and the milk in the fridge might be scattered about not collected into a discrete volume. And isn’t it consistent to reject an ontology of objects while accepting that the world contains stuffs? There is coal and milk and gold and blood but there are no objects corresponding to these stuffs (“stuff metaphysics”). Stuffs are manifested at locations, according to this view, but there are no real objects that fall under mass terms. You can be an eliminativist about objects but a realist about stuffs. 
There is certainly no obligation to accept the count noun paraphrase, clunky as it is. It looks like an ad hoc attempt to save a theory not a natural semantic analysis.

            Note that this point also destroys Russell’s theory of descriptions: for we also have “The milk in the fridge is off”, and this can’t be paraphrased by quantifying over milky objects (“There is a unique volume v such that v is milk and v is in the fridge and v is off”). Mass term definite descriptions don’t mean the same as corresponding count noun definite descriptions. Russell’s theory works smoothly enough for descriptions that speak of objects but not for descriptions that speak of stuffs—kings of France but not amounts of butter (“The butter on the table is rancid”). True, we can still use quantifier expressions to paraphrase such descriptions, but these expressions can’t be analyzed by using the standard apparatus of variables and domains of quantification. This is no more plausible than supposing that object quantification can be analyzed using an ontology of stuffs—as with “cod fish” and “codfish”. Objects are made of a certain kind of stuff, but talking about objects is not talking about the stuff they are made of. So it turns out that Russell’s theory, as normally formulated, is too wedded to the standard analysis of quantification; we need to broaden that analysis so as to take in descriptions that employ mass terms.
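Russell’s analysis of object-directed descriptions, as standardly formulated, and the attempted mass-term analogue the text discusses, look like this in standard notation (the symbolization of the milk case is my own transcription of the paraphrase quoted above):

```latex
% "The F is G" (e.g. "The king of France is bald"), per Russell:
\exists x\,\bigl(F(x) \wedge \forall y\,(F(y) \rightarrow y = x) \wedge G(x)\bigr)

% The attempted mass-term analogue, via the dummy sortal "volume":
% "The milk in the fridge is off" becomes
\exists v\,\bigl(\mathit{Milk}(v) \wedge \mathit{InFridge}(v)
  \wedge \forall u\,\bigl((\mathit{Milk}(u) \wedge \mathit{InFridge}(u)) \rightarrow u = v\bigr)
  \wedge \mathit{Off}(v)\bigr)
% -- which quantifies over "volumes", precisely the paraphrase that
% scattered milk (not collected into one discrete volume) defeats.
```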

            So what is the correct analysis of such quantification? I don’t know: it seems to defy the usual type of semantic construction built out of words for objects and properties. What does “some milk” mean? Part of the problem is not knowing what “milk” means: does it refer to the aggregate of all volumes of milk (whatever exactly that means), or is it an abstract singular term denoting the Platonic form MILK, or is it not referential at all? And is some milk a part of the reference of “milk”, or an instance of it, or not a relation to it at all? The semantics of quantifier phrases like this (“most milk”, “a lot of milk”, “nearly all milk”, etc.) is obscure. So we really don’t have a decent semantics for quantifier words as they occur in natural language; and predicate logic (“quantification theory”) is a poor representation of the logic of quantifiers. At best it deals with a restricted class of quantificational inferences. People like to proclaim the great success of modern logical symbolism in capturing the logic of quantifiers, but it turns out that this is only half the story; a lot of quantifier logic is not captured by this symbolism. It works nicely for mathematical quantification because arithmetic isn’t about numerical stuff, but for stuff-related quantification it doesn’t even get off the ground. We already knew that mass terms make trouble for the semantic categories recognized in formal logic (are they predicates or individual constants?); well, it turns out that they also upset the usual theory of quantifiers, both standard and non-standard. Logicians might want to drink some warm milk.[1]

 

[1] I wouldn’t be surprised if linguists have noted the linguistic phenomena here recorded, but my knowledge of linguistics doesn’t extend far enough to be sure. There was a flurry of work about mass terms a couple of decades ago in philosophy, but I don’t recall any mention of the difficulties posed by mass term quantification.

Footnote to “Why Does Philosophy Exist?”

[1] I suspect some of the hostility to philosophy in certain academic circles arises from a sense that philosophy has no right to exist—that it is just an institutional holdover from earlier times. For the subject seems to persist without solving its problems and yet there is no good explanation of this fact. So the only reason for its presence must be the heavy weight of academic tradition. It would be different if philosophy dealt with problems about the incredibly small or impossibly remote! Then its lack of progress would be perfectly understandable.

Why Does Philosophy Exist?

It is easy to see why most subjects exist. Geography exists because planet Earth is divided into parts that can be mapped: there are geographical facts that can be ascertained. Physics and chemistry exist because the world contains physical and chemical facts (objects, properties) that can be discovered. Psychology exists because there are minds that can be investigated. Biology exists because there are organisms. Mathematics exists because the world can be counted and measured. Ethics exists because people do right and wrong things. But why does philosophy exist? Is it because the world contains philosophical facts that can be ascertained? That sounds wrong: philosophy consists of problems raised by the non-philosophical world—problems of a philosophical nature. If there were no such problems, philosophy would not exist (the other subjects would still exist even if there were no problems in them). Other subjects may investigate the same phenomena as philosophy, but they do so non-philosophically (the “adverbial theory of philosophy”). So what produces this distinctive type of problem? Following the example of the other disciplines, we might suggest that the world contains philosophical objects, properties, events, and facts–but that sounds like a category mistake. We must rather ask what gives rise to philosophical problems, this being what constitutes the subject matter of philosophy.

            Three suggestions are familiar: reality, concepts, and language. Thus it might be supposed that reality contains philosophically problematic entities such as consciousness and free will. But reality isn’t uncertain about whether it exists or what its nature is: things are not objectively philosophically problematic. Things are philosophically problematic only in relation to us (or other intelligent beings). As God surveys the world nothing stands out to him as philosophically problematic. The uncertainties and theoretical rivalries of philosophy are not mirrored in objective reality: the world is what it is and not some other thing. If the mind is really an immaterial substance, then that is what it is, philosophical disagreement be hanged. Philosophy arises from the state of our knowledge not from the state of reality. It is not that one kind of entity is intrinsically “more philosophical” than another—though one kind of entity can produce more philosophical puzzlement in our minds than another. Meaning is more philosophically challenging than syntax and phonetics, but to an omniscient mind they would be on a par. So the source of philosophical problems (i.e. philosophy) can’t be reality considered sub specie aeternitatis. A second idea is that philosophy arises from the nature of our concepts: our concepts are inadequate or misleading in some way, and this causes philosophical perplexity in us. Perhaps they are superficial or confused or contradictory or just plain crude—in any case, they bring philosophy into existence. If they could be revised or reformed or replaced, we could make philosophy disappear by removing the source of its problems. Philosophy exists because of the defective nature of our conceptual scheme. This view raises many questions: How exactly do our concepts give rise to these problems? Why do we have concepts that lead to such problems? 
What is it about our concept of consciousness, say, that leads us to the philosophical difficulties we encounter? What would a concept of consciousness be like that did not lead to philosophical problems? It is hard to believe that our concepts could have philosophical controversy baked into them, and into only them; they seem to function perfectly well most of the time, so why do they trip us up when we start philosophizing? It can’t be their constituent structure or their ease of combination. How could human thought necessarily lead to philosophical conundrums—in virtue of what property of it might this happen? Third and familiar, there is the claim that language is the source of all the trouble: reality itself is not philosophically problematic, our concepts are in good order, but our spoken language systematically misleads us (“bewitches” us). But why should our language exercise such enormous power—the power to generate the ancient and venerable problems of philosophy? Can’t we just ignore it, as we ignore parts of it already? It doesn’t have to dominate our thought any more than its sounds do. And what serious philosophical problem has ever been resolved by exposing the supposed logical defects of our natural spoken language? The whole idea is preposterously optimistic. If it were on the right lines, we should have put a stop to philosophy long ago, by judicious attention to linguistic forms. It is simply not credible that philosophy exists because our language is misleading (despite a spate of twentieth century enthusiasm for the idea). So the standard suggestions don’t work.

            But we have not yet run out of possible theories. Might it be that philosophical issues are like political and practical issues in the sense that there is something to be said for several different positions on them? Is Scottish independence a good idea, should the British monarchy be abolished, is Turgenev as good a writer as Tolstoy? There is irresoluble controversy about such questions, as there is about philosophical questions, so perhaps this is why philosophy exists. But this is a bad analogy and the underlying conception of philosophy is mistaken. The issues cited are practical, political, or aesthetic, but philosophical issues are not like that; and there is surely a fact of the matter about the relation of mind to body not just a debatable question about which opinions may reasonably differ. This is why philosophers don’t say, “I can see different points of view on this issue, but I think the wisest course is to adopt theory T”. In this respect philosophical problems are like problems in the sciences—questions about what the facts are, not questions about what stance to adopt all things considered. It doesn’t come down to what to do or think “on balance”. Nor do philosophical problems owe their existence to remoteness in time (like history) or distance in space (like astronomy) or being too small to see (like atomic physics) or being private and unobservable (like psychology); many of them are about present-day perceptible nearby things. Philosophical problems have an obscure etiology not explicable in terms of the usual kinds of inaccessibility. We have viable theories of ignorance in other fields, but in philosophy the ignorance is itself mysterious: we don’t know why philosophical problems are so difficult. In fact, it really shouldn’t be hard to know what knowledge is, say, since we have knowledge and can introspect our epistemic state; and consciousness is arguably the best-known thing there is, yet completely mysterious. 
Or time, space, matter, causality, value, perception, meaning, and so on through a familiar list. These are all extremely proximate and yet extremely puzzling.

            Here is another idea, suggested by Thomas Nagel’s work.[1] Philosophy arises because of a clash between subjective and objective viewpoints: for example, we can view ourselves from the inside by adopting a first-person point of view, or we can view ourselves from the outside as embodied beings in an objective world of space and time. Now we are approaching the question in the right way, trying to find what is unique to philosophy: perhaps the problems of philosophy arise from a need to integrate clashing points of view generated by a difference between subjective and objective perspectives. But there are problems with this approach, concerning necessity and sufficiency. It doesn’t seem like a necessary condition for the existence of a philosophical problem that it involves a clash of subjective and objective perspectives, since the problems can arise from within a purely objective perspective—for example, relational versus absolute theories of space, or different theories of time, or whether freedom is the ability to do otherwise or just doing what you want, or whether perception is direct or indirect, or whether causality is just constant conjunction or involves some sort of necessity. These problems are independent of subjective and objective points of view. And the difference between subjective and objective viewpoints also arises in other areas and yet philosophy is not the result—as with our different perspectives on the physical world. The perceptual perspective coexists with the abstract mathematical perspective and yet we don’t sense a deep philosophical problem here, because we know that we are conscious beings embedded in a physical universe, so there should be different ways of apprehending the physical world. 
The mere difference between subjective viewpoints and (more or less) objective viewpoints is not sufficient to generate a distinctively philosophical problem.[2] So these materials don’t provide an adequate explanation of the existence of philosophy, though they are no doubt relevant to some philosophical problems.

            One salient aspect of philosophy seems characteristic of it, namely that we have no surefire method of verification (or falsification) for philosophical claims. We don’t have the method of experiment or sensory observation, nor can we use the method of proof (as in mathematics). All we have are “intuitions” and “arguments”: we don’t have empirical observation, measurement, and calculation. No wonder we face unsolvable problems—we don’t have a method for solving them! We have questions but we don’t have procedures for answering them comparable to those used in other disciplines. There are two replies to this diagnosis. First, we shouldn’t underestimate the methods we do use in philosophy, such as the thought experiment and logical deduction; these methods can lead to genuine philosophical knowledge, typically by producing counterexamples to philosophical theses. And we shouldn’t overestimate the verification methods of other disciplines such as psychology, linguistics, history, and literary theory. A lot remains unverifiable in these fields, yet they don’t count as philosophy. Second, what if astronomy, physics, and chemistry lacked the methods of verification they now possess—would that make them into branches of philosophy? It is only contingent that we can verify claims in these fields, and removing the verification procedures doesn’t convert them into parts of philosophy—they would just be unverifiable science. It has to be something about the nature of the problems as such that makes them into philosophical problems, not the availability or otherwise of verification procedures. But what is that? It is true enough that philosophy lacks apodictic methods, but that seems neither necessary nor sufficient to constitute a subject as philosophical.

            Maybe the problem is epistemic: we suffer from a serious epistemic gap and this is what causes philosophy to exist. Not to put too fine a point on it, we don’t know what we are talking about. Our knowledge of reality is glancing and pragmatically based, geared towards specific practical ends, not designed to reveal the whole truth about the universe, and we labor under this constitutional limitation. For example, there are philosophical problems about color—is it subjective or objective, real or unreal, relational or non-relational? These problems are hard to resolve, though we see colors all the time and have concepts and words for them. It seems that we just don’t know what colors are—what constitutes them. We know what wavelengths of light are (up to a point), so there is no comparable philosophical problem about wavelengths; but we really don’t grasp what colors are, though we are aware of color appearances. Likewise, we know what neurons are but we don’t know what consciousness is—only the way it appears to us. We know there can’t be neurons without brains and chemicals, but we don’t know whether there can be consciousness without brains and chemicals—though we may have firm philosophical opinions on the matter. We know what syntax and phonetics are, but not meaning. We know what ethical behavior is (again up to a point), but we don’t know what ethical value is. We know what digestion is, but we don’t know what knowledge is, though both are attributes of a living organism. We know all about muscle contraction, but we don’t know what freedom is—we have no science of it, no physiology. So the things of philosophical interest are the things that fall outside of our epistemic capacities in some crucial respect (not in all respects). And if you don’t know what something is, you are not going to have very good theories of it, especially if your ignorance is principled and systematic. Why is time so philosophically problematic? 
Because we really don’t know what time is—that’s why. Thus we are led to entertain the Epistemic Gap theory of the existence of philosophy (precedents of it may be found in Kant and others).

This may seem to answer our puzzle, though it would need a lot of spelling out, but there is a nagging question that remains, viz. why are we thus ignorant? Only if we knew the answer to that question would we know why philosophy exists. Someone might say, “Of course we don’t know what these fundamental features of reality are—that’s precisely why we are philosophically at sea—but that just restates the question, i.e. why do philosophical problems exist?” And indeed it is a real question why the things that are so familiar to us are so removed from our understanding: why should time, say, be so elusive, so maddeningly opaque? We don’t even know whether it can exist in a world without change! Our ignorance about what things are seems gratuitous, strange, and hard to explain. Beings not afflicted with such ignorance might well not recognize philosophy as a separate subject, finding it all plain sailing, so there must be a question as to why we are so afflicted—but we have no good answer to that question. Maybe it has something to do with evolutionary demands (doesn’t everything?) but that is at best a theory sketch or chapter title. And then there is this question: is it really credible that knowledge of what these things are would instantly terminate philosophical uncertainty? Could it not simply accentuate the problems? Now that we see exactly what consciousness is we are struck dumb about how it relates to the brain (just as Descartes was when he concluded that the essence of mind is thought not extension). Our philosophical ignorance is really a very peculiar thing, not at all easy to penetrate: we have no idea of what gives rise to it. Even if the Epistemic Gap theory is true, the gap is quite mysterious, unlike other epistemic gaps, an incomprehensible type of ignorance. So we don’t know why philosophy exists—why the human mind is susceptible to it. Why is there such a thing as philosophy?

[1] See The View From Nowhere (1986). I’m not saying Nagel explicitly adopts such a view of the nature of philosophy, but the view might be inspired by his ideas.

[2] Subjectively the Müller-Lyer lines look unequal in length while objectively we know they are equal, but this is not a philosophical problem. Subjectively the Sun appears to rise in the morning while objectively we know it doesn’t rise at all, but again this is not a philosophical problem.

Psychological Economics

Economics tells us the relationship between supply, demand, and price: the higher the supply, the lower the price; the higher the demand, the higher the price; the higher the price, the higher the supply; the lower the price, the higher the demand. But what are supply, demand, and price? If by supply we mean the quantity of goods actually available, then the law breaks down in conditions of ignorance: people will not pay a certain price for a good if they don’t believe it has a certain level of scarcity. If you believe potatoes are a scarce commodity, you will pay highly for them (given a certain level of demand) even if they are not in fact scarce; and if you believe that diamonds are common, you will not pay highly for them even if they are in fact scarce. So the law of supply should really be a statement about perceived quantity, not actual quantity: price is a function of the perceived amount of a particular good, not the actual amount. In conditions of ignorance objective quantity and perceived quantity can come apart, and then price follows perceived quantity. Such ignorance is not uncommon and may be relied upon by suppliers (“Quick, while supplies last!”). The underlying law is psychological, not psychophysical, and it is robust.

            What is demand? Not overt behavior as such but desire: how much people desire a particular good. If people desire something a lot, they are willing to pay more for it; if less, they are willing to pay less. So price is a function of desire: the more desirable the more expensive. Putting the two laws together, we can say that if people desire a good G and believe G to be in short supply, then they are willing to pay a higher price for it than if they don’t desire it or believe it to be readily available. Two psychological variables conspire to generate a given price. But what is price? Not just the amount of money (legal currency) a person is asked to pay, since bartering transactions also count as economic—here price would be the amount of a certain good you would be willing to give in exchange (a pint of milk for a bushel of hay, say). But what determines what you would be willing to give in exchange? Clearly it is the sacrifice you would make of other desires you might satisfy given that you make the exchange in question. The more money you give for G the less you have to buy G’, which you also desire. Price is really the amount of desire satisfaction you agree to sacrifice; it is defined in terms of desire dissatisfaction. So the price variable is also psychologically defined. The law of supply thus says that people are willing to have certain desires not satisfied as a function of their beliefs about the scarcity of the good in question (given a fixed level of desire for that good). The law of demand says that people will sacrifice more of their desires the higher their desire for a particular good is, i.e. pay more for it. All of this is purely psychological—supply, demand, and price. The operative economic law is a psychological law relating beliefs and desires. 
People have beliefs about how rare goods are, as well as desires for those goods and dispositions to favor some desires over others—these psychological facts determine their economic behavior. Economics at the basic level is the study of how these psychological variables interact.[1]
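The claim above — that price is a function of two psychological variables, perceived scarcity and desire, rather than of objective quantity — can be illustrated with a toy model. This sketch is entirely my own construction (the function name, the linear form, and the 0-to-1 scales are illustrative assumptions, not anything in the essay); it merely dramatizes the potato/diamond point:

```python
# Toy illustration (my own assumptions, not the author's formalism):
# willingness to pay tracks PERCEIVED scarcity and desire, so objective
# quantity drops out of the psychological law entirely.

def willingness_to_pay(desire: float, perceived_scarcity: float) -> float:
    """Illustrative rule: payment rises with desire and with believed
    scarcity, both scaled 0..1. The linear form is an arbitrary choice."""
    return desire * (1 + perceived_scarcity)

# Potatoes believed scarce (though actually plentiful) vs. diamonds
# believed common (though actually scarce), at equal levels of desire:
potatoes = willingness_to_pay(desire=0.6, perceived_scarcity=0.9)
diamonds = willingness_to_pay(desire=0.6, perceived_scarcity=0.1)
assert potatoes > diamonds  # belief, not objective supply, sets the price
```

Note that actual quantity appears nowhere in the function's inputs — which is the essay's point that the operative law is psychological, not psychophysical.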

            And not just people, animals too. Suppose a hungry tiger spots a gazelle and wonders whether to give chase: she has a certain level of desire for gazelle flesh and she is aware of the price she will pay by giving chase—exhaustion and the likelihood of injury (she doesn’t desire either of these things). Will she pay that price? Not if she believes gazelles are plentiful and available, and therefore can be obtained at a lower price. She exemplifies the same psychological structure as a human economic agent: supply, demand, and price (desires that will be sacrificed by satisfying the desire for gazelle flesh).[2] She doesn’t tend to go after big fast gazelles because the price will be higher to obtain their flesh, so she makes a calculation about the gazelle before her. Animals are subject to the same “economic” laws as humans when it comes to obtaining goods that incur a certain cost (as in climbing a tree to obtain the luscious fruit near the top). None of this has anything essentially to do with hard currency, industry, exchange rates, banks, etc. Economics is fundamentally about desires, actions, and beliefs regarding availability (especially community-wide beliefs). What sacrifices will I make in order to satisfy a desire, given my beliefs about the availability of the means of desire satisfaction? That is, what price will I pay, given my level of demand and my beliefs about supply? The price a vendor can charge is conditioned by the degree of demand for his product and the buyer’s beliefs about the scarcity of that product. All this proceeds at the level of psychology, so the laws of economics reduce to psychological laws. Economics is a department of psychology—the department concerned with satisfying desires in a social group.

Colin McGinn

[1] There is perhaps some resistance to this way of thinking among economists because it makes their discipline “subjective”, or concerned with the “private”, so they prefer to conceive of it in terms of objective “physical” things. But this is a complete distortion of what economics is really all about—a misguided attempt to emulate physics.

[2] It is true that the gazelle is not a voluntary participant in this interaction, unlike in a typical economic exchange, but that is irrelevant to the laws of supply, demand, and price that the tiger is subject to. These apply whether the other agent benefits from the interaction or not. If I am debating whether to buy a certain car at a certain price, I am only concerned with my level of desire and my beliefs about the scarcity of cars; I don’t care whether there is another agent who will benefit from my purchase. The laws of supply and demand are individualistic in this sense. Since these laws form the core of economics, that science reduces to psychology in the manner described.


America: A Theory


Gotten: Americans say it, the British don’t (nor do Australians and South Africans). One might suppose that Americans started saying it some time after the first British settlers landed in the New World, thus marking themselves as different from their British forebears. But this is wrong: the British were already saying gotten when they got to America (Chaucer, Shakespeare, et al) and the settlers simply continued the tradition, while the British stopped saying it. So subsequent Americans were actually more like the original Brits than later Brits were in this respect. The reason, evidently, is that linguistic forces or fashions operated in England to change the language, which were not operative in the new country. American English was isolated from these forces and so stayed the same. America didn’t add; England subtracted (rightly or wrongly). This kind of phenomenon obviously has wider application: an animal species might spread to a location in which it remains essentially the same while the original population evolves into something different; art forms could persist in a new setting while in the old setting they undergo change; social customs might continue elsewhere while disappearing at home. The original location may be susceptible to influences that don’t apply in the new location, especially if the new location is geographically isolated. Just to have a label let’s call this the Regressive Relocation Effect.[1] Of course, relocation might have the opposite effect: the new place will occasion changes not occurring in the original place, depending on the forces operating there (the Progressive Relocation Effect). But in the right circumstances we could have stasis in the new location combined with revolution in the old location. In an extreme case the new location could find itself hundreds of years behind the old location as the years roll by—still riding horses, wearing top hats, flogging miscreants, etc. 
Change happens for a reason and the reasons that apply at home may be absent abroad. A geographically new country might thus become a culturally old country as time passes. It might, as we say, become stuck in the past. In such a case citizens of the new country might be more like past citizens of the old country than present citizens of it are. That is what happened with gotten.

            This suggests a theory: America is more like old England than present England is in certain important respects. The America of today is more like the England of 1607 than the England of today is. Not in all respects, obviously, since many events took place in America in the subsequent years that did not take place in England, specifically the waves of immigration from many countries that transformed the culture of the United States—not all of American culture reflects the culture of Great Britain in the seventeenth century. But certain aspects of American life stand out as similar to earlier phases of British life: I am referring to puritanism, racism, tolerance for violence, the penal system (including the death penalty), philistinism, and animal cruelty. These are remnants of conditions that obtained long ago in British society but which have gradually been removed or diminished by a range of influences. Great Britain is itself a complex society full of competing forces, and it is geographically close to other European and Scandinavian nations, as well as having ties to what is called the Commonwealth (ironic name), so it is open to influences that don’t impinge directly on the United States. There are agents of change acting on Britain that haven’t acted on America, or not to the same degree. Accordingly, America has not changed so much in the respects indicated compared to Britain. The persistence of slavery and allied institutions is the obvious example: America supported slavery for longer than Britain, thus exhibiting a similarity to an older Britain that outlived that of Britain itself. Americans were “more British” than the British when it came to slavery (and most of the slave owners were themselves of British descent). The same can be said of the other societal traits I mentioned: puritanism persisted with less opposition in America, as did claims of racial and national superiority, as did penal practices. 
England became more civilized, less barbaric, as time went by (to put it normatively), while America retained more of the old outmoded ways. Of course, this is a matter of degree, and America may be more advanced in other respects, but it seems clear that it contains elements of a culture that has withered away more decisively in the old country. Let me put it bluntly: Americans have the mentality of Englishmen of a century ago (plus or minus a bit). You can see this in prevalent attitudes towards science and learning, in religiosity, in an acceptance of violence as a way of life, in the virulence of white supremacy, and other traits. There is, I’m sorry to say, a fundamental lack of sophistication in the American mind, akin to that which obtained in earlier iterations of the British mind (and which still exists in parts of the latter mind). The reasons for this are no doubt complex, but geographical isolation must surely rank high—nothing proximate is forcing the American mind to change. It is why, to many Europeans, America seems like a savage and barbarous place—technologically modern but spiritually and morally backward. America, it is felt, should really be more civilized than it is (no universal healthcare, for example). My theory is that this fact results from the Regressive Relocation Effect—it is gotten writ large. Americans are actually more British than the British—more like the British of yore. Britain has changed more in the last four hundred years than America has, owing to a variety of internal and external forces. British culture has been more porous than American culture during this period. The American psyche is accordingly more rooted in its historical connections to old England than the contemporary English psyche is. Of course, both are rooted there still, but in America the roots go deeper (notice the strange reverence for the British royal family in the USA). 
The British in Britain have been more psychologically deracinated than the British in America during the last few centuries.

            If I were to single out the historical event, or series of events, that mainly caused this divergence, I would cite Britain’s gradual loss of empire. This loss wrought huge changes in the mindset of the British, but nothing comparable has ever overtaken the USA. America still has considerable influence in the world (despite some nervousness about China) and so faces little threat to its global power, whereas England has had to accept its vastly diminished status. The result is greater humility in the one and continuing arrogance in the other (nothing compares to American self-righteousness). America today is like England at the height of its colonial power and cultural clout, just so pleased with itself. England has changed tremendously in this respect since the first English settlers arrived on American soil, but America has not suffered such changes, so it has not had to adjust to them. It could therefore continue in the same old way and feel justified in doing so. The American mind is still like the British mind of its colonial heyday, and descends from it, while that type of mind has virtually disappeared from British shores. But I must add that the loss of empire is just one source of historical change and doesn’t explain everything about the divergences between the two countries over the last few centuries. The point I have wanted to urge is that the cultural divisions between the two countries are the result of historical continuity across continents not cultural discontinuity: America didn’t part ways with England; England did. America is England suspended in time. The sources of cultural innovation in America were largely brought by other immigrants (voluntary and involuntary) not by descendants of the original British settlers; the British simply persisted in their old ways (puritanical, punitive, xenophobic). This is why American culture is as fragmented as it is (inter alia). 
By contrast, the British in Britain were forced to change their ways by a variety of circumstances: geographical location, internal dynamics (class warfare, industrialization, propinquity with other countries, etc.), and loss of empire. America is now more traditionally British than Britain (and not in a good way). The cure for its ills is therefore to stop being so tied to old England. The revolutionary war was never properly completed. This is quite compatible with recognizing that Britain has also had some good effects on American society, but along with these have come various bad legacies, with which we are distressingly familiar. The British may hanker after the past (Brexit etc.) but Americans are the past—what Britain used to be like. Americans are still saying gotten.[2]

 

[1] The effect derives from the Law of Societal Inertia: societies don’t change unless a force is applied to them that makes them change. All societal changes require positive interference; there is no spontaneous change. The interfering factor can be economic, military, moral, scientific, religious, etc. To continue the analogy with physics, America is like a closed system stably preserving its initial state over time.

[2] Compare the metric system: Britain was forced to adopt the metric system in the late 1960s and not without considerable reluctance, but America has never followed suit, feeling no pressure to make the change. It isn’t that Britain always had a metric system and American innovation invented the non-metric system; no, it just clung on to the old system. Similarly with capital punishment: it’s hard to stick with this barbarity when all around you are abandoning it (continental Europe), but in America there is no one around you pushing for a more humane legal system. I could go on: racial segregation, factory farming, lack of universal healthcare, massive income inequality, punitive drug laws, gun proliferation, police brutality, and so on. America is exceptional in these ways, partly because there is no pressure from surrounding countries (Canada and Mexico not having the requisite clout). Nor do we see any rebellion against the past such as we see with countries once ruled for centuries by monarchs. Thus America finds it easy to be complacent about its British heritage (no one in America hates the British, despite that war for independence). America is still dominated by its British inheritance, but that inheritance dates back a long way and has not been subjected to the kind of criticism that has radically altered it in its place of origin.


Footnote to “Notes on Nonsense”

[1] There is certainly something liberating and amusing about nonsense: hence the popularity of the likes of Edward Lear and Lewis Carroll. Nonsense has its value, its virtues. It is hard to define, but we know it when we see it. It isn’t the same as mere impossibility, but is closer to the notion of intelligibility, itself hard to define. The OED gives only “words that make no sense” for “nonsense”: this leaves it up to us to define what “making sense” means. That concept seems multifarious and vague. A taxonomy of nonsense would be useful.


Notes on Nonsense


We don’t talk about nonsense enough. Let’s show it some respect. Nonsense belongs to language not reality: there are no nonsensical facts or objects or properties; there are only nonsensical words or strings thereof. Reality itself is completely…what? We have no word for the opposite of “nonsensical”—the word “sensical” exists neither in ordinary discourse nor in the OED.[1] I don’t know why this is; it ought to make perfect sense (be sensical). In any case, reality lacks the property of being nonsensical. With respect to language we can distinguish two kinds of nonsense: the grammatical and the ungrammatical. The grammatical kind is exemplified by “Colorless green ideas sleep furiously” and “Twas brillig and the slithy toves did gyre and gimble in the wabe”; the ungrammatical kind is produced by flouting the rules of grammar, as in “It up dog random” and “Rainbow the over”. The first thing to notice is that nonsense exists in close proximity to sense: it’s easy to get from sense to nonsense. The same mechanisms that generate sense can generate nonsense: either rules of grammar or simple word concatenation. The grammatical nonsensical strings obey normal grammatical rules and merely juxtapose clashing semantic units, or else employ nonsense words in a grammatical form. The ungrammatical cases simply join perfectly meaningful words that don’t clash with other words. Nonsense doesn’t arise by going completely outside the normal workings of language; it occurs within language. This is why it is wrong to describe nonsense as simply lack of meaning or sense: random squiggles or sounds are meaningless (like bricks and mortar) but they are not instances of nonsense. Nonsense presupposes functioning language. In fact nonsense is a type of meaning not a lack of meaning—the nonsensical type. There is a lot of meaning in the sentences I gave earlier; they are not semantically lifeless. 
They have, we might say, meaningless meaning—a second-class, degraded kind of meaning (“para-semantic meaning”). What the sentences express is neither true nor false—it is not “propositional”—but it is imbued with meaning of some sort. This makes them puzzling from a theoretical perspective: how do standard theories of meaning apply to them? How, say, do truth conditions theories of meaning apply to grammatical nonsensical sentences, or Gricean theories, or use theories, or verification theories? Here we seem to have a type of meaning that violates all such theories.

            The question becomes sharper when we ask whether nonsense possesses a logic. In the case of ungrammatical nonsense we can rule this out, since logical relations need at least the semblance of statement making; but it is clear enough that grammatical nonsense exhibits logical properties. Such sentences can be conjoined, disjoined, negated, and put into conditionals; and the normal logical rules will apply. For example, conjunction elimination is still valid, as is modus ponens. From “All toves are slithy” and “Socrates is a tove” we can infer “Socrates is slithy”: the form of the sentences allows this inference irrespective of the content. But we can’t say that validity here is a matter of truth-preservation, because nonsensical sentences are never true (or false). The meanings (or quasi-meanings) entail one another, yet there is no truth-value to be preserved. We know that if green ideas sleep furiously then they sleep, but it is neither true nor false that green ideas sleep furiously. So is our usual approach to logic too narrow? Do we need a special logic of nonsense? Some have urged the need for a “para-consistent logic”; do we also need a “para-semantic logic”? We need a logic that can handle category mistakes (“The number 2 is cheerful”) and analytic falsehoods (“Janet is a happily married bachelor”); and it looks like we also need a logic that can handle outright nonsense. It is possible to reason with such sentences, so they ought to fall within the scope of logic. But if nonsense has a logic, it must be meaningful.
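The point that validity is purely formal — that the inference from “All toves are slithy” and “Socrates is a tove” goes through without consulting meanings — can be made vivid with a small sketch. This is my own illustration (the function name and tuple encoding are assumptions, not anything in the essay): a syllogism checker that manipulates uninterpreted strings, and so works on nonsense predicates exactly as it works on sensible ones.

```python
# Illustrative sketch (my own construction): a purely syntactic rule of
# inference. It never consults what the predicates MEAN, so nonsense
# predicates like "slithy" and "tove" are handled exactly like "mortal".

def barbara(all_A_are_B: tuple, a_is_A: tuple) -> tuple:
    """Given ('A', 'B') encoding 'All A are B' and ('a', 'A') encoding
    'a is an A', return ('a', 'B') encoding 'a is B'."""
    (A1, B), (a, A2) = all_A_are_B, a_is_A
    assert A1 == A2, "middle term must match for the inference to apply"
    return (a, B)

# Nonsense premises license the inference just as sensible ones do:
assert barbara(("tove", "slithy"), ("Socrates", "tove")) == ("Socrates", "slithy")
assert barbara(("man", "mortal"), ("Socrates", "man")) == ("Socrates", "mortal")
```

The checker preserves form, not truth — which is exactly the puzzle the paragraph raises: the inference is unimpeachable, yet there is no truth-value for it to preserve.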

            Can nonsensical expressions refer? Or better: can speakers refer using nonsensical terms? Can we contrive a Donnellan case in which a speaker picks out an object for an audience even though the term used is pure nonsense? Sure we can: someone may remark at a party, “The slithy tove in the corner is a famous philosopher”, thereby picking out an individual of slippery but dapper appearance (or just a guy known to like the works of Lewis Carroll). The definite description “the square root of Paris” should receive the same semantic analysis as “the Queen of England”: semantically these are expressions of the same general category. The demonstrative “that colorless green idea” functions as a singular term subject to a Kaplan-style analysis despite its nonsensical status. Couldn’t we introduce a proper name “Zippy” by stipulating that it denotes whatever “the square root of Paris” denotes, viz. nothing? True, there are no nonsensical existing entities for such terms to refer to, but language doesn’t know that; it allows us to generate nonsense expressions of all semantic categories. Some of this nonsense may even have a use—a role in a language game—and may even be used to refer to ordinary things, as with that slippery dapper chap in the corner. Nonsense does not preclude acts of reference and other linguistic practices—whole books may be composed of it (Finnegans Wake). And nonsense poems are precisely poems.

            Now that we have a feel for the reality of nonsense (if I may put it so) we can pose some more adventurous questions. Could there be an ideal language that precluded the possibility of nonsense? It is hard to see how there could be: language allows for unlimited grammatical (and ungrammatical) combinations—that is part of its inherent creativity—and this will always permit the possibility of nonsensical combinations. Hence the proximity of sense and nonsense in the mechanisms of language production; the same thing is capable of producing both. Nonsense is as embedded in language (as a formal apparatus) as sense is. We don’t have much use for nonsense most of the time, but it is always latent in the linguistic system: “colorless green ideas” is as much part of language as “bright red flowers”. The notion of a logically perfect language incapable of producing nonsensical monsters is therefore an illusion. Grammar is nonsense-neutral (of course not with respect to ungrammatical nonsense). We could even say that meaning itself is nonsense-neutral, since nonsense sentences have a kind of meaning. Then here is a second (vertiginous) question: Is there room for a brand of skepticism that questions the meaningfulness of our discourse? I don’t mean the familiar positivist-Quine-Kripke-Wittgenstein types of semantic skepticism, but rather a type of skepticism that dangles the possibility that we are talking nonsense all the time, despite our belief that we are making sense. Some of Lewis Carroll’s characters talk nonsense without knowing it—might we be in like case? That is, is the belief that we are talking sense fallible? I say, “Little white lambs sleep peacefully”: have I said something analogous to “Colorless green ideas sleep furiously” without knowing it? Don’t I sometimes say things in my dreams that I later realize were nonsensical? Might I be dreaming all the time and hence possibly talking nonsense constantly without knowing it? 
It sounds impossible to maintain that my simple utterances might actually be nonsense, but the skeptic is a resourceful enemy: can I be certain that “It’s raining” isn’t nonsense? Is this as certain as the Cogito? Maybe my brain is generating nonsensical strings and then disguising them as making sense. The mind can play peculiar tricks. Haven’t some people been totally convinced that their utterances make sense and yet upon examination they turn out to be nonsense (the holy trinity, Newtonian absolute space and time, the unrestricted concept of a set)? Nonsense can be a sneaky thing. So maybe no one has ever said a “sensical” thing ever—all is nonsense. In the beginning was the nonsense word. Maybe “I think, therefore I am” is itself a piece of nonsense! Judgments of what makes sense don’t seem immune to skeptical doubt. The Wittgenstein of the Tractatus was fond of saying that many of our ordinary utterances are strictly nonsense—what about an ultra-Wittgenstein who thinks that all our utterances are really nonsense? It’s all “slithy toves” and “colorless green ideas”.

            It has been maintained that a principle of charity must govern our practice of interpretation: we must find the other largely logical and truth believing. Some have wished to weaken this to a principle of humanity: we must find the other rationally explicable, though not necessarily logical and truthful. But we can picture a further weakening to allow for the possibility of the nonsensical other—the alien who talks a lot of nonsense, perhaps complete and total nonsense. Couldn’t we be forced to conclude that the linguistic behavior of the alien consists mainly, or wholly, of sheer nonsense? Isn’t this what Alice concludes about some of the aliens she encounters through the looking glass? I don’t see why not: perhaps our target tribe has a malfunctioning brain (by our standards) that produces only nonsense; perhaps they utter nonsense all the time just to amuse themselves; perhaps there is a religious taboo prohibiting sensible utterance. In any case they spout nothing but nonsense from dawn till dusk: it’s all jabberwockies and bandersnatches, relieved only by the odd colorless green idea. There need be no assent to any of this nonsense on their part, no “holding true”, and hence no route from such data to an assignment of meaning; still, they do mean something by their utterances, even if we deem it a kind of substandard meaning. Their spoken language has meaning of a sort—it isn’t devoid of all semantic content—but it doesn’t map onto ours in the way envisaged by proponents of the principle of charity. As Wittgenstein would say, they play a particular language game, and within that game language has a use, a purpose. They may think perfectly sensible things, and know that others do too, but their actual speech is knowingly made up of nothing but nonsense; they may wonder at the pedestrian verbal ways of sense-making speakers such as ourselves.[2]

            Nonsense is a kind of robust semantic presence not a mere semantic absence. It isn’t the lack of meaning but a special type of meaning. Philosophers of language have gradually expanded out from what they conceived to be central cases of meaning (usually verifiably true sentences) to other types of meaning (imperative meaning, performative meaning, context-dependent meaning, emotive meaning, etc.); I am suggesting we expand out a stage further to include nonsensical meaning. Even Wittgenstein, with his inclusive notion of the language game, didn’t see fit to include nonsense as a legitimate form of meaning, but there are good reasons to bring nonsense into the semantic fold. Talking nonsense is one form of talking, one way that language manifests itself. And nonsense is as much part of language as sense; indeed it exemplifies the creativity that is the essence of language. Perhaps we should do more of it.

[1] Actually the word “sensical” is not unheard of, but it is not generally accepted as part of the English language.

[2] Could there be a completely nonsensical conceptual scheme? Now that is pushing it: how could the language of thought be composed of nothing but nonsense? Could all thought be inherently nonsensical? In our case there is always a bedrock of sense on which nonsense is parasitic, but in the case of the nonsensical conceptual scheme it is nonsense all the way down. This is hard to make sense of (perhaps it is a species of nonsense). 
