Elements of Economics

Economics is aptly defined as the science of scarcity (we could say “systematic study” if “science” seems too strong). Philosophy of economics is then the philosophy of scarcity, or of the science thereof. It is in this vein that I write the present words. What, then, is scarcity? Here is a good definition: “Insufficiency of supply; lack of availability, esp. of a commodity, in proportion to demand; shortage” (Shorter OED). It does not mean the same as “rare”, which means “occurring very infrequently”; “scarce” connotes a lack of quantity relative to demand, not just statistical infrequency. Things can be rare without being scarce in the economist’s sense (e.g., the excrement of a dwindling species), because there is little or no demand for the thing. When we speak of demand, we don’t mean primarily a speech act of demanding (“I demand a pound of apples!”); we mean desire or need—preference, utility, degree of wellbeing. We mean a psychological state, expressible in demand behavior. So, we can say that scarcity is proportional to desire and availability: if desire is high and availability low, then the commodity is scarce; if desire is low and availability high, then the commodity is not scarce. Some commodities are not scarce at all: space, time, air, ground to walk on—we have as much of these things as we want (in normal circumstances). Economics is the study of scarcity in this sense: human action in relation to commodity shortfall (I include services under “commodity”); it is about what happens when scarcity obtains. If there is no scarcity, there is no economics, either the science or the behavior. Economic action is action under conditions of scarcity, not plenty. There is no economics in heaven; economics results from finite resources, insufficient goods, shortages of supply. (In hell by contrast everything desirable is scarce and everything undesirable is plentiful, especially pain.) 
In the real world, scarcity is a natural fact, a fact of nature, and economics deals with the human response to it.

It might be retorted that this definition is both too narrow and too wide. Too narrow because of non-human economic agents (animals, aliens), and too wide because it includes types of behavior not normally classified as economic. The first complaint is entirely correct: we want to make room for animal economic action, though it may be primitive and disputable (e.g., cleaner fish symbiosis), and clearly non-human intelligent aliens could have economies. So, let’s expand the definition to include non-human creatures that live under conditions of scarcity (though I will continue to speak of human agents for convenience). The second complaint is more difficult to resolve and calls for some revision of intuitions and common speech. Consider a predatory tiger hunting for food: the food is scarce relative to the tiger’s desire, but should we say that chasing and killing the prey is an economic act? The tiger is responding to scarcity in a desirable commodity, and is making a sacrifice in order to obtain the commodity in question (“paying a price” in energy expenditure and risk); but the prey animal is not doing anything similar—there is nothing in it for it. There is no exchange of goods, no voluntary mutually beneficial transaction; there is just the predator forcibly taking the life of the prey (compare simple robbery). So, is this an example of economic activity? There is scarcity, need, actual cost, and opportunity cost, so far as the tiger is concerned; but the prey animal is operating under no such set of conditions—it is simply running for its life. If the prey animal were to offer part of itself to the tiger, thus sparing the tiger the cost of chasing it, while preserving its own life, then we might have a case of economic exchange; but no such thing is going on as things actually are. I propose that we describe such a case as “asymmetric economic behavior”: it is an economic calculation for the tiger, but not for the prey. 
There is no economic exchange, but one party is acting under economic imperatives—coping with scarcity by sacrificing some of its own well-being (its own wealth, we might say). In the same way a human robber is acting under economic constraints by responding to scarcity by acts that carry costs (including opportunity costs). He is thinking economically. True, this is not your typical purchase, but it does meet the conditions for cost-benefit calculation under scarcity. We can then define normal economic exchange as “symmetric economic behavior”, acknowledging a real distinction within the class of economic actions. The case is similar to the question of whether economic action is possible for an individual in isolation: can there be a “private economy”, an economy consisting of a single individual? It might seem that this ought not to be possible, but further reflection suggests that it is just an outlying case not typically encountered but conceptually possible. For consider a man alone on a desert island providing for himself: he lives under conditions of scarcity and must weigh his productive options—should he hunt, fish, farm, or just pick fruit? How much effort should he put into constructing his dwelling? He is a producer and consumer with limited resources and finite supplies of energy; he must provide for his future self under the same constraints as apply to interpersonal production and consumption. He is semi-economic man, but recognizably engaging in economic behavior: he is coping with scarcity by standard economic means—capital, labor, costs, production and consumption. He doesn’t buy anything from anyone else, but that is not essential to economic activity: the key point is that he is responding to scarcity by adopting economic methods. He knows his supply and demand, and he pays the price in terms of labor (he pays for desire satisfaction in the currency of labor, mental and physical). The basic elements are there. 
In particular, he must always be conscious of opportunity costs, just like any spender of money in a department store; he is always trying to get the best bang for his laborious buck. He plans ways to maximize his utilities given the prevailing scarcities. He may come to the conclusion that hunting for meat is just not economical given the realities of the chase; he is better off fishing or fruit-picking. The OED defines “economics” as “the branch of knowledge concerned with the production, consumption, and transfer of wealth”; by that definition the solitary individual can be classified as part of the subject-matter of economics (he even transfers wealth to his future self in the shape of stored food and re-usable tools). We should not let our conception of the economic be too tightly defined by reference to post-industrial money-centered economies; the basic concepts are more general, more primordial.

Inflation is not itself an essentially monetary phenomenon. Even in a bartering economy inflation can occur. A glut of apples will decrease the purchasing power of apples, as will the introduction of cheap labor into an economy (either wage labor or slave labor). You may find yourself having to do more to obtain less—work longer hours, skip vacations. It might be necessary to fight inflation by reducing the supply of apples or cheap labor (cf. printing less money). The same laws of supply and demand apply to money and to commodities. Banks don’t depend for their existence on money either: you could deposit your commodities in a bank too, earning interest on them, as the bank loans out deposits to interest-paying borrowers. There could be bank runs and financial crises (banks don’t have enough apples on deposit to pay for all the apple withdrawals). Money is just a contingent feature of economies. You could be taxed on your apple possessions and even on your labor resources (e.g., everyone has to pay a flat tax by working on the roads a few hours a week). A society could teach economics to students without even mentioning money (they don’t use the stuff); money is just one form that economic action takes. Economics is scarcity science not money science. When the singer sings that all he wants is money (“Money don’t get everything it’s true, but what it don’t get I can’t use”), he misspeaks; what he means is that he wants the opposite of scarcity—he wants not to be short of what he wants. Poverty is scarcity, desire dissatisfaction (“I can’t get no satisfaction” is more accurate), not a lack of funds.

There is another verbal deformation endemic to the way we talk economically: the division into consumer and producer. You would think there are two classes of people in an economy—those who produce (capitalists, manufacturers, entrepreneurs) and those who consume (customers, buyers, clients). But this is a misconception: everyone is both a producer and a consumer in the very act of economic exchange. First, note that sellers are not always makers of products (concrete consumer goods); people also sell their labor, their expertise, their talent. Philosophy professors are producers, as are actors and doctors and lawyers (et al.). Nor is all consumption literal consumption (food being the paradigm): we “consume” lectures, pop songs, dental treatment, massages. We might better speak of “makers” and “takers”. Then we can say that every act of making is an act of taking, and vice versa. When you give me something I also give you something: you sell to me, but I also sell to you—in that very act. For example, I go to the supermarket to buy groceries: I buy from them, but they also buy from me—they buy my money with their foodstuffs. In exchange for my money, they give me food: they bought my money with their food. My money represents my labor, so they are buying the fruits of my labor with the fruits of their labor. They are consumers of my goods (in the form of money) as I am a consumer of their goods. The relationship is completely symmetrical. It is not that I am the passive (idle) consumer while they are the active (hardworking) producer; they are also the passive consumer of my money, in which I was actively productive. Everyone is simultaneously producer and consumer. I could go into a supermarket and say, “Would you like to purchase some of my money with your food items?” and not speak amiss. It is just a convention that we speak as we do, but it masks the realities of economic life. 
The relationship is not like speaker and hearer; in economic exchange every act is both an act of consumption and an act of production. The supermarket consumes my money in the very act of my consuming its food. One role entails the other. The two roles are indissoluble. So, it would be quite wrong to say that economies are divided into producers and consumers. Again, the model of industrial economies has too great a hold on the way we understand economic reality; and the institution of money disguises the nature of economic exchange. If I went into the supermarket ready to give philosophical lectures for food, that would convey the realities more clearly; the medium of money is incidental to the proceedings.

Economic reality should really be conceived as split-level. At the level of logical form, so to speak, it is all about scarcity, desire, supply, demand, markets, cost (actual and opportunity), production and consumption, conserving and using, materials and labor; but at the level of existing speech, as it were, it superimposes money, high-street banks, stocks and shares, exports and imports, bosses and workers, profits and taxes—all the apparatus of modern economies. I have tried to sort out the essential and foundational from the contingent and superimposed. Of course, it is worthwhile to study contemporary encrustations, but also worthwhile to distill out the underlying realities. Apart from anything else, it makes the subject more interesting. The science of scarcity need not be the “dismal science”.[1] Current expositions make it sound dry and dull, forbiddingly technical, remote from the human condition; but viewed more philosophically it connects with human life at a more visceral level. We live in a world of scarcity, a world brimming with unsatisfied desire. That is the basic subject-matter of economics, scientific and philosophical.

[1] Thomas Carlyle didn’t mean by this phrase that economics is dismal qua science (though that is often what it means to people today); he meant that it contains depressing truths about the human condition. I would describe it as a “pitiful science” in that it evokes pity for the life of man (and other animals) simply because scarcity is a terrible burden. Its most immediate impact is hunger. Hunger is the central fact around which economics is built.

Bounds of Space

The Kant-Strawson thesis is that all possible experience is spatial in character (Strawson calls it the “spatiality thesis”). That is, all appearances are spatial appearances—of extended things existing in an ordered unified Euclidian space separate from the mind. This is how experience makes things seem, even if they are not objectively (noumenally) that way. Then the claim is that we can make nothing else “intelligible to ourselves”—these are the bounds of sense (as opposed to nonsense). Nothing else is even meaningful (to us). This is a bad way to put the point: we can’t make the bat’s sonar experience “intelligible to ourselves” but that doesn’t mean that it (bat experience) is intrinsically unintelligible. It is merely a point about the limits of our imagination and hence knowledge, not about what is logically or conceptually or metaphysically possible. Given that our experience is always spatial, we might well not be able to understand any other form of experience, but it doesn’t follow that any such alien experience is impossible. And that is the philosophically interesting Kantian thesis, not the thesis that we are imaginatively limited in certain ways. The latter is a thesis about our cognitive powers not about the metaphysics of experience. So, let’s drop the offending formulation and speak simply of what is really possible, whether we can comprehend it or not. Is the spatiality thesis then true? There are two basic questions: (a) whether every instance of sense experience involves spatial representation, and (b) whether every aspect of an instance of experience is spatial in character. That is: does every experience have some spatial content, and is every aspect of experiential content spatial? The strongest Kant-Strawson thesis would be that every logically possible experience has spatial content and that every aspect of experiential representation is spatial. 
A weaker thesis would be that all experiences are spatial in some respect but that in other respects they are not spatial: for example, visual experience has spatial content (lines, volumes, shapes) but it also represents color, which is not itself a spatial attribute (I will come back to this).

It seems hard to deny that ordinary human visual experience has spatial content (surely the sense that Kant and Strawson were focussing on). There are, however, marginal cases that might provoke doubt, as with sudden flashes of light generated by the brain from within its own depths, or the kind of sensation we have when our eyes are closed in the dark. Could alien perceivers experience such visual sensations more systematically—a non-spatial world of formless color? Might the first color experiences in the womb represent color non-spatially? Certainly, they might not contain the full range of spatial attributes common to adult visual experience. The question seems debatable; our visual imagination of color seems relatively free of spatial ingredients, so maybe it would be possible to have a form of visual sense experience that proceeds without spatial representations, or has very attenuated ones. Anyway, the real challenge arises from the other human senses, not to mention animal senses that we don’t possess—particularly, smell, taste, and hearing (or electrical and magnetic senses in certain animals). Consider perception of pitch and sound intervals, of sweet and sour tastes, of fragrant and noxious smells: where is the space in all this? It may be that spatial concepts intrude on these sense modalities from elsewhere, but it is hard to maintain that they are intrinsically spatial. Couldn’t there be a being that experienced sounds, tastes, and smells but had no perception as of things extended in space—a space-blind perceiver? Even if the external stimulus was a spatial object, it wouldn’t follow that it was perceived as such. This kind of perception is really nothing like the seeing of extended objects with shapes and sizes. Vision and touch are space-oriented senses, but not so hearing, taste, and smell. Spatial ingredients are here contingent and adventitious. So, the Kant-Strawson thesis looks implausible as applied to all (actual and conceivable) senses. 
It is quite easy to “make intelligible to ourselves” the possibility of non-spatial experience; we have such experiences all the time, if not in unadulterated form. The core of the experience is space-neutral, space-oblivious. Size and shape are irrelevant, unrepresented.

A more difficult question concerns whether visual and tactile experience is wholly spatial. I will focus on the case of color. Color is certainly experienced as extended: it comes in patches and volumes. But is it a spatial attribute like shape and size? Nothing can be inferred about the space an object takes up from knowledge of its color. If an object is spherical, we can infer that it takes up a spherical quantity of space; but if it is red, we thereby know nothing about its spatial configuration—it doesn’t occupy a “red amount” of space. To be red is not to have a specific spatial attribute, unlike being spherical; it is not itself a spatial determination (to use Kant’s term). This is why it is not studied in geometry: it isn’t a type of figure or form; there are circles and rectangles but not “reds” and “greens” (how many sides do they have?). The concept of angle does not apply to colors. Colors are not modes of extension though they are distributed over extended objects. Perhaps this is not surprising given that colors are projected by the mind: for the mind is not itself an extended geometrical object. Somehow the mind spreads color on objects, but what it spreads is not a mode of space; it’s a bit like sensing a smell from every point of an object’s surface. There is no color already in objective things along with their spatial attributes; it is an imposition from outside. So, the complete spatiality of the objective physical world does not apply to color, which is a subjective contribution. Much the same point could be made about touch and warmth: warmth isn’t an objective spatial attribute but an imposed subjective projection. If this is right, then color and warmth are not themselves spatial features of things that enter into our perceptions of them; so, not every aspect of visual and tactile experience is spatial in nature (also consider brightness). 
The appearances are not exhausted by their spatial content; they have a different kind of content in addition to the spatial.

It might be said that this does not contradict the spatiality thesis, and that is perfectly correct. The thesis never maintained that only spatial content constitutes the appearances, just that it occurs in every experience. But it does allow us to formulate a new thesis that complicates the picture: we can say that visual experience necessarily incorporates both spatial and non-spatial content, given that color is essential to visual experience. It is true that color doesn’t occur in all sense experience, as space is alleged to, but it does occur in all visual experience, so it is a necessary visual universal. Accordingly, space is not as exceptional as Kant and Strawson make it sound, especially given that it doesn’t apply to all the senses. There isn’t a sharp opposition between space and other attributes of the kind alleged by the spatiality thesis; there is just what is more common and less common. It isn’t that space is the very “form of sensibility” while color is mere local variation with no necessity of its own. Space has no especially unique status among perceived qualities. Different aspects of experiential content are useful to the perceiver as ways of representing the world for various biological reasons, space being one of them; but space is not the real metaphysical essence of experience, the sovereign sine qua non. For some creatures, smell and sound might be the chief engines of survival, with space a distant second (living in the dark will not favor vision).

Strawson sometimes weakens the spatiality thesis to say only that an analogue of space is a necessary feature of all experience. This is very vague and open to accusations of vacuity, but it is a wise move on his part. It is too intellectualist to accommodate simple perceivers, and the emphasis on spatial concepts as constitutive of sensory content adds to that fault. Even when the thesis is restricted to human perceivers it gets things wrong because of statistically unusual humans—infants, the congenitally blind, those with certain sorts of brain damage. Perception is a lot more flexible and multifarious than some philosophers have allowed—a lot more independent of Euclid, Newton, and the Kantian Categories. How we think of the world in our abstract scientific theories is not the best guide to the way animals perceive it in their daily lives. To be sure, it is useful to perceive spatial relations, but many other things are also useful to perceive; and we don’t perceive space in the manner of a metaphysician. Perception is not Newtonian.[1]

[1] Was Kant so enamored of Newton’s physics that he wanted absolute Newtonian space to inhabit the human soul? Wouldn’t this bring the soul closer to God (infinite, absolute, immaculate)? Smell and taste don’t seem this elevated. We might think of infinite absolute space as God in nature, infiltrating the soul of man. So Kant may have dreamed. Here theology, physics, and metaphysics meet.

Bounds of Sense

Quine once described Strawson as applying his “limpid vernacular” to the technicalities of logic (in a review of Strawson’s Introduction to Logical Theory). One might hope that he would do the same in exegesis of Kant in The Bounds of Sense. However, in that work we are treated to such tortuous locutions as “necessary conditions of the possibility of any experience of objective reality such as we can render intelligible to ourselves” (119): why not simply “necessary conditions of experience” and “intelligible”? We seek the necessary conditions of experience (adding “the possibility of” is redundant) and we want to know what kinds of experience are intelligible (whether we can “render” them “intelligible to ourselves” is another question, depending on our powers of self-directed persuasion). Further, what is meant by “experience” here? Strawson regularly alternates this word with “empirical knowledge”, including scientific knowledge; but these are different things, one perceptual, the other conceptual-propositional-cognitive (as in knowledge of scientific theories). It seems clear that he is mainly thinking of visual experience as delivered by adult human eyes, but is generalizing beyond that domain. Does he wish to include emotional experience, or olfactory, or ethical, or imaginative, or experience of pain and pleasure? The doctrine of the necessity of spatiotemporal content is clearly more convincing for the visual sense than these other types of experience (especially the spatial component), so we shouldn’t be lulled into accepting a perfectly general thesis based on one instance of it. And what kind of spatial content is deemed essential to experience as such—extension, ordering, continuity, dimensionality, unity, objectivity, infinity? Some experience might be weakly spatial (smell and taste) while other experience is more strongly spatial (looking into the distance on a bright clear day). 
Imaginative experience prescinds from space considerably, dispensing with spatial relations to other objects. Emotions have little to do with space compared to normal binocular vision. And some forms of vision are more spatially rich than other forms—think of the etiolated visual experience of closed eyes in the dark. The question is a lot messier than Strawson allows, much less clearly defined. Could we ask the same question equally of sensation, perception, sentience, appearance, seeming, consciousness? Might we not get different answers depending on what term we choose? The term “experience” is vague and general, so we don’t know quite what Strawson (channeling Kant) is considering. Is memory included—and what kind of memory? Is mathematical reasoning included? What about logical “experience”? We need more limpid vernacular to tie the question down, more ordinary language philosophy.

About one thing Strawson is crystal clear: Kant thinks that reality itself is not spatiotemporal and Strawson himself rejects that claim. The phenomenal world is deemed spatiotemporal in its essence, but the noumenal world is non-spatiotemporal in its essence, according to Kant. This doctrine is hard to take seriously, as Strawson indicates. How could Kant know this given that (as he thinks) we have no access to the nature of the reality that exists outside our minds? How can we use our sense experience to navigate the objective world if their essences are so different—doesn’t there have to be at least some kind of correlation? Why would we be designed (by God or nature) to represent reality so faultily? What possible reason could be given for removing things in themselves from space and time? The idea that space and time are “in us” but only in us is unmotivated, bizarre, and preposterous; and certainly not required by the Kantian apparatus of phenomenal space and time (intuitions, sensibility, the understanding, the categories, etc.). It may be that phenomenal space and noumenal space are not the same (Euclidian and non-Euclidian, say), but there has to be some veridical relation between them; denying this is gratuitous and disastrous. I would say that concrete, causal, law-governed reality is clearly spatiotemporal, necessarily so—we can make nothing else “intelligible to ourselves”. That is indeed why sense experience is spatiotemporally imbued (to the extent that it is): this is just a scientific fact, a fact of biology and evolution, of the body and brain. Animals experience the world spatiotemporally because that is the real nature of the world in which they have to survive.

So, we can say, lamely but limpidly, that objective reality is necessarily spatiotemporal and that sense experience is variously and to some degree spatiotemporal. There is no simple binary opposition here: animal sentience is spatiotemporal in many ways and to different degrees (possibly going down to zero). But what about language—meaning, linguistic sense? Curiously, Strawson says little about this in The Bounds of Sense (despite the pre-existence of Individuals). The answer again is mixed and unsystematic, even more so than in the case of sentience. True, we often speak of extended objects in space standing in spatial relations within a unified and ordered spatial manifold (what we call Space). But we also speak of things that are ambiguously and problematically related to space: states of consciousness, numbers, values. Reference is not a purely spatial act. Nor is syntax or grammar best defined in spatial terms. Sounds are not, in themselves and essentially, extended things. Senses are not laid out in space. Language has one foot in space, so to speak, but it also dallies with the non-spatial. Spatial reductionism is a misguided metaphysics. The real is not coterminous with the extended. It is certainly not a necessary conceptual truth. Language is not, then, subject to Kantian requirements regarding space, even phenomenally. The Kantian project, pushed to extremes, is really an exercise in hyperbole, in which Strawson colludes (as befits an interpreter) but to which he does not wholly succumb. The idea that space is the general form of all our representations is a philosophical exaggeration, like many philosophical theories.[1]

[1] As to time, from the fact that all mental acts occur in time it doesn’t follow that they are of time—that time is an aspect of their content. All events occur in time, but it would be strange to say that they all represent time. It is also misleading to employ the term “spatiotemporal” uncritically: time and space are really very different things, and what holds of time might well not hold of space. Roughly speaking, time is more all-embracing than space. Not everything is space-like and not all mental representation is as of space. Certainly, we cannot derive the necessity of spatial content from the mere existence of the particular-general distinction, as Kant hoped.

Anticipations

Perusing a recent book on the cognitive psychology of number (Number Concepts by Richard Samuels and Eric Snyder), I was put in mind of my psychology M.A. thesis, entitled Empiricism and Nativism in Language and Mathematics, submitted in 1972 to Manchester University (when I was 22). In that thesis I brought together psychology, linguistics, and philosophy, arguing for a nativist position on the acquisition of mathematical knowledge. In particular, I applied Chomsky’s methodology and theoretical framework to the problem of mathematical knowledge acquisition. At the time there was nothing like this in the psychological literature, and I was quite conscious of the fact that I was doing something new and controversial, especially in adopting an interdisciplinary perspective. Indeed, I encountered some resistance to undertaking the project from the more orthodox members of the psychology department (nearly all of them)—what was I doing importing philosophy into psychology? I argued that it was necessary, in order to account for the acquisition of mathematical knowledge, to begin with an adequate analysis of the nature of mathematical truth, as Chomsky had argued that the same procedure was necessary in accounting for the acquisition of language. In effect, we need a metaphysics of number before we can frame theories of how number concepts are acquired—as we need an adequate theory of grammar before we can frame realistic theories of the child’s acquisition of language. We need a theory of the objects of knowledge before developing a theory of knowledge of those objects. Thus, an interdisciplinary perspective was required instead of the application of some general “learning theory”. Anyway, as I say, I was reminded of my thesis by reading a contemporary work in this area of psychology. And then it hit me: I invented cognitive science! 
I didn’t know it at the time—the term did not even exist back then—but the general outlines of the research program were clearly contained in my thesis. Specifically, the integration of psychology with other disciplines—not just brain science but philosophy of mathematics (along with linguistics). Nothing of my thesis was ever published, though my supervisor, Professor John Cohen, made some efforts to interest a publisher (no dice). So, I missed my chance to be hailed as the originator of cognitive science (of course, there were other straws in the wind). My thesis really was a combination of psychology and philosophy, with Chomsky-style linguistics taken as model.

I also read recently Michael Dummett’s book Origins of Analytical Philosophy (1993), which undertakes to compare Frege and Husserl as founders of twentieth century philosophy. Dummett is interested in the fact that these two philosophers had convergent concerns and yet gave rise to divergent schools of thought. This put me in mind of my first published article, entitled “Mach and Husserl”, in the British Journal of Phenomenology (1972). The article was based on my undergraduate dissertation, written while I was a psychology student; the editor of the journal, Wolfe Mays, was my teacher and suggested publishing it. In it I compared the two philosophers, noting their clear similarities but divergent offspring. Mach was an early positivist and devotee of “sensations”, while Husserl founded phenomenology and was a devotee of consciousness and its intentional acts. Yet the former gave rise to positivist eliminative behaviorism while the latter spawned existentialism and the centrality of the conscious subject. Dummett says nothing at all about Mach in his book, though Husserl refers to him approvingly. So, it seems that we were both interested in the early days of twentieth century philosophy and Husserl’s role in forming it, and in the divergence that ensued from similar beginnings. I wrote my article over twenty years before Dummett wrote his book and with a very similar aim in mind (except my focus was more on the history of psychology). In a certain sense, then, I anticipated him, though we discussed different personnel. I think, in fact, that Mach was a good deal closer to Husserl than Frege was, and arguably had a bigger impact on the course of twentieth century philosophy than Frege (he led to logical positivism). We find no analogue of Husserl’s preoccupation with consciousness in Frege, while Mach was clearly heavily into consciousness. 
I would say myself that the three of them were the principal architects of twentieth century philosophy, with a little help from Russell and Wittgenstein down the road.

Let me observe that when I applied to Oxford to study philosophy (in 1972), having already written my M.A. thesis and published my Husserl article, it was held against me (by R.M. Hare) that I had done so, these being deemed not fit subjects for a philosophy graduate student at Oxford to be interested in. This was a somewhat narrow and shortsighted decision, if I may be forgiven for saying so—and I was interested in more conventional Oxford-type topics too. After all, I had invented cognitive science and anticipated one of Oxford’s most celebrated philosophers before being admitted to the B.Phil.! Oh well. I did win the John Locke Prize a year or so later, though, so it all worked out in the end I suppose.[1]         

[1] In retrospect it all seems pretty hilarious, though it was scary at the time. At present I can’t even find my M.A. thesis and I don’t think I have a copy of my 1972 article.


Message from Rebecca Goldstein

Rebecca gave me permission to publish this.

 
One of the bright spots in these bleak days gets delivered to me regularly in Colin McGinn’s blog: brief and beautifully composed philosophical pieces on an astonishingly wide number of topics, many of which, I’m pretty sure, have never before been considered from a philosophical point of view. From the most technically analytic to the most expansively existential, he has something original to say. Colin McGinn is roaming freely in philosophical terrain, and it’s really something to watch.  

The Making of a Philosopher (Part Two)

The following is a sequel of sorts to my The Making of a Philosopher (2002). Like that work, this is to be an intellectual memoir, not a marital, medical, musical, or muscular one—a memoir of the mind. It’s about what has gone on in my head.

I originally applied to university to study economics. This seemed like a practical subject, destined to provide employment, and I was already taking an A-level in it (for which I subsequently obtained an A). My strong subjects in school were mathematics and English (not too much memorization), and economics combined the two nicely. I might easily have become a professional economist (I still take an interest in the subject). But I happened to read some Freud and found it fascinating, so I switched to psychology in my applications. This subject too would lead to gainful employment, possibly in the educational field (I had no thoughts of an academic career). I was trying to be sensible, but not bored; after all, it is your whole life we are talking about. This occurred around 1968, a momentous year on the world stage. I therefore studied psychology at Manchester University, obtaining my degree in 1971 (B.A., First Class), followed by an M.A. in psychology in 1972. Philosophy formed a small part of my undergraduate degree: an introductory course on Plato and Sartre and a history and philosophy of science course. I also did some independent reading in philosophy, but nothing like what a student of philosophy might undertake; I was woefully undereducated in that regard. Nevertheless, I ended up studying philosophy at Oxford on the B.Phil. in 1972 (long story, recounted in my aforementioned book). That was a considerable challenge, because everyone else on the course had a substantial (and exceptional) undergraduate education in philosophy, of a kind alien to my own undergraduate acquaintance with the subject (Husserl and Adolf Grünbaum mainly). I had a lot of catching up to do, to put it mildly. I am surprised I came out the other end in one piece.

In 1974 I began my first philosophy job at University College London, after a mere two years of studying philosophy (four years of psychology before that). I didn’t teach philosophy of mind and made no use of my two degrees in psychology (including a good deal of experimental psychology). I mainly taught philosophical logic and philosophy of language (my first lecture course was on truth). I was very conscious of the fact that my philosophical education was patchy, embarrassingly so, and that I had never had the chance to do any serious research in philosophy; I could really have used a couple of years on a JRF or something similar. From then on, I was on the academic treadmill: tutorials, lectures, committees, writing for the journals, book reviewing—the usual routine. I never had much time to immerse myself more widely and deeply in philosophy, though I tried as best I could. And so it continued for the next 38 years! I got through my career, but always going from pillar to post, always rushed, pressured, tired, anxious, barely managing to keep my head above water. I never had the opportunity to just let my mind go where it wanted to go, read whatever I wanted to read, write whatever I felt like writing, think about whatever I liked. I never had that kind of philosophical leisure. I suppose I could say that I had no philosophical freedom. I never had that couple of years to develop my philosophical mind under conditions of unimpeded reflection. I got used to it, but it always grated, rankled, irritated. I imagine it must be much the same for many people: not enough time, not enough energy, too many obligations.

Then I retired (2013). Everything suddenly changed. The pressure was off. The treadmill had been discarded. No more teaching, no more department work, precious few invitations. Each day was a free day. The year ahead was not mapped out by the demands of a university schedule. No more breaking off a train of thought because a lecture had to be delivered the next day. The immediate result was an uptick of energy and concentration: no more teaching fatigue, no more interruptions, no more having to show up for meetings of one kind or another (supervisions, office hours, department meetings, etc.). My time was my own. Let me repeat that, because it’s important: My time was my own. I could do with it whatever my heart desired; I was subject to no temporal demands (Do this! Do that!). I was thus able to immerse myself in philosophical thinking, reading, and writing without external impositions—for the first time in my life (I’m not counting childhood). This produced a qualitative change in my state of mind, my philosophical consciousness, my very existence. I could read all the things I never had time to read, think without distraction for days on end, weeks, months, years. It has been a kind of bliss, foreign to my previous existence, a rebirth of sorts. And not only philosophy: I could read all the literature and science I ever wanted to read, which also contributes to one’s philosophical development. Writing becomes a pleasure not a torment, because there isn’t that nagging feeling that you will have to break off soon in order to fulfill your professional duties. You don’t have to quit in mid-sentence, mid-thought. Can you imagine? Being a professor uses up a lot of energy—have you noticed that?—and this energy could be deployed in other pursuits. To retire is to be reborn (but don’t leave it too late). I also don’t feel that I have to sacrifice other aspects of my life to the academic treadmill, including personal relationships (not to mention sport, music, etc.). 
Apart from anything else, life becomes a lot more enjoyable.

But the main point I want to make, reporting on my own case (I am still a psychologist, remember), is that in this phase of my life I have achieved a degree of breadth and depth in philosophy that I would never otherwise have achieved. I would even say that I have become over the last ten years a different kind of philosopher. I wish I could characterize this exactly; it has to do with gaining a larger perspective, an ease of thought, a facility of expression (writing philosophy well takes years, decades, of effort). I can just see further. So, I think of this phase of my mental life as a new philosophical life; I am not the same person philosophically. There was a time when I was a philosophical novice, a time when I was an apprentice philosopher, then a time of professional maturity, and now a time not of advanced age or twinkly wisdom but of fresh growth, of new beginnings, of excitement and exhilaration. I could call it creative, but that doesn’t quite hit the nail on the head: it is more a matter of discovery, mastery, arrival. I could almost call a sequel to my old book The Making of a New Philosopher. It isn’t something I ever anticipated.

Of course, there is an irony in all this, a bitter irony one might say, on which I have no desire to dwell. I will put it as abstractly as possible. I am concerned with inner psychology, not external circumstances. First, and obviously, there is this blog, the fruit of innumerable hours of quietly intense lucubration. It must be a couple of thousand pages by now. This has been my preferred mode of philosophical expression during this period of personal renaissance—short, to the point, uncluttered, unbound. It is to be noticed that this material has not found its way into print, for several reasons I won’t go into. I feel fortunate that such a method of publication now exists, or else my inner world might not have made it into the outer world. I like what I have written, more so than before. But my inner world has been removed from the outer world of academic philosophy, producing a strange schism in my self-consciousness. It’s not exactly Socrates or Galileo or Russell; it is more a kind of intramural etiolation (here goes the abstraction). We might call it blank-slating, oblique erasure, identity removal. Of course, I still have good friends at the highest levels of philosophical (and other) achievement, whose names I will not mention (you can guess the reasons), so I am by no means cut off from professional contacts; and it’s true that my geographical location increases the degree of professional estrangement. Still, I feel as if nothing I say will ever be received as it once was. And, oddly enough, I don’t much care: my inner world has eclipsed my outer world—that academic carapace the professional professor carries around with him or her has been shed. My inner world has so expanded that it reaches to my subjective horizon. There has been a metamorphosis: I have become a different kind of being, curiously aloof, weirdly autonomous. It is a kind of brimming isolation, supercharged solitude.
The banal life of the professional academic has been abolished, to be replaced by a peculiar kind of originality—the reborn corpse, the retired youth, the liberated prisoner. I have a paradoxical duality, the flourishing failure. And I kind of like it. My intellectual world is a world of my own creation with little extraneous intrusion.[1]

[1] I do seek out, and receive, regular feedback from my philosophical friends, so it isn’t that I rely solely on my own judgment. I am not some quivering recluse stewing in his own juices, not a bit of it.


Empiricism, Memory, and Knowledge

In pre-Socratic times there was a school of thought known as “memorism” (or so I once dreamt). The principal doctrine of this school was that all knowledge is stored in memory: whenever you know something there was a past event that laid it down in memory, and knowledge is the recall of that something. Past event, storage, recall: these are the necessary and sufficient conditions of knowledge. The memorists opposed the orthodox school of thought (the “revelationists”) which held that all knowledge arises from direct communication with the gods: whenever you know something the gods are conveying it into your mind by speaking directly to you. The revelationists found the memorists impious in their reliance on memory, a human attribute, instead of the divine action of the gods, to whom we owe everything. There is no invocation of the past, no mysterious storing of information, no ecstatic recall experience, just good old-fashioned godly beneficence. Let the gods be praised! The two schools debated the matter at length, never coming to any firm resolution. The revelationists brought forward counterexamples: what about knowledge of the present and future—surely, we don’t know these things by memory? The memorists responded either by denying the existence of such knowledge (eliminative memorism) or by explaining present and future knowledge as special cases of memory knowledge (reductive memorism). Ingeniously, they contended that by the time knowledge is acquired the thing known is past and retained in memory, and that we only know the future by remembering the past (induction etc.). So, it was either memory naturalism or revelation supernaturalism. The memorists were gaining adherents as their anti-supernaturalism spread and flourished; the revelationists seemed mired in superstition and pseudo-explanation. The gods surely had other things to occupy their time, and anyway were not deemed “empirically verifiable”. 
Memorism seemed to cover the ground nicely, was rooted in everyday experience, and dispensed with ad hoc appeals to divinity. And indisputably, a vast amount of human knowledge simply is stored in memory—knowledge of history, geography, animal husbandry, who your friends and enemies are. The theory looked warranted by the plain facts of human psychology.

But a new school of thought was taking shape at around this time: this school sympathized with the memorists’ anti-supernaturalism but its members were troubled by apparent counterexamples to the central doctrine. What about our knowledge that everything is self-identical? Is that based on memory? Was there some past event that laid this information down in memory—say, seeing a bunch of self-identical things and making an inductive leap, or hearing it from a trusted teacher? We never seemed to have learned this truth—never made an observation of it or were taught it in school. Yet we knew it. And there is a lot more knowledge like this, as they quickly pointed out: all of geometry and arithmetic, logic, conceptual truths, ethical propositions, maybe even philosophical theories. No past event triggered and justified this type of knowledge; there was no experience of recall in entertaining it; and people never suffered from difficulties of recollection over it (“I know the answer to this, but I can’t quite bring it to mind”). Such knowledge simply doesn’t bear the marks of memory.[1] Has anyone ever said “I just can’t remember whether everything is self-identical or not”? It thus appears that our knowledge exists in two places in our mind: in memory and in some other faculty not itself a type of memory. When asked what this faculty consists in, the anti-memorists grew dark and pensive: for no name suggested itself and the question was obscure. Some declared it an irresoluble mystery, while others (the “eternalists”) boldly asserted that such knowledge exists in the mind eternally (there was no moment of acquisition) and is a primitive fact of human nature. If we want a name for it, we can call it “un-memory” or “pre-memory”—in any case, it isn’t a form of memory in any normal sense. We just have this knowledge; it exists in the deepest recesses of our soul. It was never put there by anyone, divine or mortal, nor was it the result of an interaction with external reality.
Remember, these were ancient times and evolution and genetic inheritance were unknown concepts. What this third school (they had no generally accepted name) was sure of was just that not all knowledge is memory knowledge. They opposed the idea that memory knowledge exhausts the whole of human knowledge; their positive theory, however, was still a work in progress. To be sure, much human knowledge is represented in memory, but there remains a substantial core of knowledge that is not so represented. Thus, there are really two types of knowledge; knowledge is not a homogeneous phenomenon. It has two species, two fundamental forms. They resisted the epistemological monism of the memorists.

Does all this ancient intellectual history remind you of anything? Is my dream a reflection of any actual history? Empiricism versus rationalism, of course: memorism is another version of empiricism and anti-memorism is the analogue of rationalism (or nativism). The memorist substitutes memory for experience: instead of saying that all knowledge derives from experience, he says that it is all dependent on memory. He thus sidesteps the standard problems with the concept of experience—whether it is conceptual or non-conceptual, given or interpreted, justificatory or epistemically idle, opaque or transparent—and replaces it with the concept of memory. Interactions with the environment lay down memories, which are later recalled; this is the source of all knowledge worthy of the name. Surely, something like this picture was implicit in traditional empiricism, since experiences had to be retained in memory in order to provide the basis of subsequent knowledge: you see something, remember it, and later recall it in an exercise of knowledge. In short, knowledge is sensory memory, according to empiricism. And rationalism is the denial of that: some knowledge (mathematics, etc.) is not memory knowledge of past interactions with the observable world; it has a different origin and modus operandi (which is hard to specify). Both empiricism and rationalism were opposed to the revelationists of their day; knowledge is not a gift from the gods (or God) but a fact of human natural psychology—an achievement of memory or a product of instinct. My pre-Socratic dream narrative thus mirrors the actual narrative of later philosophy (as Plato’s belief in innate knowledge anticipates Descartes and Leibniz). For some reason, the empiricists didn’t make memory salient, but it was hovering in the background: knowledge is experience remembered. 
The rationalists, by contrast, thought that not all knowledge consists in experience remembered—remembering past sensory interactions is not a part of knowledge arrived at by pure reason. The question being debated concerns the role of memory in knowledge, not so much the role of experience—whatever that might be exactly. We can even imagine a form of empiricism that eschews the concept of experience altogether, but still insists on the vital role of memory: perhaps there are just physical excitations of the sensory receptors (conscious experiences having been eliminated from the picture), and anyway we want to make room for subliminally acquired empirical knowledge that involves no conscious experience at all. Memory empiricism thus takes precedence over experience empiricism, theoretically speaking.

Putting traditional empiricism aside, the memory formulation affects our view of the distinction between a priori and a posteriori knowledge. We can now reformulate this distinction in the obvious way: a posteriori knowledge is knowledge based on memory, while a priori knowledge is knowledge not based on memory. This works pretty well: the role of individual and collective memory in the formation of scientific and commonsense knowledge is acknowledged, while its irrelevance to typical instances of a priori knowledge is highlighted (it is not a type of historical knowledge in the broad sense). If anything, this puts a priori knowledge in a better light, because it sounds like pure dogma to insist that all knowledge proceeds from memory—memory is just one way of storing information. The genes store information, in animals and humans, which is then transmitted to offspring (we call the result “instinct”), but such storage is not the faculty normally labeled “memory”. Memory is just one way of possessing information (as well as skills and predispositions); and there is little prima facie plausibility in the thesis that all knowledge (etc.) is contained in acquired memories. Memory is just one method of being informed, equipped, internally configured. And it sounds completely wrong to claim that logical and mathematical knowledge is arrived at by consulting one’s memory, as if it has a basis in historical records; logical and mathematical reasons are not time-bound in that way (“I remember seeing the law of non-contradiction for the first time when I was ten years old, and I have never forgotten it”). You don’t have to ransack your memory to decide if modus ponens is a sound logical rule, nor is there any danger of forgetting it. This kind of knowledge is completely different from knowledge of historical dates, or the route home, or the results of an experiment.
Thus, the distinction between a priori and a posteriori knowledge has a firm and clear foundation, which helps establish the sui generis character of the a priori. Strangely enough, the place of memory in relation to the traditional distinction has not been much recognized (if at all).

The empiricist, whether experiential or memorial, puts space and time at the center of knowledge: you can only make pertinent sensory observations at certain times and in certain places. The doctrine might, indeed, be so defined: all knowledge rests on suitable spatiotemporal proximity to the thing known. But the rationalist points to types of knowledge not restricted in this way: we don’t need to be near numbers at a certain time of day in order to know about them (or logical truths or meanings). This type of knowledge is not dependent on spatiotemporal proximity to the thing known—hence the adequacy of the armchair in arriving at such knowledge. Nor is the subject matter naturally conceived as existing in space and time (what is a number such that we could be near it?). Here we find a marked contrast between the two types of knowledge. We really should not expect that a priori knowledge could be subsumed under the a posteriori umbrella. The empiricist is guilty of overgeneralizing from properties of knowledge characteristic of only certain types of knowledge—those dependent on sensory experience or memory. Such knowledge is only so good as the experiences that (allegedly) ground it, or the memory capacities that make it possible; but rational knowledge is free of these kinds of limitations, being neither experiential nor memorial. How it does work, however, is far from clear. All we can say is that considerations of space and time make no difference to the availability of a priori knowledge.[2]

[1] I discuss this in Inborn Knowledge (2015), 44-46.

[2] I realize that I have been writing and thinking about a priori knowledge for over fifty years, and I never tire of it, difficult though it is. It’s one of the things that got me into philosophy in the first place. I think most discussions of it over the last century have been pretty feeble—exercises in problem avoidance and tendentious stipulation. The nature of a priori knowledge is one of the Big Mysteries of philosophy.


Affective Empiricism

The classic debate between empiricism and rationalism concerning the origins of the human mind focused on the cognitive aspects of the mind.[1] Descartes and Leibniz believed that some knowledge is innate, while Locke thought that all knowledge is acquired through the senses. But there is little to nothing on the affective aspects of the mind: Locke did not insist that emotions are acquired via the senses, and Descartes and Leibniz did not cite the emotions as instances of the rationalist thesis. I think I know why: it was common ground that affective nativism is true and affective empiricism is false. Not nativism about ideas of emotions but nativism about emotions themselves: we are born with these propensities, abilities, traits, dispositions. We may not feel emotions in the womb but our genes contain them in potential form, as they contain our anatomy, physiology, and other characteristics. We don’t learn to feel emotions—by observation of others, imitation, or instruction. In this respect we are like other animals: they too are not an affective tabula rasa. And there are good biological reasons for that: these are traits it is important to have for the sake of survival, so best not left to chance. How, indeed, could such traits be acquired by means of the senses—what might the mechanism be? How could they be “abstracted” from perceived objects? Maybe you could get the idea of the emotion from observing others, but could you get the emotion itself? That would be like acquiring four legs by gazing at quadrupeds. So, there is no real dispute about the origins of emotions: we are born to feel them; we don’t learn from experience to feel them (whatever that might mean). They are written into the DNA. But if that is so, isn’t it a black mark against empiricist thinking about the mind? For, if emotions are agreed to be innate, why shouldn’t “ideas” be—beliefs, knowledge, concepts, perceptual capacities? 
Why would the mind be hospitable to innate emotions but not to innate cognitions? Why would nativism be half-true? Why must empiricism be true of part of the mind but not of other parts? Whence the dogmatism? Granted, the environment can play a role in shaping and developing the emotions, but the preponderance must be owed to the native constitution of the organism. On this everyone seems to be agreed.[2]

What are the emotions we inherit along with our genes? It is customary to list six basic emotions: anger, fear, disgust, happiness, sadness, surprise. We might want to add lust and sexual passion to this list (also love), but let’s leave it at that. These are the emotions that lurk in our genes just waiting to see the light of day; they are the primitive elements of the periodic affective table. They may be combined into compounds such as despair or helplessness or envy or joy. They are no more learned than pain is: we don’t acquire the ability to feel pain by observing pain in others and somehow internalizing it by abstraction. Emotions are not taught but inherited–universal, spontaneous, part of human nature. But they are not cognition-independent: they have intentionality. We are afraid of things, angry at people, sad about situations; and these objects of emotion are specific—we are not afraid (say) of just anything but of a limited class of things. Prey animals are born being afraid of big cats not butterflies; they don’t learn to be afraid of being eaten by tigers (that would be too late for the learning to have any utility). So, emotions have representational content; and that means that such contents are also innate—antelopes are born knowing what tigers are, as well as being afraid of them. They have, in the old terminology, ideas of tigers that are bound up with their fear of tigers. This means that some ideas have to be innate if emotions are, which contradicts empiricism about ideas; nativism about emotions leads to nativism about emotion-relevant ideas. Not only that; emotions are correlated with a set of expectations about the world—about what kinds of things it contains. It contains things that are scary, angering, disgusting, desirable, happy, sad, or surprising: emotions thus carry with them a “world-view”. And this world-view is innate not acquired by experience, contrary to the empiricist theory of the cognitive mind. 
If so, what is to prevent other ideas from being innate? To put it simply, animals are born knowing a good deal about the external world just by virtue of being born with feelings about the external world: they come into the world cognitively prepared for it, not mentally blank, not blissfully ignorant. Emotion thus paves the way for a general nativism, and affective nativism is common ground between empiricists and rationalists. Of course, there are many other arguments for the nativist position, but it is instructive that emotions provide yet another argument, and not one easily avoided by the determined empiricist. A sentimentalist in ethics would be committed to nativism about ethical attitudes, since emotions are always fundamentally innate (emotivism thus implies ethical nativism). Emotions can, it is true, be shaped and modified by experience, but the basic repertoire of emotions is original to human psychology. They are instincts not cultural acquisitions. Language is an instinct too, as is perception, and also thought, but emotions are the primal instincts; they have been coded into animal genes for millions of years. Affective nativism is the basic form of nativism.

Some psychologists have claimed that all behavior is learned, whatever might be true of the inner aspects of the mind. But this position is obviously unstable: if emotions are innate, so must their associated behavioral expression be innate. The prey animal must have an innate predisposition to flee at the sight of a big cat, since fear elicits the flight response (that is the point of the emotion). Flight is like salivation—a reflexive inborn response to a stimulus. So, whole behavior patterns have an innate basis: anti-nativism about behavior is another false dogma of empiricism. The correct position is that nearly all of the mind (including behavior) is innate: emotions, desires, perception, concepts, many beliefs, anything a priori, the psychological faculties (memory, reason, mathematics, ethics, etc.).[3] This is really just biological common sense. All learning, properly so-called, is based on an innate unlearned system; we learn some things only because we don’t learn everything. The tool shed model is more psychologically realistic than the empty cabinet model.

[1] I discuss this debate in Inborn Knowledge (2015). The present paper furthers that discussion.

[2] See Hume’s note 2.9 in his Enquiry Concerning Human Understanding, in which he asserts, as against Locke, that self-love, resentment, and sexual passion are all innate, adding that “all impressions are innate”.

[3] I am stating the nativist position very strongly so as to rectify previous empiricist bias; of course, we must allow for some contribution from the environment. The point is that the foundation is innately fixed. What is true of the body is true of the mind: the mind is not originally a blank slate, as the body is not originally a piece of formless stuff. True, the mind has memory which stores acquired information, but the body too bears the marks of experience as it interacts with the world outside of it. The mind is not empty at birth, as the body is not shapeless at birth.
