Materialist Idealism

In Berkeley’s system there are two kinds of entity: ideas and spirits (finite and infinite). What we call “material objects” are not material at all but ideas in minds, our own and God’s. There are no material entities, only ideas and immaterial spirits. God is the basis of all reality and objects only exist as ideas in his mind. At the other extreme, we have a view like that of Hobbes: all entities are material, including human minds and even God. And then there are the dualist views that postulate two sorts of entity, material and immaterial, with ordinary objects as material and minds (human and divine) as immaterial. But one position in logical space does not appear to have been occupied or even contemplated: the position that ordinary (“material”) objects are ideas in minds, ultimately God’s mind, but that minds themselves, including God’s, are material. That is, tables, chairs, mountains, and atoms are “ideal” entities, having no existence outside of minds; but minds are material non-ideal entities—and that includes the mind of God. God is made of matter while so-called material objects are made of mind—they are merely ideas that exist in minds. Thus the objects of perception are all ideal, but the perceiving thing itself is material. Instead of the traditional theological position that God the creator is immaterial while his creation is material, we have the opposite position—that God is material and his creation is immaterial. The universe is immaterial but God its creator is material.

            This position does not seem incoherent; it simply adds materialism about spirits to Berkeley’s general theory. If finite spirits can be material—this is at least a logical option—why can’t the infinite spirit be material too? Why can’t God be made of matter? What is to rule this possibility out? Not God’s supposed infinity: why can’t there be an infinite material substance with infinitely many attributes? It is no more difficult to be infinite if you are material than if you are immaterial. Suppose we define matter as extension-in-motion: then God will consist of an infinite number of extended things engaging in infinite amounts of motion. Or at least he will consist in as much extension-in-motion as is necessary to account for his mighty powers. Only, it seems, a general hostility to grounding mind in matter could stand in the way of regarding God as made of matter (supervenient on matter, etc).  [1] If materialism in some form is coherent for finite minds, why is it not coherent for the infinite mind (whatever exactly this infinity comes to)? Couldn’t the Greek gods be material? Weren’t they?

            Is there any reason to look with favor on this inverted form of metaphysical dualism? There is if you are puzzled about how a mind could exist without any material support. If you find it hard to accept that finite minds (human and animal) could exist without a foundation in bodies, why is it easy to accept that God’s mind could exist without such support? If minds are necessarily ontologically dependent on matter, then the same dependence applies in the case of God’s mind (or the minds of gods). In order for God to have ideas in mind he needs a substrate of matter to keep the whole set-up going—he needs something material for his mind to supervene on. One argument for that position is that in order for his mind to be dynamically active it needs a foundation in motion, since only motion can ground change over time. How can God be the cause of motion if he is not material? Anyone who is suspicious of the notion of self-subsistent mentality will jib at the idea that God’s mind lacks any material being. Objects don’t need matter to sustain their being, since they can be identified with ideas, à la Berkeley; but spirits are not ideas and hence call for another form of sustenance—either as self-subsistent immaterial substances or as products of an underlying material substance. The former view is problematic but the latter appears perfectly coherent. Of course, God did not create matter on this view, since he requires matter in order to exist at all: but he didn’t create himself either according to traditional theology. God depends for his existence on matter, but matter is capable of conferring the powers that he needs (as the material universe before the big bang had the power to create the universe after the big bang).  [2] So there is no logical obstacle to identifying God with a material substance, so long as we are generous about the extent and powers of matter. And there is nothing to stop us positing non-trivial emergence with respect to the matter that composes God—he need not be reducible to matter.

            Thus there is a metaphysical position here that has not been recognized: materialist idealism. The world of perceived objects consists wholly of ideas in minds, but minds themselves have a material nature—up to and including the mind of God. God is a material thing but his creation is not material (except for the finite minds he has created). Everything is purely mental except minds.

 

Colin McGinn

  [1] Isn’t it a limitation on God to declare that he has no material attributes? Then his creation would be ontologically richer than he is (in one respect). And is it that he can’t have material attributes as a matter of necessity? But then his omnipotence is limited. Nor do we have to suppose that the substance of God is somehow humdrum if he is made of matter; matter could be a lot more exciting and mysterious than we suppose from our limited perspective.

  [2] If we think that God can perform miracles, then we can add a miraculous element to the matter that forms his substrate. Or else we can just abandon this particular piece of traditional theology.

A Disproof of God’s Existence

The traditional definition of God credits him with three attributes: moral perfection, omniscience, and omnipotence. These are supposed to be logically independent, with none entailing the others. But that is not obviously correct: How is moral perfection possible without omniscience and omnipotence? How is it possible to be omnipotent without also being omniscient? Isn’t omniscience a type of omnipotence—a power to see and know everything? In fact, can’t we simply define God in terms of omnipotence, since his other attributes flow from this? If God is omnipotent he must be morally perfect, since he has the power to be morally perfect, and why would he not exercise that power? And if he is omnipotent he must be omniscient, since omniscience is an epistemic power. At the least he has the power to be both morally perfect and all knowing, given that he is all-powerful. Thus omnipotence seems to be basic in the definition of God. God differs from lesser beings precisely in having powers they do not have—moral powers, epistemic powers, and other powers (causing earthquakes, healings, etc). God is replete with power, overflowing with it, by no means lacking in it. Any power there is, he has.

            But is that right? Does God have every power? He has the power to create and destroy universes, but does he have the power to sneeze or digest food or pick his nose? Those powers require possession of a body with a certain anatomy, but God has no such body, being disembodied. Does he have the power to decay or split or emit radiation? How could he have these powers given his immaterial nature? Does he have the power to come down with a cold or be bed-ridden or have the runs? Surely not: God has the powers that are proper to his divine nature, not any old powers that things of other natures have—animals, plants, atoms. God essentially lacks certain powers as a condition of being who he is. He has the powers of a god not of a worm or cactus plant. Everything must lack something in order to be something, i.e. to have a determinate nature.

            Does God have the moral powers of Satan or of a petty human sinner? Does he have the power to feel pleasure at the suffering of an innocent child? Does he have the power to relish the demotion of an office rival? Does he have the power to long for the death of an enemy? No: God has the power to feel only virtuous emotions and to perform only virtuous actions—he is incapable of petty jealousy or vindictive revenge. It is simply not in God’s nature to be subject to base feelings. Even to be capable of such feelings is alien to God’s nature. He exists beyond base emotions, being pure through and through. Certainly it would not make him more godlike to be capable of the lowest human failings. So it is wrong to say that God is by definition all-powerful; he is only powerful within the limits of his nature. With respect to the powers he has by that nature, he is limitlessly powerful, but he does not have every power that everything in the world has—for that he would have to be the world. But God stands apart from the world, having a different nature from that of the world; he is a being unto himself.

            If we want God to be literally all-powerful, we will end up with a Spinozistic pantheism, which is tantamount to the denial of God’s existence as traditionally conceived. But if we choose to restrict the powers that God has, then we can no longer define him as all-powerful. There cannot be a god that has all powers (and to the maximum degree): for such a god would not be a god but a strange hybrid of the mortal and the divine—a being of mixed nature, neither one thing nor the other. A sneezing, digesting, nose-picking god is no god. Nor can it be that God merely has the potential to do these things while never actually doing them: for first, to have even the potential is already to place God in the wrong ontological category; and second, if he were to exercise these powers that would immediately deprive him of his godlike status—he would become at best a god-human hybrid (like Jesus). If God were to pick his nose one day, he would thereby cease to be God.  [1] So having that power is no part of his nature.    

              The difficulty for God is to specify what kind of omnipotence he is supposed to possess. And the dilemma is obvious: either he has powers that do not properly belong to his nature as divine, or he lacks powers that other things possess, thus being less than all-powerful. The concept of an all-powerful being is actually, when you think about it, incoherent. To be a thing of a certain type is necessarily to have a limited range of powers, because powers and natures go hand in hand.

 

  [1] We should distinguish actually having certain powers from the ability to transform oneself into an entity with certain powers. Maybe God has the ability to transform himself into a worm at will, but that doesn’t imply that he now has the powers of a worm. And if he did so transform himself, he would have converted himself into a non-god, because no worm is a god (though a worm might once have been a god).

Induction About Induction

When we reason about the future we use induction. The sun has always been observed to rise, so we infer inductively that it will rise in the future. The skeptic questions the validity of such inferences. But can’t we apply inductive skepticism to induction itself? Is there any justification for believing that we will carry on inferring inductively in the future? True, we have used inductive inference in the past, but does this give us any ground for supposing that we will keep on using it in the future? Just as the sun may not rise tomorrow, so we may not infer that it will rise. We may cease to use induction as a rule of inference. From the fact that the human mind has used induction in the past it does not follow that it will use induction in the future. Maybe we will start to use counter-induction: instead of inferring that the sun will rise tomorrow, based on its behavior in the past, we will use that same behavior to infer that it will not rise tomorrow (yet it still stubbornly rises). This may not be a good inference, but it doesn’t follow that we couldn’t adopt it. Who is to say that we might not become completely irrational tomorrow? The future of the human mind is no more predictable from the past than the future of the physical world is. Hume says that induction is a human instinct, but by his own arguments it might change into another instinct tomorrow.
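A minimal sketch may make the contrast concrete (my illustration, not part of the original essay; the function names and toy data are assumptions for the example):

```python
# Illustrative sketch: an inductive rule and a counter-inductive rule applied to
# the same evidence. Both consume the record of past sunrises; they differ only
# in the direction in which they project it forward.

def inductive_prediction(past_risings):
    # Project the past pattern forward: if the sun has always risen, predict it will rise.
    return all(past_risings)

def counter_inductive_prediction(past_risings):
    # Project the opposite of the past pattern: if the sun has always risen, predict it will not.
    return not all(past_risings)

past_risings = [True] * 10_000  # the sun has been observed to rise every time

print(inductive_prediction(past_risings))          # True: "it will rise tomorrow"
print(counter_inductive_prediction(past_risings))  # False: "it will not rise tomorrow"
```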

            We can therefore apply inductive skepticism to inductive inference itself: we have only induction to justify our belief that we will carry on reasoning inductively—the mind might abruptly change its ways. It might be objected that this is wrong because the mind necessarily reasons inductively. The thought here would be that logic is not optional for anything deserving to be called a mind: to be a mind a thing has to obey the rules of logic. Might we stop using deductive logic tomorrow? Some say that a principle of charity implies that all minds must obey the same logic, i.e. accept the same rules of inference—and that includes our future minds as well as the minds of our contemporaries. This would have the result that we can know a priori that our minds will always use the same logical principles. And that would show that in one limited area at least inductive skepticism breaks down—we can know by reason alone (not by fallible induction) that our future minds will resemble our past minds. This would be analogous to showing that we can know that future material objects will always be extended, since it is in the very nature of material objects to be extended (the same is not true of the sun rising). If it is in the very nature of the human mind to use induction (a necessary truth about the human mind), then we can know that future human minds will always use induction.

            But that argument lacks cogency: is it really plausible that human minds necessarily follow standard inductive logic? People can become logical intuitionists, so why can’t they change their commitment to inductive logic, becoming counter-inductivists? All minds may have to use some logic perhaps, but it doesn’t follow that there is some logic they all have to use. Couldn’t we come across people who simply don’t reason inductively (a tribe of fanatical Popperians)? Maybe they would not last long if they disregarded the past as a guide to the future, but that is not to say that they are inconceivable. So our minds might change so as to cease to reason inductively; we can’t rule that possibility out on conceptual grounds. Thus we only have induction to justify our belief that we will carry on using induction. Inductive skepticism applies to induction itself: just as I can’t justify my belief that the sun will rise tomorrow, so I can’t justify my belief that I will think it will rise tomorrow. The sun might not rise and I might not believe that it will rise. So I don’t have a skepticism-proof reason to believe that I will make inductive mistakes in the situation in which the future doesn’t resemble the past. For by then I might have stopped supposing that the past is any guide to the future. I might have abandoned induction. For all I know, I might stop reasoning inductively two minutes from now, becoming a complete agnostic about the future. Induction certainly can’t persuade me otherwise.  [1]

 

  [1] We can construct a grue-type case for induction itself. Let “inductive*” mean “inductive up to time t and agnostic after t”: then what evidence do we have that we are not inductive*, as opposed to inductive? At some future time t we might all become counter-inductive. Human behavior is subject to inductive skepticism, including inductive behavior.

Truth, Goodness, and Beauty

The members of this hallowed platonic trinity are supposed to belong tightly together like a family. What I want to point out is that theories of these three things also form a family; in particular, theories of truth find counterparts in theories of goodness and beauty. There are analogies between the various theories of the members of the platonic trinity.

            In the case of truth we can distinguish five types of theory: the correspondence theory, the coherence theory, the pragmatic theory, the simple property theory, and the deflationary theory. The correspondence theory ventures an analysis of the concept of truth that treats it as a genuine substantive concept, while the other theories are all more or less suspicious of this approach—they resist the idea that truth is a (monadic) property with a hidden structure that can be revealed by analysis. They all suggest that truth is something less than we naively suppose—that it is not as robust as other properties are. This is particularly true of deflationist theories, which deny that truth is a property in good standing. Coherence theories treat truth as mere agreement among beliefs, not as an absolute property that could apply to a belief in isolation. Pragmatic theories regard truth as just a highfalutin way to talk about what is useful in belief or assertion; we should replace truth with utility. Simple property theories reject all attempts at informative definition but seek to retain the ordinary notion of truth, holding that truth is a primitive notion. Four out of the five theories deny that truth is a rugged full-bodied property equipped with a substantial nature.

            How goes it with goodness? We speak as if goodness were a real property of things—states of affairs, actions, and people–so it is a question what sort of property it is. Some theorists attempt to analyze the concept, specifying in what exactly goodness consists—obedience to the will of God, maximum utility, respect for the rights of persons, conformity to the categorical imperative, fidelity to the social contract, and so on. We are offered bi-conditionals of the form “x is good if and only if x is….”, where the blank is filled by a substantive analysis. This is what the property of goodness involves, its underlying constitution, what we implicitly grasp when we employ the concept (as knowledge is true justified belief, water is H2O, etc). We take goodness to be a hearty property in good standing and we try to say what its constituent elements are. We don’t entertain skepticism about the status of the word “good” as a description of the world (even if it is a supernatural world). Of course, there are disputes about what is the correct analysis, but it is accepted that there is an analysis—that the concept is a fit subject for analysis. Goodness is as much a real property as shape.

            By contrast, we have other approaches to goodness that are more or less suspicious of the straightforward approach just described. Thus we have different sorts of relativism that question the absoluteness of goodness: to be good is just to be accepted in a given society or group—it is what most people take to be good. What is good is what people agree to be good, and different groups can disagree on what they agree on. Goodness is a relational property: for something to be good is for it to be endorsed by a group of people who stand to each other in the relation of agreement. Goodness is relative to a society and relations of agreement are what make something good for that society. This is abstractly analogous to the coherence theory of truth: the truth of a proposition is relative to a set of beliefs, and these beliefs must stand in a relation of agreement with one another. If I call something good, I must stand ready to specify the society in which it is deemed good, and I must assume that that society agrees about the goodness of the thing in question; similarly, if I call something true, I must stand ready to specify the set of beliefs in which it is taken to be true, and I must assume that there is agreement between the belief in question and other beliefs.

            Then we have a kind of pragmatic theory of goodness: to be good is just to conduce to pleasure or well-being or preference satisfaction. We could get rid of the concept of goodness and simply talk about what is useful—what satisfies our goals. Goodness is a kind of instrumental or functional property: it is simply whatever leads to human flourishing. There is no “queer” property of goodness inhering in states of affairs; nor is it a matter of what people generally think is good: talk of goodness is just a misleading way to record the fact that some actions and policies get us what we want. What’s good is what works. In fact the pragmatic theory of truth and the pragmatic theory of goodness converge, since truth is also the property of working—getting us to where we want to be in life. Truth is usefulness, but so is goodness; so truth is a kind of goodness, i.e. a route to human wellbeing.

            We also have the simple property theory of goodness, associated with G.E. Moore: goodness is a primitive non-natural property that inheres in things but is not subject to analysis or conceptual breakdown. It is a very special property, one that is logically irreducible yet conceptually sophisticated. Just as truth is supposed by some theorists to be primitive, so goodness is regarded as primitive—it is what it is and no other thing. We have a basic intuitive grasp of these simple properties, rather like our grasp of colors; they form conceptual bedrock. Yet they characterize our highest level of thought and discourse, out of cognitive reach for animals, despite their inner simplicity. We perceive them with our minds (by intellectual intuition) as fundamental constituents of reality—as logically irreducible qualities of things.

            This kind of talk repels another type of philosopher who wants nothing to do with the metaphysics and epistemology of irreducible non-natural qualities. Such a philosopher favors a deflationary and dismissive attitude to goodness: there is just no such property as goodness. When we use the word “good” we are not describing things as having a certain property, whether substantive, relative, instrumental, or primitive: for we are not describing at all. Instead, we are expressing our emotions, performing acts of endorsement, issuing imperatives, or just mouthing off—hence the doctrines of emotivism, prescriptivism, nihilism, and the like. Just as “true” is not a descriptive word but a device we use to express agreement, so “good” is not a descriptive word but a device we use to make our attitudes known. When I say that a statement is true I don’t describe the statement as having some special sort of property; I merely express my agreement with it. And when I call an action good I don’t assign to it a special sort of evaluative property; I merely indicate my approval of the action. If saying “good” is like saying “hurrah!”, then saying “true” is like saying “indeed!”: neither act is logically predicative. In both cases the model of subject and predicate is spurned.

It is less easy to trace these analogies in the case of theories of beauty, probably because the subject has received less analytical attention, but it is not difficult to construct parallel theories of beauty to those we have discerned for truth and goodness. First, we can have substantive analyses of beauty—ranging from what reflects the gods, to the idea of harmonious wholes, to notions of symmetry and proportion. Second, we have relativist theories of beauty analogous to relativist theories of goodness: beauty is what a given society or group finds beautiful, and this in turn is a matter of convergence of aesthetic response within that community. Beauty is relative, as goodness is, and as truth is (according to the coherence theory of truth). Third, we have pragmatic theories of beauty: there is no property of beauty over and above what has utility, where utility is here understood to involve aesthetic pleasure—what works for the viewer in his or her aesthetic pleasure centers. If perceiving certain objects leads to aesthetic pleasure, that is all that can or should be meant by describing them as beautiful; it doesn’t matter how they are intrinsically or objectively—all that matters is that they promote pleasure. The value of beauty is a purely practical value. Fourth, we can regard beauty as a simple indefinable property: something has this property or not, as a matter of objective fact, and it can be perceived or not. This property is denoted by the word “beautiful”, as “true” denotes a primitive property, and ditto for “goodness”. Beauty consists in nothing but itself, though it may supervene on other properties (such as harmony and proportion); it is a fundamental feature of the perceived world, like color or shape, to which we are sensitive. Fifth, there will be some philosophers who recoil at such talk and who would prefer to rid the world of beauty as an objective quality: for them when we use the word “beautiful” we are merely expressing our visual and other preferences, or trying to sound superior, or just mouthing off—we are not talking about anything (except perhaps ourselves). We are not describing or asserting or fact-stating. Our talk of beauty is like our sighs of appreciation when we experience works of art or nature, or the happy humming that can result from hearing music we like.

Truth, goodness, and beauty should all be classified as normative concepts: we ought to believe what is true, we ought to promote what is good, and we ought to take pleasure in what is beautiful. No doubt that is why Plato grouped them together—they encapsulate what we should aspire to, seek, and foster. They constitute a large part of what gives value to human life. But in addition they each invite the kind of theorizing I have described—they each invite a certain pattern of theoretical options. There is no name for this pattern but we might venture, unimaginatively, “the TGB pattern”—the shape in the theoretical landscape traced by truth, goodness, and beauty. We can either try to give a substantive theory of these things, or we can lapse into a kind of coherence-based relativism, or we can go pragmatic, or we can insist on irreducibility, or we can abandon the whole model of descriptive predication. If anyone is inclined to hold one of these options for a single member of the trinity, she should ask herself whether she wants to take the same view for the other two members.  [1]

 

Colin McGinn      

  [1] If we could unite the three members under a common concept, we would be able to understand why the theoretical options should be shared. Thus suppose we postulate a genus of which truth, goodness, and beauty count as species—call it “worthiness” or “excellence”. Then there could be substantive analyses of this more abstract property (conformity to the mind of God, some sort of “organic unity”); relativist theories (excellence is what is deemed excellent by a suitable population); pragmatic theories (excellence is whatever is useful in obtaining our goals); simple property theories (excellence is an indefinable property that we have to accept as primitive); and deflationist theories (talk of excellence is just a way to vent one’s positive feelings or demonstrate solidarity). But this attempt at unification, though theoretically appealing, looks contrived; so we must rest content with three separate concepts that invite similar theoretical responses. 

Unity and Variety in Language

The idea of linguistic universals should not seem surprising if we consider that language is the expression or externalization of thought. Given that thought contains universals, and language expresses thought, language should contain universals. If there is an innately (genetically) determined language of thought, that language will contain universals, so learned public languages will naturally reflect the universals contained in the language of thought. Thought has the kind of biological universality found in perception and memory (as well as emotion and desire)—an innately fixed cognitive system. Thus thought always (in humans) involves a finite base of concepts and rules of combination, unbounded creativity of available thoughts, apparatus for identification and ascription, generality and specificity, logical entailment and reasoning, and basic ontological categories (space, time, object, cause, etc). The idea that investigating thinking subjects in alien cultures would reveal the complete absence of such features is both antecedently implausible and empirically unfounded—as it would be to expect massive variation for perception, memory, and emotion. And if spoken language is just the external expression of thought, reflecting its constitutive features, then we can expect that language will display corresponding features—as human languages clearly do. This is especially obvious if we accept that language is essentially a vehicle of thought—a symbolic system designed to aid thinking. If we think with language, then language will need to possess the shape of thought. Linguistic structure recapitulates cognitive structure. If we allow that thought precedes (spoken) language in human history, then language will inherit the properties of the cognitive system that preceded and triggered it. Hence there will be linguistic universals.

             This is the standard picture developed by Chomsky (though I have approached the matter from a somewhat different direction from Chomsky): an innate language faculty, universal to the human species, designed initially as a vehicle of thought. Adding an evolutionary perspective, such a universal innate language system is a useful adaptation, given the utility of language. So we would expect such a useful trait to be installed by natural selection and coded in the genes. It would be inefficient to design an organism that had to learn everything about language from scratch, given that it is useful to everyone; to lack knowledge of language at birth would put one at a disadvantage compared to those born knowing language. It is the same for perception and memory: given their evident universal utility it is better to be born with them than have to acquire them laboriously over time. Or if we consider birdsong and whale language, we would expect innately determined universals common to the species; we wouldn’t expect unlimited local variation and a prolonged learning period ab initio. We would expect instinct not culture, genes not environment, fixity not variety—just like bodily anatomy. We would expect hardwired brain-circuit universality not plastic individual variability, and that is precisely what we find.

            But this picture raises an awkward question: Why don’t we all speak the same language? If there are innate linguistic universals, why isn’t all of language universal? Wouldn’t it be much simpler to program a single language into the genes so that there was no need to learn a language after birth? Why do the genes make acquiring language so difficult for us? To be sure, children do it relatively quickly and without apparent effort, but why put them to the trouble at all, when it would be so much more convenient to install a single language at birth? Don’t say that it is because there are so many human languages and we don’t want to limit the child to a single language: the question is why there are so many languages to begin with. Why not streamline the whole process so that there is only one language that needs to be mastered? If we make the assumption that human language began in a single language many thousands of years ago, the question is why it diversified into the variety we see today. We don’t see anything comparable in birdsong and whale language—this wild proliferation of languages each unintelligible to anyone other than its native speakers. Wouldn’t any divergence from the original single language be punished by natural selection, given that divergence entails failure of communication? If someone starts speaking a language of his own invention, no one else will understand it, in which case speaking it is a waste of breath; better to stick with the language you know. It would be like a bird suddenly deciding to sing its mating song with a different melody and rhythm—that would not help in the mating game, and such a bird would soon be selected against. Just as no one today thinks it would be a good idea to start speaking an invented alien language to one’s fellow English speakers, so it wasn’t a good idea millennia ago to start spouting an invented language to one’s monolingual associates. The purpose of language (or one of its purposes) is communication, but that presupposes a shared language–so why did variety in languages emerge? Why isn’t every aspect of language like some aspects of language—innate, universal, and fixed? Why so much linguistic variety?

            Here someone might start to waffle on about cultural relativity, the Eskimos, human creativity, chance, historical contingency, complexity theory, and what not. But the question is: What is the use of linguistic variety? What biological purpose does it serve? It seems like a waste of time and energy, and it forces us to abandon the plausible universalism suggested by other aspects of language. Clearly linguistic facts do not need to be learned, local, and variable: so why are some of them this way? Where is the evolutionary payoff? Does it arise from some sort of inevitable propensity to error, like errors in gene copying? Is it that we humans are just incapable of sticking to the one language we have inherited? But birds and whales don’t seem to make such copying errors, botching the language of their ancestors. Is it that in the distant past some dunce did the equivalent of pronouncing “book” as “livre” and the error caught on? We need to find some benefit that attaches to linguistic variety, not just a regrettable tendency to get things wrong. Why can it be good not to speak the language of other people? Why would the genes favor someone with the ability to speak in a way that is unintelligible to others? Why can it be an advantage not to communicate?

            The answer I want to suggest is secrecy. The reason distinct human languages exist—the reason they evolved—is that it was advantageous to keep secrets from other people. If two people are plotting against a third person, it is helpful to speak in a language that third person cannot understand. Or if you are a member of a closely-knit tribe and are planning with others to engineer a power grab, it pays to speak a language impenetrable to others; it solves the problem of eavesdroppers. You can whisper or meet in secret too, but this is risky if people suspect that something is afoot–better to speak in a new language. Similarly, if you know where there is a good supply of food but you don’t want other members of your tribe to know about it, you might devise a language limited to your family to talk about such matters. Thus linguistic diversity arises from group secretiveness. It arises from facts of social psychology—facts with a clear rationale and evolutionary payoff. It isn’t just a matter of serendipity or human fallibility or novelty for its own sake. Each new language is a kind of secret code in the war of man against man. To put it differently, languages vary because of the fear of being found out. And if the secretiveness is mutual—they are plotting against you, as you are plotting against them—your languages will grow apart even more. There is even selective pressure to make the new language as impenetrable as possible to outsiders so as to prevent code breaking. If someone were incapable genetically of such linguistic novelty, they would be at a disadvantage compared to more linguistically versatile group members; they would be unable to take part so effectively in plots that require secrecy. It is useful to be able to speak a “private language”. So you need linguistic flexibility not fixity. If whales were always out to deceive each other and engage in inter-group warfare (from mild to life-threatening), we might expect some variety in whale language comparable to what we find in human languages; but the social psychology of whales is not like that. We humans started speaking in diverse tongues when we began living in large enough groups to encourage keeping secrets from each other (the family is not large enough because of overlap of genetic interests). To put it simply (but misleadingly): English differs from French because the English and the French were always at war.  [1]  More accurately, our ancestors began to diverge linguistically because they needed to keep secrets from each other in conditions of social conflict. No doubt this mechanism began very early in the history of language, long before anything like nations formed. It is evident enough that humans are a sly and deceptive species, prone to faction and feud, and linguistic divergence is one aspect of our propensity to dissemble.

            A consequence of not having a single fixed language common to all human societies is that it becomes necessary to learn each language. In order to be flexible the language faculty cannot be completely fixed innately. So there is a price to be paid for being able potentially to speak multiple languages—the labor of learning. Presumably the price is worth paying: it would be convenient to be born knowing a shared human language (as it might be, everyone is born speaking English), but the problem is that everyone would be stuck with that and so linguistic privacy would be impossible. Someone born knowing only one language and not capable of learning any other would be at a disadvantage compared to more adaptable speakers. We may imagine an original position in which people can speak only one language, and this language might be innately known (like birds and whales), but speakers who could branch out into inventing other languages had the advantage over less flexible speakers in the area of secrecy. They can speak in a secret code that baffles others. There is no comparable advantage to being able to perceive or think differently from other people, since these activities cannot be monitored by observers. But when it comes to speaking, observers can tell what you are saying if you share a language, so secrecy will require something that disguises the message (not so for hidden inner thoughts). Espionage requires camouflage. And don’t we see vestiges of this in children’s behavior today—as they contrive to keep their communications hidden from adults by using a “private language”? Sub-groups will often invent a dialect that is closed to outsiders, so that they can keep things to themselves. Slang achieves this purpose, as does jargon. We want to be intelligible to some but not to all—we want to be unintelligible to certain individuals, particularly our enemies. A specially constructed language can serve to make our message invisible to all but the select few (hence secret codes in war time). True, this language has to be acquired, which has its costs, but it is worth it considering that a universal language would be far too transparent for comfort. If human beings had no desire to plot and deceive—if they had no political secrets—they could happily speak a single universal language; but plotting and deceiving is the human way, like it or not. Hence we speak many languages and have to learn each of them (the part that distinguishes languages, not their universal features). By contrast, we do not think with distinct sets of concepts, analogous to different languages, and the language of thought is universal. Linguistic variety is really an anomaly biologically, but it has its biological rationale. It operates, in effect, as a form of camouflage.

            It is often said that the purpose of language is communication. That is not incorrect, except if it neglects the function of language as a vehicle of thought. But we should add that the function of human languages, in all their variety, is also to enable lack of communication—to exclude others from understanding everything one says, to camouflage one’s thoughts. What we seek is selective communication, including some and excluding others. Language makes thought public, but we often want our thoughts to be unknown to others—because we desire privacy. A language faculty that revealed our thoughts to any attentive listener would not serve all of our purposes. It would make us too transparent. We need to conceal as well as to reveal. Linguistic universals are useful, but linguistic particularity is also useful. Hence humans speak languages that are both unified and diversified—reflective of thought but also capable of concealing thought.

            If these suggestions are on the right lines, we can now say not just that human languages are both unitary and diverse but why they have both these characteristics. They are unitary because of their intimate connection to thought, which itself is unified; and they are diverse because of the social requirement of secrecy, which requires selective communication. If language were used for nothing except as an instrument of thought, we might expect it to be invariant across the human species, as well as innate and unalterable. But its social role in selective communication ensures that languages vary.  [2]

 

Colin McGinn  

  [1] Of course, I intend this as pithy exposition not literal historical fact: the French and English were speaking different languages well before they went to war with each other. Let me also add that many other factors no doubt contribute to the diversity of human languages; I am simply trying to get at the biological roots of the phenomenon.

  [2] It isn’t easy to think of other aspects of human culture that display this kind of deception-based diversity, language being our primary mode of communication, but we can cite the prevalence of culture-bound gestures such as varying handshakes and hand signals. Such gestures can serve the social purpose of inclusion and exclusion, and they often trade upon indecipherability to outsiders.

Knowledge of One’s Visual Field

The human visual field is limited and known by us to be limited. It is possible to establish its extent and shape by the strategic placement of stimuli: about 180 degrees and roundish (corresponding to the optics of the eye). We normally don’t pay much attention to it, but we can easily be brought to recognize that the visual field is limited and offers a roughly circular portal to the world—you just need to move your hand up and down and from side to side in front of your eyes to appreciate the limited extent of your visual field. But how exactly do you come to know that your field of vision is thus limited? Can you see those limits? Clearly not: you can’t see the border of your visual field—you have no sense datum of that border, no impression of it, no perception of its contours. The limits of your visual field lie outside your visual field. In order to see these limits you would have to see what lies beyond them, which is impossible. You can see the limits of things within your visual field, as well as their shape, as when you see a circular object; but the circularity of your visual field itself is not something that is visible to you. There is no round stimulus that your eye is responding to. The shape and limits of the visual field are not objects in the visual field; they condition what you see but they are not seen. So you don’t know the geometrical properties of your visual field by seeing these properties; it has an unseen geometry. Accordingly, your knowledge of your visual field’s geometry is not based on perception of it—as your knowledge of things within your visual field is based on perception of those things. This is not perception-based knowledge: your eyes are not responding to the extent and shape of the visual field. It is not as if the boundaries of the visual field are marked by a bright blue line that you can see. This is a kind of vanishing phantom boundary, though real enough anatomically and phenomenally.           

            How else might it be known? Could it be known by introspection? That seems quite wrong: you can introspect your visual experiences, but you can’t introspect the point at which they come to an end–you can’t introspect their disappearance. The limits of the visual field are no more introspectible than they are visible. Maybe you can introspect the experience of a limit, but not the limit of an experience. Your introspective faculty is no more responsive to the limits of the visual field than your visual faculty, even if it is responsive to the experiences that the visual faculty serves up. What would it even be to introspect the border between visual experience and its absence? Have you ever tried to do such a thing? How would you set about it?

            Do you know it by inference? Surely not: you are not aware of the limits of your visual field by inferring these limits from the fact that you see some things but not others. It is not that you know you can see what’s in front of you but not behind you, and you then form the hypothesis that your visual field must be limited. You can actually experience the limits by suitable placement of stimuli; it isn’t like inferring that you have an unconscious from what is evident to consciousness. It is not a scientific theory, capable of being disconfirmed. It is a phenomenological fact, though not one that appears as an ingredient in experience—like color or shape or texture. It is not listed among the primary and secondary qualities, simply because it is not a perceptible quality of anything; it transcends the perceptible (or the introspectible). Yet it is something that we can know immediately—it is part of our lived awareness. It isn’t a matter of guesswork or speculation, like the limits of objective space. It’s a datum—but not a datum of sense. We sense it but we don’t sense it.

            So we don’t know the shape and limits of the visual field either by perception or by introspection or by inference (and certainly not a priori, like mathematics). But we do have this knowledge. Therefore not all of human knowledge is based on these methods of knowing. How we do have this knowledge is not an easy question, but it is clear that we don’t have it in any of the traditionally recognized ways. Thus traditional epistemology is incomplete. It is a fact of our sensory psychology that the visual field has certain limits, and we are aware of this fact, but we don’t know it by any of the standard methods. Is it empirical or a posteriori? Those categories seem too crude to capture this kind of knowledge, based as they are on sensory knowledge of familiar types—such as seeing what lies within the visual field. But knowledge of the visual field is sui generis—not sensory exactly, and not introspective either (or “intuitive”). It is a bit like our knowledge of our own existence in not fitting into the traditional categories, but it is not to be assimilated to that kind of knowledge either. Familiar as it is, it fails to conform to standard epistemology. It demands its own specific epistemology.

 

Bacteria Epistemology

Epistemology has been shaped around the case of adult humans, as if these were the sole epistemic agents. Thus we find invoked such concepts as belief, judgment, reason, reflection, rationality, inference, ratiocination, justification, doubt, certainty, fallibility, and so on. But what about human children, infants, fetuses—don’t they also know things? How should we characterize their epistemic life? And what about the innumerable other species that have a claim to being epistemic agents—isn’t knowledge an extremely widespread phenomenon in the biological world? Think of the cognitive feats of birds and bees as they navigate and congregate. If you jib at applying the concept knowledge to these creatures—not that you should—at least accept that there is a wider natural kind here that needs to be acknowledged. You can use the word “cognize” to denote this wider natural kind if you like, but don’t try to deny that epistemology applies across a very wide domain; yet we philosophers seldom venture into non-human epistemology. I propose to take it to the limit and consider the epistemology of bacteria, which bears a striking resemblance in certain respects to human epistemology—but without the intellectualist concepts listed at the beginning. In bacteria we find proto-epistemology.

            Bacteria have receptors on their surface, which are sensitive to the surrounding chemistry of their environment. They also have a motor system (the flagella) that enables them to move toward or away from concentrations of chemicals, depending on whether the sensed chemicals are nutritious or toxic. They can navigate the gradient of chemical concentrations, enabling them to home in on food and avoid toxins. So they have a primitive sensory-motor system. Moreover, they can keep a record of earlier chemical concentrations that enables them to move in the right direction, given the gradient (if it was less concentrated earlier than now, they are moving toward the source not away from it). Let’s give ourselves permission to say that bacteria sense, act, remember, and know: not in the way higher animals do, to be sure, but in primitive form (though adequate for their needs). Then what I want to suggest is that bacteria epistemology has some of the central features of human epistemology—despite the lack of belief, reflection, rationality, and so on. That is, the structure of bacteria epistemology mirrors that of human epistemology (and the same applies to all other species in between).
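As a rough illustration of the sense-remember-move loop just described (my sketch, not the author's; the one-dimensional chemical field, the step size, and the random reorientation rule are simplifying assumptions):

```python
import random

# Rough sketch of the bacterial strategy described above: compare the current
# chemical reading with the remembered earlier one; keep going if the
# concentration is rising, otherwise pick a new direction at random.
# The 1-D field with a food source at x = 0 is an assumption for illustration.

def concentration(x):
    return 1.0 / (1.0 + abs(x))  # higher nearer the food source at x = 0

def step(x, direction, remembered_reading):
    reading = concentration(x)
    if reading < remembered_reading:
        direction = random.choice([-1, 1])  # gradient falling: reorient ("tumble")
    # gradient rising or flat: keep the current heading ("run")
    return x + 0.1 * direction, direction, reading

x, direction, remembered = 5.0, random.choice([-1, 1]), 0.0
for _ in range(500):
    x, direction, remembered = step(x, direction, remembered)
print(f"final position: {x:.2f}")  # typically ends up near the source at 0
```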

            Suppose a bacterium B knows that there is food in a certain direction by sensing the chemicals diffusing toward it. There is food there, B’s receptors are activated, and the proximate chemical environment indicates food. Can we analyze B’s knowledge by these conditions? It might seem as though the conditions are both necessary and sufficient, but consider the following odd case. Food is present and B senses chemicals at its surface that normally indicate the nearby presence of food, but in fact it is pure chance that this is so: a freak lightning bolt has disturbed the chemical environment in such a way that the sensed chemicals don’t actually come from the food source but from the impact of the lightning. B’s receptors have registered the chemicals and those chemicals usually emanate from a nearby food source, but in this odd case these two things have been severed—it is just an accident that the food happens to be where B’s sensors indicate it to be. The impinging chemicals derive from the lightning not the food in this unusual set-up. If B had a human psychology, we would say that it has a true justified belief, but that this belief is only accidentally true, and hence does not count as knowledge. But B doesn’t have a human psychology, though it can (as we agreed) possess knowledge, and in this case the intuitive upshot is the same—B doesn’t know there is food nearby. What we have is a Gettier case for bacteria. B’s evidence for the food is not caused by the food, though there is food there and the evidence is generally reliable. B has the analogue of an accidentally true belief. So even at this primitive cognitive level Gettier cases can arise, despite the lack of the humanlike apparatus of belief and justification. There are analogues of that apparatus, but it is not required that the genuine article be present in order for Gettier cases to be possible. This is what I meant by saying that the same epistemological structure applies across the board. And given that this structure applies, we should be less reluctant to employ the concept of knowledge quite broadly: bacteria have knowledge because they are subject to Gettier cases. Such cases are a sign that the concept of knowledge can be applied broadly. Bacteria detect external facts by means of the traces left by those facts (concentrations of chemicals), so it is possible for there to be cases in which the detection corresponds to the facts and is based on reliable evidence but in which there is no knowledge.

            A second structural feature relates to skepticism. Sense experience is not logically sufficient for corresponding external facts, so we have the problem of skepticism concerning knowledge of the external world; but likewise the chemical concentrations around bacteria are not logically sufficient for the presence of the distal facts they are taken to indicate. The bacteria could reflect (were they capable of reflection) that they might actually be “bacteria in a membrane”, that is, the chemical interactions at their surface might be occurring without anything corresponding to them in the wider world. They thus have a skeptical problem analogous to ours: scientists might be producing a simulated bacteria world. The same is true for animals with more advanced senses and sentience: the octopus could coherently wonder whether its beliefs about the external world are really true, given that its inner experiences do not logically imply the truth of those beliefs (maybe it is living in an octopus Matrix). The skeptical problem is not a problem about our knowledge of the external world; it is a problem about every animal’s knowledge of the external world. Russell could have written a book called Animal Knowledge: Its Scope and Limits, with a chapter on bacteria epistemology.  [1]

            There is also the problem of induction: all animals rely on induction to cope with their environment (as Hume himself pointed out), so the problem of induction applies to all species. The inductive beliefs or dispositions of animals might all be false. Impressed by this point, one might apply Popper’s view of knowledge to all species: falsification not verification is the way to organize one’s beliefs. Each species is in the business of developing a theory of the world, even if it is simple and superficial, and Popper’s epistemology recommends replacing verification by falsification. The prediction that the sun will rise every morning is never justified by its past risings, but it has withstood the test of attempts at falsification and so may be tentatively accepted. Whether animal expectation takes the form of full-blown belief or just habits of behavior is not central to epistemology; the structure is there even when the psychological elements vary. That structure basically consists of receptivity to evidence combined with commitment to facts that go beyond the evidence; but the structure can have different psychological realizations in different species. There are epistemological universals that exist across species with different kinds of mind, even down to bacteria. Species can vary in what facts are known and in the faculties by which they are known, but what is common is the abstract structure just sketched. Epistemology really began on earth some 4 billion years ago with bacteria, and it has been evolving ever since, with humans at the end of a long chain. It is parochial to assume that epistemology begins and ends with humans. Gettier problems, skepticism, and the problem of induction have been around since the earliest origins of life, even if unrecognized; we are just recent examples of these ancient problems.

 

Colin McGinn

  [1] He did write a book called Human Knowledge: Its Scope and Limits. Such anthropocentrism!

Induction Reconfigured

Some critics of induction have charged that it is simply a logical fallacy to reason inductively. If induction is inferring a general conclusion from particular cases, then the conclusion patently doesn’t follow from the premises. How can we validly infer that all swans are white from the premise that some swans are white? Clearly, the fact that some swans are white is compatible with some swans being black, so how can we infer that all swans are white merely from observing some white ones? Statistically speaking, we are moving from propositions about a sample to propositions about the population that contains the sample—and the sample may be small and biased relative to the population. In fact, in all cases in which we are particularly confident in our inductions the sample is tiny compared to the population: for example, the number of times we have seen the sun rise is minute compared to the number of times that it will (we think) rise. Any law of nature will apply to extremely large populations of objects or events (in the billions of billions) and yet we have observed only a small fraction of these cases; but we don’t seem to be deterred by the statistical boldness of our inference. What if our limited sample is highly unrepresentative of the totality of cases? The critic insists that it is simply irrational to infer such a vast generalization from a small number of positive instances. You can’t derive all from some! Nor does it help to limit inductive inference to judgments of probability: the critic will insist that not even probability follows from premises about particular cases, especially when the sample size is so small. Even if we can infer probabilities from our samples, we vastly overestimate those probabilities: we think it is close to certain that the sun will rise tomorrow and thereafter, and yet we have only observed a tiny proportion of the sun’s behavior, past and future. Our inductive reasoning is clearly defective, the critic maintains, given the nature of the premises we use and the conclusions we draw.

            It is hard to deny that there is something amiss about moving from some to all in the way we are represented as doing. We recognize that in many cases such a move is wildly inappropriate: just because some coins are copper it doesn’t remotely follow that all coins are copper. We are well aware that samples can be unrepresentative, so why do we seem to forget this in the cases where we confidently reason inductively? The answer I want to suggest is that the reasoning that is involved is not properly represented by the standard formulations: we don’t in fact try to derive all from some. The reasoning works differently: there is an intermediate premise from which the conclusion does logically follow. We don’t think, “x is F and y is F and z is F; therefore, everything is F”, which is certainly vulnerable to the critic’s complaint. We think something subtler, namely: “The things we have observed have a certain nature that ensures that they are F; therefore, everything of this type is F”. First let us focus on the relationship between the nature and the prediction: given that it is in the nature of the things we have observed to be F, anything of this type will be F. For instance, given that it is in the nature of material bodies to exert gravitational force, all material bodies will exert gravitational force. This is not an inference from some to all but an inference from nature to consequence: if it is in the nature of certain things to have a particular property, then all things of that type will exhibit that property. So if the proposition about nature is one of our premises, we are not drawing a conclusion that fails to follow from the premises. If it is in the nature of kind K to be F, then necessarily any instance of K will be F. Being F is a consequence of being K.
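A rough regimentation may help to show where the logical work is now being done. Suppose, as one possible rendering, that “it is in the nature of Ks to be F” is expressed as a necessitated universal conditional (the box, the predicate letters, and the constant a are illustrative shorthand, and essentialists may prefer a stronger reading of the nature-claim):

\[
\Box\,\forall x\,(Kx \rightarrow Fx) \;\;\therefore\;\; \forall x\,(Kx \rightarrow Fx)
\]
\[
\forall x\,(Kx \rightarrow Fx),\;\; Ka \;\;\therefore\;\; Fa
\]

The first step is valid in any modal system containing the T axiom (what is necessary is true), and the second is just universal instantiation plus modus ponens. Nowhere in this chain does a premise about some Ks get inflated into a conclusion about all Ks; the generality is already contained in the nature-claim itself.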

The question then is whether the premise itself follows from observations of particular instances. And to this I reply that we do not suppose that it does: we don’t think that truths about instances imply truths about natures and properties. Rather, this is a hypothesis we bring to bear on the instances: we think that it explains the regularities we have observed. We generally suppose that nature works by means of natures and we hypothesize that this is what is going on in a particular case—but we don’t think the hypothesis follows from the observations. So we are not making any fallacious inference. We can accept that other conceptions of nature are conceivably true—that there are no natures in nature, say, and that everything we observe is just a giant coincidence—but we believe that our general hypothesis is more likely. Why we believe that is another question (it might just be a matter of metaphysical faith); the point is that it is not arrived at by inductive inference in the classic style. It is not an inference from some to all: it is a high-level hypothesis about how the world works. No doubt the hypothesis is vulnerable to skeptical doubt, because it is difficult to rule out the alternatives, but it is not just a flagrant non sequitur, like the move from some to all. We have skepticism about the external world and other minds too, but it is not charged that our habitual beliefs in these areas are based on a logical fallacy. That is the charge I am anxious to rebut. Skepticism we can live with, but a non sequitur of numbing grossness is not something we can tolerate. The point is that our so-called inductive reasoning does not operate according to the model classically proposed: it is not an unmediated leap from some to all; rather, it ventures a hypothesis and then makes a logically valid inference from that hypothesis.

            Nor is that hypothesis inherently absurd or logically questionable. Nature is full of things with natures, and their properties flow from these natures. This is what natural laws depend on. We view nature in this way as a kind of antecedent commitment; we don’t infer it from a limited sample of regularities. If we did, we would be trying to derive powers from mere regularities. But that is not how our reasoning works: we bring the concept of power to objects; we don’t derive it from objects. As Hume taught us, we can’t observe powers in objects. We commit no non sequitur; we just make an assumption. At no point are we inferring from the fact that some things are F that all things are F, even when reasoning “inductively”. We don’t induce the general from the particular. Our reasoning is subtler: we impute natures and powers and then we logically derive general conclusions. The conclusion does logically follow from the premises, since if it is in the nature of Ks to be F then all Ks will be F (near enough). If it is in the nature of swans to be white, they will all be (naturally) white; and if it is in the nature of material bodies to attract other material bodies, they will all do so. Of course, it may not be in the nature of these things to have those properties, in which case our projections will turn out to be mistaken: we are certainly not infallible in our inductive reasoning, and so skepticism can gain a foothold. But at no point have we made the numbingly gross inference from some to all. In a sense, then, we don’t use induction (“enumerative induction”) when reasoning “inductively”.  [1]

            This explains why the accumulation of positive instances does not increase our inductive confidence—because it was never based on the frequency of such instances. We are no more confident today that the sun will always rise than our ancestors were, though we have witnessed more confirming instances; and we don’t need to observe more effects of gravity in order to strengthen our conviction that gravity will continue to operate. The reason is that just a few instances can elicit from us the hypothesis of natures and powers, and from that we can derive a generalization. We are not adding up the instances and calculating that the conclusion is more probable the more instances we have; rather, we postulate powers and then derive the general conclusion they support. It is not that we believe that bread nourishes (will always nourish) simply because we have observed that it has nourished many times in the past; we believe it because we postulate a nourishing nature (the power to nourish) and conclude that bread will continue to nourish in virtue of that nature. We could make this postulation based on a single instance (and often do make such postulations); we don’t believe it based on observed frequencies. Indeed, we recognize that observed frequencies don’t entail the power in question, since bread may not nourish and yet be accidentally correlated with things that do (such as the air inside it). What we never do is reason that since bread has nourished on a number of past occasions it must nourish forever—any more than we infer that all coins must be copper because some are. We need the premise that it is in the nature of bread to nourish or in the nature of coins to be copper (which in the latter case it is not). Again, we can be wrong about such things, thus inviting skepticism, but we are not wrong because we have made an illogical leap from a meager sample to a large population. We have not supposed that facts about some things can establish facts about all things.

            Both the supporter of induction and the critic of induction make a common assumption, namely that our reasoning about the future and the unobserved is based on inferring general conclusions from premises concerning particulars. One side says that we can infer unlimited general propositions from propositions about limited samples; the other side denies that such inferences are legitimate. But the common assumption is mistaken: we just don’t reason in that simple way. Insofar as anyone does, they are indulging in questionable ratiocinative practices. There is no need to reason in that dubious way; we can provide an alternative reconstruction of the nature of the reasoning involved. Perhaps we should stop speaking of “induction” or “inductive reasoning”, because these terms suggest the all-from-some model of the reasoning at issue. Instead, we might speak of “projection” or “projective reasoning”, thus staying neutral about the precise nature of the inference. What is called “the problem of induction”—the problem of justifying moving from a limited set of specific facts to an unlimited general fact—does not apply to the kind of reasoning I have sketched, since no such move is made in that kind of reasoning.

The classic problem of induction arises from empiricist assumptions about where our knowledge must come from, namely from sensory observation and that alone.  [2] This is what leads to the idea that there is a logical problem about induction, since observation of a finite number of cases falls woefully short of the kind of conclusion we seek to derive from such observation. But we don’t reason in this empiricist way (unless we are convinced empiricists); we inject a much richer set of assumptions into our reasoning about the natural world (natures, powers). Empiricism leads to the idea that projective knowledge depends on enumerative induction from positive instances, but this kind of reasoning is a highly questionable procedure; better to abandon such reasoning and the empiricism that goes with it. There is no problem of induction in the classic sense because there is no induction in that sense.

 

  [1] I hope I have made it abundantly clear that my aim is not to answer the skeptical problem of justifying induction—there is plenty of room for skeptical doubt in the account of induction I favor. Induction really is vulnerable to skeptical challenge, like our beliefs in the external world and other minds. My point is just that induction is not based on the grotesque unmediated leap from some to all.

  [2] Suppose we grant that a certain rationalist assumption is known to be true, namely that the world consists of objects with natures that necessitate their properties and effects (maybe this assumption is implanted in us by God at birth): then we will not have a problem of induction of the classic type, since that assumption enables us to make inferences that avoid any dubious move from some to all. We get a serious problem of induction only when we seek to move from the observation of a small set of regularities to a conclusion concerning a more extensive population.
