The Beginning of Modern Science

I expect a terrible rebuke from one of my adversaries, and I can almost hear him shouting in my ears that it is one thing to deal with matters physically and quite another to do so mathematically, and that geometers should stick to their fantasies, and not get involved in philosophical matters where the conclusions are different from those in mathematics. As if truth could ever be more than one.

Galileo Galilei, "Discourse on Floating Bodies," 1612


E pur si muove -- And yet it moves.

Galileo Galilei, sotto voce after his trial and coerced confession.

One thing that happened during the Renaissance that was of great importance for the later character of modern philosophy was the birth of modern science. This may not have been a coincidence. It is noteworthy that the confidence of Johannes Kepler in the mathematical nature of the universe was Platonic in inspiration, derived from the revival of Plato by Renaissance scholars and ultimately from the Platonism of Mistra in Romania. It is thus reasonable to think that this enabled Kepler and Galileo to break through Aristotelian conceptions of induction and found the new, modern mathematical physics.

Even as in the Middle Ages philosophy was often thought of as the "handmaiden of theology," modern philosophers have often thought of their discipline as little more than the "handmaiden of science." Even for those who haven't thought that, the shadow of science, its spectacular success and its influence on modern life and history, has been hard to ignore.

For a long time, philosophers as diverse as David Hume, Karl Marx, and Edmund Husserl saw the value of their work in the claim that they were making philosophy "scientific." Those claims should have ended with Immanuel Kant (1724-1804), who for the first time clearly drew a distinction between the issues that science could deal with and those that it couldn't; but since Kant's theory could not be demonstrated in the same way as a scientific theory, the spell of science, even if only through pseudo-science, continues.

The word "science" itself is simply the Latin word for knowledge:  scientia. Until the 1840's what we now call science was "natural philosophy," so that even Isaac Newton's great book on motion and gravity, published in 1687, was The Mathematical Principles of Natural Philosophy (Principia Mathematica Philosophiae Naturalis). Newton was, to himself and his contemporaries, a "philosopher." In a letter to the English chemist Joseph Priestley written in 1800, Thomas Jefferson lists the "sciences" that interest him as, "botany, chemistry, zoology, anatomy, surgery, medicine, natural philosophy [this probably means physics], agriculture, mathematics, astronomy, geography, politics, commerce, history, ethics, law, arts, fine arts." The list begins on familiar enough terms, but we hardly think of history, ethics, or the fine arts as "sciences" any more. Jefferson simply uses the term to mean "disciplines of knowledge."

Something new was happening in natural philosophy, however, and it was called the nova scientia, the "new" knowledge. It began with Mikolaj Kopernik (1473-1543), whose Polish name was Latinized to Nicolaus Copernicus. For ancient and mediaeval astronomers the only acceptable theory of the universe came to be geocentrism, that the Earth is the center of the universe, with the sun, moon, planets, and stars moving around it. But astronomers needed to explain a couple of things:  why Mercury and Venus never moved very far away from the sun -- they are only visible a short time after sunset or before sunrise -- and why Mars, Jupiter, and Saturn sometimes stop and move backwards for a while (retrograde motion) before resuming their forward motion. Believing that the heavens were perfect, everyone wanted motion there to be regular, uniform, and circular. The system of explaining the motion of the heavenly bodies using uniform and circular orbits was perfected by Κλαύδιος Πτολεμαῖος, Claudius Ptolemy, who lived in Egypt probably during the reign of the Emperor Marcus Aurelius (161-180). His book, still known as the Almagest, based on its Arabic title, al-Majistî (in turn based on Greek τὸ Μέγιστον, "The Greatest"), explains that the planets are fixed to small circular orbits (epicycles) which themselves are fixed to the main orbits. With the epicycles moving one way and the main orbits the other, the right combination of orbits and speeds can reproduce the motion of the planets as we see them. The only problem is that the system is complicated. It takes something like 27 orbits and epicycles to explain the motion of five planets, the sun, and the moon. This is called the Ptolemaic system of astronomy.
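
The geometry here is easy to make concrete, since a Ptolemaic planet is just a point on a small circle riding on a larger one. The following minimal sketch in Python uses invented radii and speeds, not Ptolemy's actual parameters, to show how one deferent plus one epicycle yields an apparent path that periodically reverses direction against the stars:

    # A minimal sketch, NOT Ptolemy's real parameters:  the radii (R, r)
    # and angular speeds (W, w) below are invented for illustration.
    import cmath

    def apparent_position(t, R=1.0, W=1.0, r=0.4, w=-8.0):
        """Planet as a point on an epicycle (small circle) whose center
        rides on a deferent (large circle) around the Earth at 0."""
        deferent = R * cmath.exp(1j * W * t)   # epicycle's moving center
        epicycle = r * cmath.exp(1j * w * t)   # offset around that center
        return deferent + epicycle

    # Sampling the planet's apparent angle over time shows it sometimes
    # decreasing:  the retrograde motion the epicycle was meant to produce.
    angles = [cmath.phase(apparent_position(t / 100)) for t in range(700)]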

Copernicus noticed that it would make things a lot simpler (Ockham's Razor) if the sun were the center of motion rather than the earth. The peculiarities of Mercury and Venus, not explained by Ptolemy, now are explained by the circumstance that the entire orbits of Mercury and Venus are inside the Earth's orbit. They cannot get around behind the Earth to be seen in the night sky. The motion of Mars and the other planets is explained by the circumstance that the inner planets move faster than the outer ones. Mars does not move backwards; it is simply overtaken and passed by the Earth, which makes it look, against the background, as though Mars is moving backwards. Similarly, although it looks like the stars move once around the Earth every day, Copernicus figured that it was just the Earth that was spinning, not the stars. This was the Copernican Revolution.

Now this all seems obvious. But in Copernicus's day the weight of the evidence was against him. The only evidence he had was that his system was simpler. Against him was the prevailing theory of motion. Mediaeval physics believed that motion was caused by an "impetus." Things are naturally at rest. An impetus makes something move; but then it runs out, leaving the object to slow down and stop. Something that continues moving therefore has to keep being pushed, and pushing is something you can feel. (This was even an argument for the existence of God, since something very big -- like God -- had to be pushing to keep the heavens going.) So if the Earth is moving, why don't we feel it? Copernicus could not answer that question. Nor was there an obvious way out of what was actually a brilliant prediction:  If the stars did not move, then they could be different distances from the earth; and as the earth moved in its orbit, the nearer stars should appear to move back and forth against more distant stars. This is called "stellar parallax," but unfortunately stellar parallax is so small that it was not observed until 1838. So, at the time, supporters of Copernicus could only contend, lamely, that the stars must all be so distant that their parallax could not be detected. Yeah, sure. In fact, the absence of parallax had been used since the Greeks as more evidence that the Earth was not moving.
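
How small is "so small"? A back-of-the-envelope check, in Python, makes the point; it assumes the modern convention that the parallax angle in arcseconds is the reciprocal of the distance in parsecs, along with modern distance figures unavailable to anyone before the 19th century:

    # Why no one could see stellar parallax before precision telescopes.
    def parallax_arcsec(distance_parsecs):
        # Convention:  a star at 1 parsec shifts by 1 arcsecond as the
        # Earth moves 1 astronomical unit to the side of the Sun.
        return 1.0 / distance_parsecs

    print(parallax_arcsec(3.5))   # 61 Cygni (Bessel, 1838):  ~0.29"
    # The unaided eye resolves only about 60 arcseconds -- some 200 times
    # coarser -- so the Greeks and Mediaevals had no hope of detecting it.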

It is common now in many venues for people to say that heliocentric astronomy was rejected by the Greeks and ignored in the Middle Ages just because of the human arrogance that wanted the Earth to be the center of the universe -- we belong in the center of things. There were certainly some people who thought that way, but it is hard to imagine that all Greeks, or all Mediaevals, were so foolish. They weren't. The little morality tale we are given of Mediaeval ignorance and anthropocentrism overlooks the problem that there was no evidence of heliocentrism in Ancient or Mediaeval science, that Copernicus himself did not supply any evidence, and that it was the Ancient and Mediaeval understanding of the physics that was dead against the Earth moving. Usually these treatments don't even mention the physics. The only evidence that Stephen Hawking mentions against Ptolemaic astronomy (in his A Brief History of Time) at the end of the Middle Ages is that the Moon, moving on an epicycle, would move away from and towards us in a way that would dramatically change its apparent size. Unfortunately, Copernicus retained an epicycle for the motions of the Moon, which means that this problem with Ptolemaic astronomy is equally a problem for Copernican astronomy. Only Johannes Kepler (1571-1630) would fix things by replacing epicycles with elliptical orbits. That Copernicus supplied no compelling evidence for this theory led Thomas Kuhn to think that Copernicanism won out only because of social, not evidentiary, factors. But then Copernicanism did not triumph until Galileo, and the evidentiary situation with Galileo was much different than it had been with Copernicus [note].

Copernicus was also worried about getting in trouble with the Church. The Protestant Reformation had started in 1517, and the Catholic Church was not in any mood to have any more of its doctrines, even about astronomy, questioned. So Copernicus did not let his book be published until he lay dying.

The answers, the evidence, and the trouble for Copernicus's system came with Galileo Galilei (1564-1642). Galileo is important and famous for three things:

  1. Most importantly, he applied mathematics to motion. This was the real beginning of modern science. There is no math in Aristotle's Physics. There is nothing but math in modern physics books. Galileo made the change. It is inconceivable now that science could be done any other way. Aristotle had said, simply based on reason, that if one object is heavier than another, it will fall faster. Galileo tried that out (though it had already been done by John Philoponus in the 6th century) and discovered that Aristotle was wrong. Air resistance aside, everything falls at the same rate. But then Galileo determined what that rate was by rolling balls down an inclined plane (not by dropping them off the Leaning Tower of Pisa, which is the legend). This required him to distinguish between velocity (e.g. meters per second) and acceleration (change in velocity, e.g. meters per second per second). Gravity produced an acceleration -- 9.8 meters per second per second (a worked example follows this list). Instantly Galileo had an answer for Copernicus:  simple velocity is not felt, only acceleration is (although, as Einstein would say, not all acceleration). So the earth can be moving without our feeling it. Also, velocity does not change until a force changes it. That is the idea of inertia, which then replaced the old idea of an impetus. All this theory was ultimately perfected by Isaac Newton (1642-1727).

    Galileo's conception of inertia needed perfecting because it still retained Mediaeval elements. For instance, Galileo said, "all external impediments removed, a heavy body on a spherical surface concentric with the earth will maintain itself in that state in which it has been; if placed in movement towards the west (for example), it will maintain itself in that movement." There are at least two problems with this statement. The "spherical surface concentric with the earth" means the "spheres" and spherical orbits of Ptolemy and, as it happens, Copernicus himself. Galileo did not accept the discovery of Kepler that the orbits of planets are ellipses, and so his support of Copernicus was to the letter, including the retention of some epicycles. Converting Ptolemaic spheres into a form of inertia meant introducing the concept of "circular inertia," that a body moving on a circular path will continue on that path unless "disturbed." This is a unique conception, intermediate between impetus (where there was "circular impetus") and Newtonian inertia, where the latter, as a velocity, is a vector quantity, i.e. involves motion in a constant direction. Galileo thus represents a transition in these matters, much more than may be generally recognized.

  2. With the physical objections to Copernicus's theory answered, the case was completed with positive evidence. Around 1609 it was discovered in the Netherlands that putting two lenses (which had been used since the 13th century as eye glasses) together made distant objects look close. Galileo heard about this and himself produced the first astronomical quality telescope. After selling telescopes to the Republic of Venice, he turned his attention to the sky. He saw several things:  a) the Moon had mountains and valleys. This upset the ancient notion that the heavens, the Moon included, were completely unlike the Earth. b) the planets all showed disks and were not points of light like stars. c) Jupiter had four moons. This upset the argument, which had been used against Copernicus, that there could only be one center of motion in the universe. Now there were three (the Sun, Earth, and Jupiter). d) There were many more stars in the sky than could be seen with the eye; and the Milky Way, which always was just a glow, was itself composed of stars. And finally e) Venus went through phases like the Moon. That vindicated Copernicus, for in the Ptolemaic system Venus, moving back and forth at the same distance between the Earth and the Sun, would only go from crescent to crescent. It would mostly have its dark side turned to us. With Copernicus, however, Venus goes around on the other side of the Sun and so, in the distance, would show us a small full face. As it comes around the Sun towards the Earth (in the evening sky), we would see it turn into a crescent as the disk grows larger. Those are the phases, from small full to large crescent, that Galileo saw. So that he could claim priority for this discovery before actually announcing it, Galileo concealed his claim in an anagram that unscrambled to Cynthiae figuras aemulatur mater amorum, "The mother of loves [Venus] imitates the forms of Cynthia [the Moon]." The only argument that could be used against Galileo for all these discoveries was that the telescope must be creating illusions. In fact it was not well understood why a telescope worked. Some people looked at stars and saw two instead of one. That seemed to prove that the telescope was unreliable. Soon it was simply accepted that many stars are double. They still are.

  3. With his evidence and his arguments, Galileo was ready to prove the case for Copernican astronomy. He had the support of the greatest living astronomer, Johannes Kepler, but not the Catholic Church. He had been warned once to watch it, but then a friend of his (Maffeo Barberini) became Pope Urban VIII (1623-1644). The Pope agreed that Galileo could write about both Ptolemaic and Copernican systems, setting out the arguments for each. Galileo wrote A Dialogue on the Two Principal Systems of the World (1632). Unfortunately, the representative of the Ptolemaic system in the dialogue was made to appear foolish, and the Pope thought it was a caricature of himself -- the character had voiced an argument that Urban had personally suggested to Galileo. Urban withdrew his protection. Galileo was led before the Inquisition, "shown the instruments of torture," and invited to recant. He did, but was kept under house arrest for the rest of his life. Nevertheless, it was too late. No serious astronomer could ever be a geocentrist again, and the only discredit fell against the Church. As Galileo left his trial, he is supposed to have muttered, E pur si muove -- "And yet it moves."

    Nevertheless, with access now to the Vatican archives, we discover that Galileo was only "convicted" on the basis of an evidently forged document, which supposedly had instructed him not to discuss the astronomical controversy at all. Since defendants were not allowed to know the evidence against them or, with the assistance of counsel, to cross-examine witnesses or challenge things like this forged document, it becomes evident that the Inquisition could only "convict" Galileo on the basis of fraud. This would not surprise Lord Acton.
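
Returning to point 1 above, the promised worked example:  this short Python sketch simply evaluates the standard formulas for free fall from rest, v = g*t and d = (1/2)*g*t^2, to exhibit the difference between velocity and acceleration that Galileo established with his inclined planes (the numbers are illustrative; air resistance is ignored):

    g = 9.8  # acceleration of gravity, meters per second per second

    for t in range(1, 5):                # elapsed seconds of free fall
        velocity = g * t                 # v = g*t, meters per second
        distance = 0.5 * g * t ** 2      # d = (1/2)*g*t^2, meters
        print(t, velocity, distance)
    # 1 s: 9.8 m/s, 4.9 m;  2 s: 19.6 m/s, 19.6 m;  3 s: 29.4 m/s, 44.1 m.
    # Velocity grows linearly, but distance goes as 1 : 4 : 9 -- the
    # square law of uniform acceleration.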

[Photograph:  The Tomb of Galileo, Basilica di Santa Croce, Florence, 2019]
Some think less of Galileo because he recanted his beliefs, while Socrates was willing to die for his. Well, there has been no more civilized example of a death penalty than when Socrates got to sit around, talk to his friends, calmly drink the hemlock, and lie down to a peaceful death -- the "sweet shafts," the agana belea, of Apollo's silent arrows. Galileo was threatened with torture. No one can be faulted for saying anything under those circumstances.

Indeed, the history of science subsequently often consists of who gets to claim the status of Galilean martyrdom. The interesting cases in our day concern Global Warming and Evolution. The weight of Official Science -- journals like Nature, the National Science Foundation, or the Royal Society of Britain -- is all for Global Warming and for Evolution. The Right complains that multiple equivalents of Galileo are oppressed because they stand up for the heretical truth, that Global Warming and Evolution are frauds. Unfortunately, this confuses very different issues. Evolution is in no danger from any real science; and the Right wastes a great deal of money and effort (such as Ben Stein's movie Expelled) promoting theology and bad metaphysics as some sort of "science." On the other hand, the Global Warming "consensus" is a product of politics, not science. The Right thus plays right into the hands of Al Gore, who is happy to lump "Intelligent Design" and Global Warming skepticism as equally part of an "assault on reason." This is very ironic when a great deal of enthusiasm for the Global Warming cause follows from hostility to science itself, in so far as science and technology represent human progress and the betterment of human life on earth. Thus, between the Earth Liberation Front and the Creationists (not to mention Post-Modernist nihilism), there is little real interest in the modern tradition of science begun by Copernicus and Galileo.


The Beginning of Modern Science, Note

We can see how muddled and distorted this story can get in a recent book, The Age of Global Warming, A History, by Rupert Darwall [Quartet Books, Ltd., 2013]. Darwall has a good grasp of the nature of science, and appropriately invokes Karl Popper; but his grasp of some features of the history of science is confused. Thus, he says:

To explain why planets and stars were in positions they shouldn't according to the Ptolemaic system of the Earth being at the centre of the universe, medieval astronomers added geometrically complex and implausible epicycles. Adding 'epicycles' has come to be synonymous with adopting stratagems to avoid questioning the basic premise of a scientific proposition. [p.99]

Later he refers to "the scandalous state of astronomy before Copernicus" [p.179]. But almost none of these statements is true. The position of the stars had nothing to do with any of the systems, except that the lack of detectable parallax was evidence against heliocentrism. It certainly was never a prediction of Ptolemy that the stars should be elsewhere than they are. I can't imagine where Darwall got such a notion. Nor were the planets elsewhere than where they should be. As Darwall might have known from reading Stephen Hawking, Ptolemaic astronomy was mathematically equivalent to that of Copernicus, or even to that of Kepler. There were no positional problems that were falsifying Ptolemy. Thus, there was no need to fix up the theory; and all the epicycles of Mediaeval astronomy were already present in Ptolemy's completed theory. No Mediaeval astronomer added epicycles to Ptolemy. And if epicycles are inherently "complex and implausible," then the same criticism applies to Copernicus, who retained epicycles for the orbits that Kepler later realized were ellipses. The epicycles, in short, produced the geometrical equivalent of ellipses.

Thus, astronomy was in no "scandalous state" before Copernicus, and there actually was no urgent need to do anything about it at the time. Furthermore, since Copernicus could not resolve the issues about motion, his theory was in no better an evidentiary position than any earlier heliocentric systems. Until Galileo, reasonable persons, informed of the physics, would have judged that the weight of the evidence was against Copernicus. Mr. Darwall must have gotten his screwy ideas from someplace, and I am curious where that could have been -- although it is typical for presentations to ignore the physics and assume (like Christopher Hitchens) that half-wits could have seen through geocentric astronomy on the available evidence. A few years ago, late at night on PBS, I caught some episodes in a history of philosophy series. The lecturer said that Copernicus had "proven mathematically" that the earth goes around the sun. Since he had done nothing of the sort, I was wondering then where such a statement could have come from. All of this seems to be badly understood and badly represented by almost everyone.

Rupert Darwall betrays another confusion about the history of science. Following the "scandalous state" remark, he mentions, "Maxwell's electro-magnetic theories replacing theories of ether in the nineteenth century, which in turn created a paradigm crisis and Einstein's special theory of relativity in 1905" [p.179]. However, Maxwell's theory of electromagnetic radiation did not replace "theories of ether" but actually continued them. Ether was the hypothetical medium for electromagnetic waves -- on the sensible metaphysical principle that a wave is the deformation of a medium. It was experiments to detect the ether, which produced the bizarre result that the velocity of light was always the same, that produced the "paradigm crisis" and led to Einstein. But the subject of electromagnetism and the ether is something else where very little of an accurate nature can be found in public discourse.


René Descartes (1596-1650)
and the Meditations on First Philosophy

But it were better, O priests, if the ignorant, unconverted man regarded the body which is composed of the four elements as an Ego, rather than the mind. And why do I say so? Because it is evident, O priests, that this body which is composed of the four elements lasts one year, lasts two years, lasts three years, lasts four years, lasts five years, lasts ten years, lasts twenty years, lasts thirty years, lasts forty years, lasts fifty years, lasts a hundred years, and even more. But that, O priests, which is called mind, intellect, consciousness, keeps up an incessant round by day and by night of perishing as one thing and springing up as another.

Buddhism in Translations, by Henry Clarke Warren, "The Mind Less Permanent than the Body," translated from the Samyutta-Nikâya (xii.62) [Atheneum, New York, 1982, p.151]



Εἶπεν ἄφρων ἐν καρδίᾳ αὐτοῦ· οὐκ ἔστι Θεός.
Dixit insipiens in corde suo non est Deus.
The fool hath said in his heart, There is no God.

Psalms 14:1, Greek text; Psalm 13 in the Septuagint.

Descartes is justly regarded as the Father of Modern Philosophy. This is not because of the positive results of his investigations, which were few, but because of the questions that he raised and problems that he created, problems that have still not been answered to everyone's satisfaction:  particularly the Problem of Knowledge and the Mind-Body Problem. And in a day when philosophy and science were not distinguished from each other, Descartes was a famous physicist and mathematician as well as a philosopher. Descartes' physics was completely overthrown by that of Newton (although French scientists were reluctant to give him up), so we do not much remember him for that.

But Descartes was a great mathematician of enduring importance. He originated analytic geometry, where all of algebra can be given geometrical expression. Like Galileo combining physics and mathematics, this also combined two things that had previously been apart, arithmetic and geometry. The modern world would not be the same without graphs of equations. Rectangular coordinates for graphing are still called Cartesian coordinates (from Descartes' name: des Cartes). Descartes is also the person who began calling the square root of -1 (i.e. √-1) the "imaginary" number (i). Descartes lived in an age of great mathematicians, including Marin Mersenne (1588-1648), Pierre Fermat (1601-1665), Blaise Pascal (1623-1662), and Christiaan Huygens (1629-1695). At a time before scientific journals, Mersenne himself mediated a correspondence between all these people (as well as with Galileo, Thomas Hobbes, and many others). Primes of the form 2 to a power minus 1 (i.e. 2^n - 1) are still called "Mersenne primes." Huygens then lived long enough to know Isaac Newton (1642-1727).
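
The definition just given is simple enough to check directly. A minimal sketch in Python, using plain trial division (adequate at this small scale, though not how large Mersenne primes are actually found):

    def is_prime(k):
        """Trial division, fine for small candidates like these."""
        if k < 2:
            return False
        i = 2
        while i * i <= k:
            if k % i == 0:
                return False
            i += 1
        return True

    # Mersenne primes are primes of the form 2**n - 1.
    mersenne = [2**n - 1 for n in range(2, 14) if is_prime(2**n - 1)]
    # -> [3, 7, 31, 127, 8191], i.e. n = 2, 3, 5, 7, 13.  A prime
    # exponent is necessary but not sufficient:  2**11 - 1 = 2047 = 23*89.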

Apart from being the Father of Modern Philosophy and a great mathematician, Descartes may have been the only philosopher, ever, to have begun his career as a professional soldier. This was an era when most European armies were collections of professional mercenaries, much of whose income, and even basic military supply, came from looting, often with little respect for the property, or even the lives, of civilian populations. Most of the lifetime of Descartes was taken up with the devastation and horrors of the Thirty Years War (1618-1648), whose end he barely outlived. By then, however, he had long retired as a soldier, and he had also settled down to live in the Netherlands, where the censorship and other dangers of the Kingdom of France could be avoided. To be sure, Descartes dedicated the Meditations on First Philosophy to the Faculty of Theology at the University of Paris, but these were people who would not have given him the time of day and can only have viewed his thought with alarm. They would have seen it as a clear and present danger to their own Aristotelian universe, which it certainly was. Similarly, it remains a noteworthy truth of the beginning of Modern Philosophy that nobody was an academic professor of philosophy until Immanuel Kant. The closest anyone in the meantime came was John Locke, who was a fellow in botany and pharmacology at Oxford, but who never obtained a doctorate, even in medicine, which he nevertheless practiced. It remains to the judgment of history whether contemporary academic philosophy suffers from any of the same evils as the universities in the time of Descartes -- although my judgment is that it does.

Seeing Descartes as a mathematician explains why he was the kind of philosopher that he was. Now it is hard to reconcile Descartes' status as a scientist and the inspiration he derived from Galileo and others with his clear distrust of experience. Isn't science about experience? We might think so, and Descartes gets remembered more as a philosopher and metaphysician than as a scientist or physicist. But the paradox of modern science is its dependence on mathematics. Where does mathematics come from? What makes it true? Many mathematicians will still answer that they are "Platonists," but Plato's views certainly have little to do with experience. So Descartes belongs to this puzzling, mathematical side of science, not to the side concerned with experience. Indeed, there is a fault in modern science, the fallacy of the "Sin of Galileo," where scientists think that the mathematics alone explains everything and that if the math "works," then they don't need to worry about anything else. One might expect the mathematician Descartes to suffer from this problem, but in fact he is sensible of the philosophical issues, the metaphysics and epistemology, that are involved in his thought, while others use mathematics as an excuse to ignore or dismiss philosophical problems.

Meditations on First Philosophy is representative of his thought. "First philosophy" simply means what is done first in philosophy. The most important thing about Descartes as a philosopher is that "first philosophy" changed because of what he did. What stood first in philosophy since Aristotle was metaphysics. Thus the first question for philosophy to answer was about what is real. That decided, everything else could be done. With such an arrangement we can say that philosophy functions with Ontological Priority. In the Meditations we find that questions about knowledge come to the fore. If there are problems about what we can know, then we may not even be able to know what is real. But if questions about knowledge must be settled first, then this establishes Epistemological Priority for philosophy. Indeed, this leads to the creation of the Theory of Knowledge, Epistemology, as a separate discipline within philosophy for the first time. Previously, knowledge had been treated as falling in the domain of Aristotle's logical works (called, as a whole, the Organon), especially the Posterior Analytics. Modern philosophy has been driven by questions about knowledge. It begins with two principal traditions, Continental Rationalism and British Empiricism. The Rationalists, including Descartes, believed that reason was the fundamental source of knowledge. The Empiricists believed that experience was. Epistemological priority makes possible what has become a very common phenomenon in modern philosophy:  denying that metaphysics is possible at all, or even that metaphysical questions mean anything. That can happen when epistemology draws the limits of knowledge, or the limits of meaning, so tight that metaphysical statements or questions are no longer allowed. [note]

The most important issues get raised in the first three of the six Meditations. In the first meditation Descartes begins to consider what he can know. He applies the special method that he has conceived (about which he had already written the Discourse on Method), known as "methodical doubt." As applied, methodical doubt has two steps:  1) doubt everything that can be doubted, and 2) don't accept anything as known unless it can be established with absolute certainty. Today Descartes is often faulted for requiring certainty of knowledge. But that was no innovation with him:  ever since Plato and Aristotle, knowledge was taken to imply certainty. Anything without certainty would just be opinion, not knowledge. The disenchantment with certainty today has occurred just because it turned out to be so difficult to justify certainty to the rigor that Descartes required. Logically the two parts of methodical doubt are very similar, but in the Meditations they are procedurally different. Doubt does its job in the first meditation. Descartes wonders what he can really know about a piece of matter like a lump of wax. He wonders if he might actually be dreaming instead of sitting by the fireplace. Ultimately he wonders if the God he has always believed in might actually be a malevolent Demon capable of using his omnipotence to deceive us even about our own thoughts or our own existence. Thus, there is nothing in all his experience and knowledge that Descartes cannot call into doubt. The junk of history, all the things he ever thought he had known, gets swept away.

Ever since the Meditations, Descartes' Deceiving Demon has tended to strike people as a funny or absurd idea. Nevertheless, something far deeper and more significant is going on in the first meditation than we might think. It is a problem about the relation of causality to knowledge. The relation of cause to effect had been of interest since Aristotle. There was something odd about it. Given knowledge of a cause (and of the laws of nature), we usually can predict what the effect will be. Touch the hot stove, and you'll get burned. Step off a roof, and you'll fall. But given the effect, it is much more difficult to reason backwards to the cause. The arson squad shows up to investigate the cause of a fire, but that is not an easy task:  many things could have caused the fire, and it is always possible that they might not be able to figure out at all what the cause was. The problem is that the relation between cause and effect is not symmetrical. Given a cause, there will be one effect. But given an effect, there could have been many causes able to produce the same effect. And even if we can't predict the effect from the cause, we can always wait around to see what it is. But if we can't determine the cause from the effect, time forever conceals it from us. This feature of causality made for some uneasiness in mediaeval Western, and even in Indian, philosophy. Many people tried to argue that the effect was contained in the cause, or the cause in the effect -- and we get a misrepresentation of the problem in the Sherlock Holmes stories. None of that worked, or even made much sense [note].

With Descartes, this uneasiness about causality becomes a terror in relation to knowledge:  for, in perception, what is the relation of the objects of knowledge to our knowledge of them? Cause to effect. Thus what we possess, our perceptions, are the effects of external causes; and in thinking that we know external objects, we are reasoning backwards from effect to cause. Trouble. Why couldn't our perceptions have been caused by something else? Indeed, in ordinary life we know that they can be. There are hallucinations. Hallucinations can be caused by a lot of things:  fever, insanity, sensory deprivation, drugs, trauma, etc. Descartes' Deceiving Demon is more outlandish, but it employs the same principle, and touches the same raw nerve. That raw nerve is now known as the Problem of Knowledge:  How can we have knowledge through perception of external objects? There is no consensus on how to solve this even today. The worst thing is not that there haven't been credible solutions proposed, there have been, but that the solutions should explain why perception is so obvious in ordinary life. Philosophical explanations are usually anything but obvious; but no sensible person, not even Descartes, really doubts that external objects are there. This is why modern philosophy became so centered on questions about knowledge:  it is the Curse of Descartes [note].

In his own discussion, Descartes does not identify his problem as resulting from the asymmetry of cause and effect as applied to knowledge. However, this is what underlies his difficulty, and an explicit statement of the matter does not have long to wait. In 1690, Bishop Pierre-Daniel Huet, a member of the French Academy, wrote that any event can have an infinite number of possible causes. Huet was certainly aware of Descartes' work (as any Frenchman by then would have been), and certainly took his epistemological difficulties seriously. Indeed, Huet's book was a celebration of epistemological difficulties, entitled a Philosophical Treatise on the Weaknesses of the Human Mind. Since the relation between cause and effect is not as such a cognitive relation, C.S. Lewis realized that Determinism, in which the only relations between objects are causal ones, eliminates the ability of Determinists to account for the truth of their own theory. This is a sound insight; and although it was famously disputed by Elizabeth Anscombe, it is really no more than a corollary of the Problem of Knowledge. Determinism can account for knowledge, even of its own theory, no more than can Descartes when faced with the nature of perception.

In the second meditation, Descartes wants to begin building up knowledge from the wreckage of the first meditation. This means starting from nothing. Such an idea of building up knowledge from nothing is called Foundationalism and is one of the mistakes that Descartes makes. Descartes does not and cannot simply start from nothing -- just as Newton admitted that he stood "on the shoulders of giants" to make the progress that he did. Nevertheless, Descartes gets off to a pretty good start:  he decides that he cannot be deceived about his own existence, because if he didn't exist, he wouldn't be around to worry about it. If he didn't exist, he wouldn't be thinking; so if he is thinking, he must exist. This is usually stated in Latin:  Cogito ergo sum, "I think therefore I am." That might be the most famous statement in the history of philosophy, although it does not seem to occur in that form in the Meditations.

But there is more to it than just Descartes' argument for his own existence. Thinking comes first, and for Descartes that is a real priority. The title of the second meditation actually says, "the mind is better known than the body," and the cogito ergo sum makes Descartes believe, not just that he has proven his existence, but that he has proven his existence as a thinking substance, a mind, leaving the body as some foreign thing to worry about later. That does not really follow, but Descartes clearly thinks that it does and consequently doesn't otherwise provide any special separate proof for the existence of the soul. In the end Descartes will believe that there are two fundamental substances in the world, souls and matter. The essence of soul for him, the attribute that makes a soul what it is, is thinking. The essence of matter for him (given to us in the fifth meditation), the attribute that makes matter what it is, is extension, i.e. that matter takes up space. This is known as Cartesian Dualism, that there are two kinds of things. It is something else that people have thought funny or absurd since Descartes. The great difficulty with it was always how souls and their bodies, made of matter, interact or communicate with one another. In Descartes' own physics, forces are transferred by contact; but the soul, which is unextended and so has no surface (only matter has extension), cannot contact the body because there is no surface to press with. The body cannot even hold the soul within it, since the soul has nothing to press upon to carry it along with the body (a problem that was pointed out to me by a student during my lecture). Problems like this occur whenever the body and soul are regarded as fundamentally different kinds of realities.

Today it might seem easy to say that the body and soul communicate by passing energy back and forth, which doesn't require contact, or even proximity; but the presence of real energy in the soul would make it detectable in the laboratory:  any kind of energy produces some heat (towards which all energy migrates as it becomes more random, i.e. as energy obeys the laws of the conservation of energy and of entropy), and heat or the radiation it produces (all heat produces electromagnetic radiation) can be detected. But, usually, a theory of the soul wants it to be some kind of thing that cannot be detected in a laboratory -- in great measure because souls have not been detected in a laboratory.

One of the gravest oversights in Descartes' theory is that he fails to notice that his metaphysics precludes the humble phenomenon of sleep. The Cartesian soul is essentially a "thinking substance," which means that it cannot stop thinking, any more than matter can stop being extended. Perhaps the first philosopher to note that this created a difficulty was John Locke, who observed, "'Tis doubted whether I thought all last night, or no..." Indeed. Thought is not going to happen in the unconsciousness of deep sleep, a condition already recognized in the Upanishads. The unconsciousness of sleep tears a large hole in the confidence of Descartes that "the mind is better known than the body," but there are other things he also overlooks. None of our memories are immediately present to us in consciousness. The memory of Descartes himself contained the large vocabulary of the French and Latin languages alone, thousands of items. Yet we can quickly access our memories and bring them to consciousness. Or sometimes we can only do this with difficulty, and sometimes we forget things altogether. These were already features of Plato's own theory of knowledge, and so Descartes, with his own Classical education, cannot be excused for perhaps being ignorant of it. He just forgot. So whole phenomena of Mind, such as sleep and memory, or of unconscious or preconscious mental contents, are off the table of Cartesian philosophy. This is a fault.

Nevertheless, Descartes' problem of the soul is not just a confusion or a superstition. Our existence really does seem different from the inside than from the outside. From the inside there is consciousness, experience, colors, music, memories, etc. From the outside there is just the brain:  gray goo. How do those two go together? That is the enduring question from Descartes:  The Mind-Body Problem. As with the Problem of Knowledge, there is no consensus on a satisfactory answer. To ignore consciousness, as happens in Behaviorism, or to dismiss consciousness as something that is merely a transient state of the material brain, is a kind of reductionism -- i.e. to say the one thing is just a state or function of another even though they may seem fundamentally different and there may be no good reason why we should regard that one thing as more real and the other less so.

Much of the talk about the Mind-Body Problem in the 20th century has been reductionistic and behavioristic, starting with Gilbert Ryle's Concept of Mind, which said that "mind is to body as kick is to leg." A kick certainly doesn't have much reality apart from a leg, but that really doesn't capture the relationship of consciousness to the body or to the brain. When the leg is kicking, we see the leg. But when the brain is "minding," we don't see the brain, and the body itself is only represented within consciousness. Internally, there is no reason to believe the mind is even in the brain. Aristotle and the Egyptians thought that consciousness was in the heart. In the middle of dreaming or hallucinations, we might not be aware of our bodies at all.

At the end of the second mediation Descartes may reasonably be said to have proven his own existence, but the existence of the body or of any other external objects is left hanging. If nothing further can be proven, then each of us is threatened with the possibility that I am the only thing that exists. This is called solipsism, from Latin solus, "alone" (sole), and ipse, "self." Solipsism is not argued, advocated, or even mentioned by Descartes, but it is associated with him because both he and everyone after him have so much trouble proving that something else does exist.

The third meditation is Descartes' next step in trying to restore the common sense limits of knowledge. Even though he is ultimately aiming to show that external objects and the body exist, he is not able to go at that directly. Instead the third meditation is where Descartes attempts to prove the existence of God. This is surprising, since the existence of objects seems much more obvious than the existence of God; but Descartes, working with his mathematician's frame of mind, thinks that a pure rational proof of something he can't see is better than no proof of something he can.

Descartes' proof for God is not original. It is a kind of argument called the Ontological Argument (named that by Immanuel Kant, 1724-1804). It is called "ontological" because it is based on an idea about the nature of God's existence:  that God is a necessary being, i.e. it is impossible for him not to exist. We and everything else in the universe, on the other hand, are contingent beings; it is possible for us not to exist, and in the past (and possibly in the future) we have indeed not existed. But if God is a necessary being, then there must be something about his nature that necessitates his existence. Reflecting on this, a mediaeval Archbishop of Canterbury, St. Anselm (1033-1109), decided that all we needed to prove the existence of God was the proper definition of God. With such a definition we could understand how God's nature necessitates his existence. The definition Anselm proposed was:  God is that than which no greater can be conceived. The argument then follows:  If we conceive of a non-existing God, we must always ask, "Can something greater than this be conceived?" The answer will clearly be "Yes"; for an existing God would be greater than a non-existing God. Therefore we can only conceive of God as existing; so God exists.

This simple argument has mostly not found general favor. The definitive criticism was given by St. Thomas Aquinas (who otherwise thought that there were many ways to prove the existence of God):  things cannot be "conceived" into existence. Defining a concept is one thing, proving that the thing exists is another. The principle involved is that, "Existence is not a predicate," i.e. existence is not like other attributes or qualities that are included in definitions. Existence is not part of the meaning of anything. Most modern philosophers have agreed with this, but every so often there is an oddball who is captivated by Anselm. Descartes was such an oddball.

Descartes' argument for God is not even as good as Anselm's. It runs something like this:

  1. I have in my mind an idea of perfection.
  2. Degrees of perfection correspond to degrees of reality.
  3. Every idea I have must have been caused by something that is at least as real [in objective reality, what Descartes calls "formal reality"] as what it is that the idea represents [in the subjective reality of my mind, what Descartes confusingly calls "objective reality"].
  4. Therefore, every idea I have must have been caused by something that is at least as perfect as what it is that the idea represents.
  5. Therefore, my idea of perfection must have been caused by the perfect thing.
  6. Therefore, the perfect thing exists.
  7. By definition, the perfect thing is God.
  8. Therefore, God exists.

Here Descartes uses "perfection" instead of Anselm's "greatness." The difficulties with the argument are, first, that the second premise is most questionable. Most Greek philosophers starting with Parmenides would have said that either something exists or it doesn't. "Degrees" of reality is a much later, in fact Neoplatonic, idea. The second problem is that the third premise is convoluted and fishy in the extreme. It means that Descartes is forced into arguing that our idea of infinity must have been caused by an infinite thing, since an infinite thing is more real than us or anything in us. But it seems obvious enough that our idea of infinity is simply the negation of finitude:  the non-finite. Descartes cannot prohibit us from creating new concepts by the simple expedient of negation.

The best that Descartes can ever do in justifying these two premises is argue that he can conceive them "clearly and distinctly" or "by the light of nature." "Clear and distinct ideas" are how Descartes claims something is self-evident, and something is self-evident if we know it to be true just by understanding its meaning. That is very shaky ground in Descartes' system, for we must always be cautious about things that the Deceiving Demon could deceive us into believing. The only guarantee we have that our clear and distinct ideas are in fact true and reliable is that God would not deceive us about them. But then the existence of God is to be proven just in order that we can prove God reliable. Assuming the reliability of clear and distinct ideas so as to prove that God is reliable, so as to prove that clear and distinct ideas are reliable, makes for a logically circular argument:  we assume what we wish to prove.

Descartes' argument for God violates both logic and his own method. In sweeping away the junk of history through methodical doubt, Descartes wasn't supposed to use anything from the past without justifying it. He is already violating that in the second meditation just by using concepts like "substance" and "essence," which are technical philosophical terms that Descartes has not made up himself. In the third meditation Descartes' use of the history of philosophy explodes out of control:  technical terminology ("formal cause," etc.) flies thick and fast, the argument itself is inspired by Anselm, and the whole process is very far from the foundational program of starting from nothing. All by itself, it looks like a good proof of how philosophy cannot start over from nothing.

With the existence of God, presumably, proven, Descartes wraps things up in the sixth meditation:  if God is the perfect thing, then he would not deceive us. That wouldn't be perfect. On the other hand, when it comes to our perceptions, God has set this all up and given us a very strong sense that all these things that we see are there. So, if God is no deceiver, these things really must be there. Therefore, external objects ("corporeal things") exist. Simple enough, but fatally flawed if the argument for the existence of God is itself defective.

In the fourth and fifth meditations Descartes does some tidying up. In the fourth he worries why there can be falsehood if God is reliable. The answer is that if we stuck to our clear and distinct ideas, there would be no falsehood; but our ambitions leap beyond those limits, so falsehood exists and is our own fault. Descartes does come to believe that all our clear and distinct ideas are innate:  they are packed into the soul on its creation, like a box lunch. Most important is the idea of perfection, or the idea of God, itself, which is then rather like God's hallmark on the soul. Once we notice that idea, then life, the universe, and everything falls into place. Thus, Descartes eventually decides that the existence of God is better known to him than his own existence, even though he was certain about the latter first.

The fifth meditation says it is about the "essence" of material things. That is especially interesting since Descartes supposedly doesn't yet know whether material things exist -- we don't get that until the sixth meditation. It's like, even if they don't exist, he knows what they are. That is Descartes the mathematician speaking. Through mathematics, especially geometry, he knows what matter is like -- extended, etc. He even knows that a vacuum is impossible:  extended space is the same thing as material substance. This is the kind of thing that makes Descartes look very foolish as a scientist. But the important point, again, is not that Descartes is unscientific, but that he chose to rely too heavily on the role of mathematics in the nova scientia that Galileo had recently inaugurated. Others, like Francis Bacon (1561-1626), had relied too heavily on the role of observation in explaining the new knowledge; and Bacon wasn't a scientist, or a mathematician, at all. Descartes was. It really would not be until our own time that some understanding would begin to emerge of the interaction and interdependency between theory and observation, mathematics and experience in modern science. Even now the greatest mathematicians (e.g. Kurt Gödel, 1906-1978) tend to be kinds of Platonists at heart.

The death of Descartes is a lesson for us all. The fame of the philosopher led to an invitation from Queen Christina of Sweden (1632-1654, d.1689), the daughter of Gustavus Adolphus and the last of the House of Vasa. Christina and Descartes clashed in two respects. The Queen was more interested in Greek philosophy than in the philosophy of Descartes himself. Presumably, you ask a philosopher to your Court to discuss philosophy, but what this means is a matter of interpretation; and Descartes was not a generalist or a teacher of philosophy in general. He was not even an academic. At the same time, Christina was a morning person, and she wanted to meet Descartes at 5 AM. In Sweden. In the winter. It's dark and cold. But Descartes was not a morning person and was in fact accustomed to working in bed until noon. So this was not a match made in any kind of heaven. In fact, Descartes took sick and died, although whether this was from pneumonia or something else remains a matter of dispute. But when my parents were trying to get me up to go to school, I wish I had known to retort:  "No! No! It killed Descartes!"

Buried, as a Catholic, in a pauper's cemetery in Protestant Sweden, the remains of Descartes eventually were returned to France (1666) and finally buried at the Abbey of Saint-Germain-des-Prés (1819), although his skull had unaccountably gone missing (which also happened to Goya). Meanwhile, the Pope had put his books on the prohibited Index (1663), and then Louis XIV banned the teaching of his philosophy (1671). This confirms the wisdom of Descartes, like Spinoza, in settling in the Netherlands.


René Descartes (1596-1650), Note 1;
The Linguistic Turn

...non verbum e verbo sed sensum exprimere de sensu.

...not word for word but to translate sense for sense.

St. Jerome (c.342-420 AD), translator of the Vulgate Bible (c.404)


A split brain patient [the hemispheres of whose brain have been surgically separated] cannot say the word "cat" when it is flashed on a screen that shows the word only to his right [non-linguistic] hemisphere. Yet he can select a cat image from various animal pictures. The right hemisphere, as such experiments showed, "understood" cat even if it could not produce the word as speech.

Sally Satel, "Two Heads Are Better than One," Review of Tales From Both Sides of the Brain, by Michael S. Gazzaniga, The Wall Street Journal, February 24, 2015


...to know a language is to know how to say the same thing in different words. That is precisely what translators seek to do. Roget's wonderful Thesaurus reminds them that in one language as well as between any two, all words are translations of others.

David Bellos, Is That a Fish in Your Ear? Translation and the Meaning of Everything, Farrar, Straus and Giroux, 2011, p.101, Jerome on p.104.

The change from Ontological to Epistemological Priority is also called the "epistemological turn." In the 20th century, some philosophers began to think that they could go one better than this. If we should worry about knowledge before we worry about reality, perhaps we need to worry about language before we worry about knowledge. After all, we do not have knowledge except through the medium of language, and so perhaps the nature of language imposes limitations on knowledge the way the nature of knowledge may impose limitations on our knowledge of reality. This shift becomes known as the "linguistic turn," and it became characteristic of Anglo-American philosophy from the 1930's on, often distinguished as "analytic philosophy" or "linguistic analysis."

While much of academic philosophy was taken up with this trend, the notion that the nature of language determines the nature of knowledge, and generally rules out the existence of metaphysics at all, was particularly characteristic of Logical Positivism and the work of Ludwig Wittgenstein. While Positivism is generally discredited (in part by Wittgenstein himself), much of its spirit, and that of Wittgenstein, continues in further permutations, particularly through the anti-cognitive and nihilistic schools of deconstruction and "post-modernism." The most embarrassing forms of this, however, tend to be found in English rather than in Philosophy departments, and the analytic tradition, although damaged by this history, often maintains a modicum of good sense, as in John Searle. Nevertheless, the "linguistic turn" itself continues to enjoy general respect in philosophy, although it contributed greatly to the sterility and irrelevance of the discipline in the 20th century.

The "linguistic turn," indeed, suffers from a paradox that the epistemological turn did not. To study language, we must know about it. But if it is possible to know about language, then the study of language cannot be prior to confidence in our knowledge -- otherwise we would beg the question of knowledge. To be sure, a philosopher like Hegel thought that something of the sort already discredited the Epistemological Priority of Descartes and Kant. He would be right if the purpose of epistemology was to prove that knowledge exists. It could not do that without supposing first of all that knowledge of whether or not knowledge exists would be possible. Therefore, to Hegel, there was nothing wrong with doing metaphysics first. This is actually an important issue, which is why epistemology cannot be expected to prove that knowledge exists -- Leonard Nelson's point in "The Impossibility of the 'Theory of Knowledge'." With that misunderstanding cleared up, however, there is then no paradox in determining the nature of knowledge by a self-referential study.

When it comes to language, however, we face a more severe "chicken or egg" problem. Which comes first? Does language logically precede knowledge? Or does knowledge logically precede language? If perception is knowledge, and if animals have perception, then clearly knowledge precedes language, since no animal has anything like human language. Did Helen Keller have knowledge before she learned that the manual alphabet spelling of "water" corresponded to the wet stuff coming out of the faucet? We might have asked her while she was still available, but in principle she needed to have some kind of cognition of that wet stuff in order to make the connection between it and the word that was being spelled in her hand. In fact, there is no doubt not only that animals are cognitive beings, but that the fundaments of human perception and cognition are inherited from animal forebears. Language and rationality were added in the course of human evolution. Sometimes, because of brain damage, people are returned as aphasics to some level of pre-linguistic cognition; or, historically, deaf people have often been raised without the benefit of language, when there were simply no other deaf people in their perhaps rural and isolated families and communities. While the aphasics endure the disruption and frustration of losing language, the deaf may never have known any different. They can adapt to such a life, however impoverished it must seem in comparison to linguistic existence.

Further evidence comes, ironically, from the study of language itself. The early "linguistic" philosophers, especially the Logical Positivists, not only did not know much about languages, but all but despised natural languages, owing to a host of preconceptions and misconceptions. Thus, when I signed up for a Philosophy of Language class at UCLA in 1970, the professor announced on the first day that the "language" we would be studying was going to be that of mathematics. I might have asked him how one says "Where is the bathroom?" in the "language" of mathematics. In disgust, I never returned to the class. The Positivists thought that mathematics or symbolic logic was real language, while natural languages were some sort of irrational mess that unreformed humanity was temporarily forced to use. In time, they thought, a logically perfect artificial language would be created by logicians to replace the natural ones. We even find notions like this floating around in the science fiction stories of Robert Heinlein.

When that sort of thing got started, the study of languages barely existed as a scientific discipline -- that of linguistics. But there would be a revolution in linguistics, thanks to Noam Chomsky and others, which revealed the power, elegance, beauty, simplicity, and efficiency of natural languages. It also revealed that, if symbolic logic or mathematics -- or computer languages -- could be called "languages" at all, it would only be because they involve a fragment of the structures and uses that are found in natural languages. The Positivists were admiring skeletons, not living languages; and, like the skeletons, philosophy was all but dead in their hands.

In natural languages, various surface structures reflect the processed output of the semantic input of thought. Different languages do this in different ways, and the rules and transformations that make for different surface expressions are studied in linguistics. The "deep structure" -- once a popular phrase in general academia -- was the semantic content, the starting point of meaning, which fed content into the rules that produced the evident surface structures. A basic part of those rules, Chomsky believed, is innate -- the "universal grammar" -- which in turn structures the particular rules of individual languages, and which vindicated the epistemology of Continental Rationalists like Descartes. Some linguists, such as Derek Bickerton [Language and Species, University of Chicago Press, 1990], believe that the universal grammar provides a specific default grammar in Creole languages, which grow out of the grammatical confusion of Pidgin languages. Pidgins are languages that form among adult speakers who share a vocabulary but have difficulty learning the grammar of a second language -- especially in an environment with a number of languages, whose grammars all differ. Their children grow up speaking Creoles whose grammars contain elements derived from none of the languages spoken by their parents.
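
To make the deep-structure idea concrete, a minimal sketch in Python may help -- not Chomsky's actual formalism, just a toy illustration with invented rules -- in which a single crude semantic representation is fed through different language-specific surface rules:

    # One "deep structure" (a crude semantic representation), fed through
    # different surface rules. The rules below are invented simplifications
    # for illustration only, not real transformational grammar.

    deep = {"agent": "dog", "action_past": "bit", "patient": "man"}

    def english_active(ds):
        # English surface rule:  Subject-Verb-Object word order
        return f"the {ds['agent']} {ds['action_past']} the {ds['patient']}"

    def english_passive(ds):
        # A transformation:  the patient is promoted to subject position
        # (the participle "bitten" is hard-coded, as befits a toy)
        return f"the {ds['patient']} was bitten by the {ds['agent']}"

    def sov_gloss(ds):
        # A Japanese-like surface rule:  Subject-Object-Verb order with
        # case particles, glossed in English words for readability
        return f"{ds['agent']}-ga {ds['patient']}-o {ds['action_past']}"

    for rule in (english_active, english_passive, sov_gloss):
        print(rule(deep))

This prints "the dog bit the man," "the man was bitten by the dog," and "dog-ga man-o bit" -- three surface structures, one meaning, which is the point of the distinction.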

An example of what goes on in language, in contrast to thought, would be the inflection of verb systems for time. This may involve tense distinctions -- past, present, future -- aspect distinctions -- perfect or imperfect -- both, or neither. How tense and aspect work is discussed elsewhere. Thus, English and Greek have rather full inflections for both tense and aspect; Russian, Arabic, and Japanese inflect only for aspect; and a language like Malay has no such inflections. What does this mean? Do Russian, Arabic, and Japanese speakers not care about past or future? Do Malay speakers have no sense of time at all? Of course not -- although people have argued that speakers of languages without temporal inflection therefore don't know or care about time. Instead, grammatical inflections seem to be the result of grammaticalization, by which independent words that express the relevant semantic content (e.g. "tomorrow") become affixed to others and then lose their independence. A striking example of this is how the word pas in French, meaning "step" (Latin passus), has come to mean "not," and is actually replacing the original negative, ne, with which it has hitherto been used in tandem (Je ne veux pas, "I do not want to"; pas du tout, "not at all"; pourquoi pas?, "Why not?"). Nevertheless, pas also retains its original meaning, as we see in pas de deux, "dance for two," faux pas, "false step, stumble, mistake," and even pas de l'oie, "goose step."

Thus, language does not impose the structure on the world (as linguistic relativists would think) but it absorbs the structure from the words that refer to it. Also, such grammatical structures tend to erode away with time, so that the complex case and gender structure of the nouns in Old English is almost completely gone in Modern English. While such eroding structures have sometimes seemed to people to be the result of decadence, with people becoming stupider, nothing of the sort is happening.

None of this does much good for the "linguistic turn" in philosophy. The study of languages in linguistics is a science, and then the study of science is philosophy of science. Philosophy of science is dependent on general epistemology. All presuppose the existence of knowledge and the relative transparency of language to its use in regard to its objects and to truth. Indeed, the logical extreme and outcome that we see of the "linguistic turn" is of language as an opaque medium without external or objective reference or truth -- as is indeed the case with Wittgenstein's own "language game" theory and the "post-modern" view of language and reality as "socially constructed" artifacts of power relationships in society. This probably is not what the Logical Positivists, who, as Positivists, had respect for nothing apart from science, had in mind.

The truth is that the program of the Positivists was to package their own metaphysical and epistemological prejudices as part of the nature of language. This allowed them to express those prejudices, against metaphysics, religion, or ethics, as accusations that people were "misusing language." This was then a way of avoiding actual ad rem argument in matters of substance. If metaphysics is "literal nonsense," we can just ignore and forget it when others raise metaphysical questions. The convenience of such an approach is not overlooked by "post-modernism," where we find the apotheosis of the ad hominem argument in accusations that someone's racial, class, or gender "subject position" renders any arguments they may have vacuous. Argument need not be taken seriously when dealing with class (or race, or gender) enemies. Incredibly, this sort of thing is often the actual level of discourse in modern academic venues. In those terms, the Positivists and other [pseudo-]"linguistic" philosophers have a lot of vicious irrationality to answer for.

Linguistic Relativism

The Whorfian Hypothesis

Return to Text

René Descartes (1596-1650), Note 2

In Buddhism we get some interesting constructions of cause and effect. The ordinary course of things is that we go "from cause, 因, to effect, 果," which is expressed as 従因向果, or jûin kôka in Japanese. In religious terms this is understood as proceeding from 理, abstract "principle," theory, or possibility, to 事, concrete effects, practice, or results.

But this can be reversed, so that practice comes first and the understanding or principle comes afterwards. Thus, a devotionalistic practice, like chanting the name of the Buddha Amitâbha in Pure Land Buddhism, or the title of the Lotus Sutra in Nichiren Buddhism, can be undertaken without prior study or knowledge, from which such things will consequently follow. This is called 従果向因, or moving "from effect to cause," jûka kôin in Japanese.

Since Buddhism has rather clear ideas about causality, and Buddhist metaphysics is heavily dependent on the causal principle, the reversal of cause and effect tends to mean simultaneity rather than a true temporal reversal of cause and effect. The strategy is to valorize ritual and devotionalistic practices that otherwise might be belittled, and that are all but dismissed in modern, Protestantizing forms of Buddhism -- although these modern developments long postdate the formulation of this approach.

Return to Text

René Descartes (1596-1650), Note 3;
Solipsism and Radical Skepticism

While imagining the Deceiving Demon of Descartes is great fun, it tends to trivialize the issue at hand. The Deceiving Demon distances and protects us from the radical but serious issue involved in the Problem of Knowledge. Thus, while it seems unlikely that there is a Deceiving Demon, the possibility of hallucinations is something familiar, sometimes all too familiar, from ordinary life. It may not be that serious a problem, for me, that I might be hallucinating most of my life, but we know that this can happen; and there are poor souls who are lost in mental illness and are plagued by fears and circumstances that are no more than the result of their hallucinations. The movie A Beautiful Mind [2001], about the mathematician John Nash, overstated the degree of his illness -- he heard voices but did not have a problem hallucinating a non-existent roommate, as the movie has it -- but this was no more than an exaggeration, and we know that extensive and durable visual hallucinations are indeed quite possible. Even if we are free of mental illness, such hallucinations can be induced by various drugs. And, just as Descartes worried that his life might be a dream, we know that dream images can intrude into waking life at the boundaries of consciousness, i.e. in hypnagogic (while falling asleep) or hypnopompic (while waking) states. Mental illness or drugs are not necessary for these phenomena. While waking up, I have sometimes heard sounds or voices that, it turns out, cannot have been made within earshot.

The principle that causes are only sufficient to their effects, and that consequently a given effect can conceivably, and often actually, have multiple causes, means that radical possibilities arise. While Descartes is famous for the Deceiving Demon, the notion that may more often be associated with him is "Solipsism," the possibility -- it is hard to call it a theory or a doctrine -- that I am the only thing that exists, from Latin solus, "alone," and ipse, "self." Thus, all of my perceptions, of the world and of my own body, are self-generated. I have hallucinated a world and a life simply for myself. Why I would have done that to myself is a good question. Perhaps, in my solitude, it is a diversion or an amusement that I have produced in the absence of anything more interesting to do. Eventually the illusion will break down, or self-terminate, and I'll be back to my self-contained, lonely existence.

Modern academic philosophers, whose (Analytic) education probably did not have all that much sympathy for René Descartes (disparaged by Gilbert Ryle for his "ghost in the machine" theory), like to formulate the Problem of Knowledge in a way more along the lines of science fiction. Thus, we are globally deceived about the nature of the world because our brains are actually in vats (BIV, for short) in the laboratory of some mad scientist, who is able to stimulate the brains in ways that simulate the perception of external reality. While this scenario generally does not escape the venues of philosophical debate, a version of it can be found in an excellent and powerful movie, The Matrix [1999]. There, it is not a matter of our brain in a vat, but of our whole body in a vat, with the brain wired into a computer program (the "Matrix") that simulates an entire world and everything in it.

Whether the threat is the Deceiving Demon, Solipsism, the Matrix, Alien mind control, CIA mind control, or simply schizophrenic hallucination, this poses the challenge of Radical Skepticism:  that we lack real and verifiable knowledge of even the most basic facts of our existence. This remains, to say the least, unsettling for epistemology. But it also comes down to more particular and immediate concerns about knowledge. If we defeat Radical Skepticism with the claim that certain propositions about empirical perceptions simply cannot be doubted, or cannot be doubted to a degree sufficient to motivate the Problem of Knowledge -- an approach that may stretch from John Locke to Ludwig Wittgenstein -- we cannot forget that our perceptual knowledge is in fact fallible, not as a matter of philosophical Skepticism but as a practical matter of ordinary life. Did I lock the door? Turn off the lights? Turn off the stove? If I am a bird watcher, how many birds am I seeing in the tree? Some of them blend in, after all. That may be a trivial issue, but whether a burner on the stove, accidentally left on and unlit, is venting natural gas into the house may be a serious issue indeed. People are killed in gas explosions every year. If you report smelling gas to the gas company, usually someone will show up promptly, very promptly, to check it out.

In each of these cases, the issue is not the Problem of First Principles, in which we trace back a series of premises, but something a lot more immediate. The only remedy, practical and theoretical, for our uncertainty about the stove is to go check the stove. Reasonings a priori will not cut it. Of course, we might then suffer from a form of Obsessive Compulsive Disorder (OCD), in which we fail to credit the evidence of our own senses. After checking the stove, we may come to wonder if we had really checked the stove. Or, if we return to check that we have locked the door, and in doing so unlock it, we may then begin to wonder if we remembered to lock it again when leaving.

These ordinary and sometimes crazy-making situations are revealing. The only remedy, whatever our confidence, for the questions of whether the stove is off, the door is locked, or the windows are closed is to go check, or to get someone else to do so (if they are reliable). Now, our inability to credit the evidence of our senses, or our memories, is not a failure of the perception as such, but of our recognition and recollection of it. This does call into question the reliability of perception, but not in the radical way threatened by the Deceiving Demon or by hallucinations. Nevertheless, it does mean that there is a practical procedure that we commonly use to deal with problems in our empirical knowledge, whatever the source of uncertainty:  check it out.

Furthermore, this is not just a matter of looking again. When we do look again, we are trying to verify our information, and we also wish to confirm our confidence in the existence of the world and the general reliability of our perceptual knowledge. But verification is not the only rational tool that we have. The more elaborate version of "checking" on something is to check on things that must be the case if the original proposition is true, i.e. we look for facts that are logical consequences of the original question, especially if it becomes difficult or impossible to check the source of the matter directly. Thus, if we notice that the gas meter, outside the house, is not running, then a burner on our gas stove cannot be on. The state of the meter falsifies the proposition that the gas is running, on the stove or anywhere in the house.
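
In logical form, this kind of check is simply modus tollens. Writing G for "gas is flowing somewhere in the house" and M for "the meter is running" -- labels chosen here only for illustration -- the inference is:

    G \rightarrow M, \quad \neg M \quad \vdash \quad \neg G

That is:  if gas were flowing, the meter would be running; the meter is not running; therefore no gas is flowing. Note the asymmetry:  a running meter would verify nothing about the stove in particular (something else might be drawing gas), while a stopped meter falsifies the proposition decisively.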

This brings us to the insight of Karl Popper that scientific method actually uses falsification, and not verification, for scientific theories. Since this avoided Hume's critique of induction, that it cannot verify anything, Popper nullified the persistent Problem of Induction, which has bedeviled philosophy of science for all of its existence. It is relevant to the question here because it means that rationality is not just a matter of looking for perceptions that ground empirical propositions -- verifying empirical facts that cannot be seriously doubted -- to provide a foundation for knowledge, but that we can also approach knowledge with a certain open-ended procedure that, in principle, need never leave us at a loss. In fact, in ordinary life we rarely treat perceptual knowledge as something that cannot be doubted. When reasonable doubts, or even OCD, arise, we generally know what to do about it. We cannot a priori rule out this procedure even for Solipsism or Radical Skepticism.

Thus, if there is a Deceiving Demon, or if Solipsism is the case, these would have particular logical consequences. If the world, including my body, doesn't exist, then as a matter of fact I don't need to eat, I don't need to sleep, I don't need to go to work, I don't need to worry about getting killed or injured, I don't need to pay any attention to other people, much less treat them with any consideration, as a matter of morals, manners, or law, and in fact I could actually do whatever I want, without taking seriously anything in this great hallucination (an effect reproduced in Bill Murray's movie Groundhog Day, 1993). Especially with Solipsism, I might want to do something that would necessarily break the illusion. Getting myself killed would probably accomplish that (although for Bill Murray it does not).

This sounds like a plan. Probably not to get myself killed, since I must have some worry, not that the world doesn't exist, but that it does. An easy and obvious thing to do would be trying to stop eating, or to test the world in many small ways, like tossing a ball toward the ceiling, or checking, as a character does in one science fiction story, that the weather looks the same out different windows of the house. I expect, however, that trying to stop eating is going to mean becoming hungry. And then very hungry. The question then becomes how far one is willing to push it, and before long we approach the circumstance, after all, that our life might be threatened. Even if we are doing this to ourselves, or the Deceiving Demon is causing us the pain, the experiment cannot be maintained, as a practical matter -- the Deceiving Demon, or the mad scientist in his lab, is torturing us into complying with the hallucinatory world. It must mean a lot to him. But Hume himself denied that his Skepticism would make any difference in the "reasonings of common life":

Nor need we fear that this [Sceptical] philosophy, while it endeavours to limit our enquiries to common life, should ever undermine the reasonings of common life, and carry its doubts so far as to destroy all action, as well as speculation. Nature will always maintain her rights, and prevail in the end over any abstract reasoning whatsoever. [An Enquiry Concerning Human Understanding, Selby-Bigge edition, Oxford, 1902, 1972, p.41]

Here, if we resolve that certainty is not possible in empirical knowledge, nevertheless we should acknowledge that the constant falsification of philosophical theses does give us some ground for confidence, not just in ordinary reasoning, but even against extreme theses in philosophy. The more often we must submit to common sense in ordinary life, and the less that theories of Solipsism or Radical Skepticism offer anything relevant to what we constantly experience and must deal with, the less likely it begins to seem that the world is going to dissolve as an illusion. And indeed, the events of everyday life are a veritable blizzard of falsifying evidence; and this circumstance is not lost on common sense, or even on philosophers. Solipsism and Radical Skepticism are not rational attitudes, and no sane person really acts as though they are, not even academic philosophers. What helps is to recognize why they are not rational, and the answer that is rarely invoked is their constant falsification and disconfirmation. When one can sit through an entire epistemology conference of academic philosophers and never hear the word "falsification" even once, one may appreciate that they are missing something that ought to be obvious or, if not obvious, at least available. But, despite being, usually, Analytic philosophers, who are purportedly or presumably disciples of Hume and his annihilation of epistemic certainty (which we have seen disjunctively contrasted by Jacob Bronowski with knowledge itself), the neglect of falsification would seem to indicate a heartfelt yearning for certainty and verification after all. Falsification leaves the epistemologist unfulfilled, as F.A. Hayek noted about Hume's moral epistemology:

After all, it was over 250 years ago that Hume observed that "the rules of morality are not the conclusion of our reason." Yet Hume's claim has not sufficed to deter most modern rationalists from continuing to believe -- curiously enough often quoting Hume in their support -- that something not derived from reason must be either nonsense or a matter for arbitrary preference, and, accordingly, to continue to demand rational justifications. [The Fatal Conceit, The Errors of Socialism, University of Chicago Press, 1988, 1991, p.66]

Despite the inevitability of uncertainty, and the volume of falsifying experience against Solipsism and Radical Skepticism, we remain, to be sure, curious about the Problem of Knowledge. As Hume says:

My practice, you say, refutes my doubts. But you mistake the purport of my question. As an agent, I am quite satisfied in the point; but as a philosopher, who has some share of curiosity, I will not say scepticism, I want to learn the foundation of this inference. [op.cit., p.38]

It is thus of some interest to see how Solipsism or Radical Skepticism has been treated, or would have been treated, in the history of philosophy. The most striking answer to the Deceiving Demon may be the actual acceptance of such a being by George Berkeley (1685-1753). Berkeley's God causes all our perceptions, including those of our own bodies. Matter does not exist. This was conceived as a great blow against atheism. Descartes would certainly consider it all a deception; but Berkeley likely would retort that materialism is not a teaching of religion, and so his God could only be mistaken for a deceiver by those who are already deceived about the nature of reality and have too easily accepted the metaphysical doctrine of materialism (which even perhaps most academic philosophers fail to recognize as a metaphysical doctrine).

As it happens, Berkeley's "idealism" (that only representations, Locke's "ideas," exist) is only a special case of a larger and older doctrine. That would be Occasionalism, which is first found in Islâmic philosophy and later turns up, and gets this name, in a successor of Descartes, Nicolas Malebranche (1638-1715). In Occasionalism, although the external world may indeed exist, absolutely everything that happens is directly caused by God. So, as in Berkeley, God does cause all our perceptions. It is just that the objects that our perceptions appear to represent do actually exist independently. In Islâmic terms, the idea that they might not exist would not impeach God as a Deceiver, but it would impeach him as a Creator, since the Qur'ân clearly states that God makes things from nothing just by saying, , kun, "Be," , fayakûnu, "and it is" [al-Qur'ân, Sûrah 2, Verse 117, Sûrah 36, Verse 82].

Nevertheless, a variation on Occasionalism preserves the reality of the external world without allowing that it exists independently. This would be what we find in Baruch Spinoza, where God is the only thing that exists and the reality of the external world is preserved by having it be part of God. This is also an idea found elsewhere, since such a doctrine can be identified earlier in the Vishishtadvaita Vedânta of Ramanuja (1017-1137 AD). It is also a doctrine that can be called "pantheism," that God is everywhere, although there is an ambiguity there:  God could be "in" everything, or God could actually be everything. Spinoza and Ramanuja have it that God is everything, and that everything, including us, is some part of God. The result we can even see as a doctrine where the Deceiving Demon meets Solipsism, since there is only one thing that exists, and we happen to share in the existence of this one thing. Why Solipsism does not mean that we can control the world is then due to the circumstance that the One Being is more than we are and that the events of the world are due to its own larger nature. What we think of as our own independent existence is epiphenomenal and, in Spinoza, ephemeral. In Ramanuja, on the other hand, we exist individually because of our karma, and we have a more conventionally religious relationship to the one God, who is not an impersonal thing, as in Spinoza (rejecting the personal God of Judaism), but the personal God of devotionalistic Hinduism. None of these variations is allowed in Islâm, since identifying our existence with God, in any way, was rejected as heterodox. The Sufi al-Ḥallâj was executed on those grounds.

Doctrines like those of Spinoza or Ramanuja would seem to co-opt and domesticate both the Deceiving Demon and Solipsism. What they also both have in common is a kind of what-you-see-is-what-you-get epistemology, where, despite the role of causality in perception, we are nevertheless identical to the existence that is present in our perception. If our existence cannot be doubted, then the existence of empirical objects cannot be doubted. As it happens, this shares a feature of Kant's argument against Solipsism. First of all, Kant rejects the assertion of Descartes that "the mind is better known than the body." This goes back to Hume's critique that introspection does not reveal a substantial, durable, and identifiable self. Instead, we have a bundle of images, sensations, and feelings in the mind. This was independently the conclusion of Buddhist metaphysics, where there is no substantial self, and personal identity is a matter of a similar bundle of contents (the skandhas). Furthermore, Kant identifies the objects of introspection as secondary, or reflexes, to the primary representation of external objects. The mind itself generates the representation of empirical reality, which then exists as the world of phenomenal objects, whose spontaneous contents, while dependent on the mind, nevertheless have an immediate external, rather than internal, reference. So, structurally, we know the mind because first of all we know phenomenal objects.

Between Hume (or Buddhism) and Kant, Solipsism loses its epistemic force. The independent existence of the self cannot be established independently of the existence of external objects. This also undercuts Radical Skepticism. If external objects are available to us by inspection, and we are directly acquainted with them, the Deceiving Demon is not deceiving us about their existence. The argument of Descartes himself that the Demon cannot deceive us about our own existence (the cogito ergo sum) is transformed by Hume and Kant into an argument that he cannot deceive us about the existence of external things. To be sure, Hume believed that, "These ultimate springs and principles [of Nature] are totally shut up from human curiosity and enquiry" [op.cit., p.30]; and Kant famously limited our knowledge to phenomena, excluding the real character of things in themselves. So both of them limited human reason and human knowledge, leaving it undetermined whether classical metaphysical doctrines like those of Spinoza or Ramanuja are true. Or whether multiple substances exist independently, as denied by Buddhism or affirmed by the Dvaita Vedânta of Madhva (1238-1317 AD).

But there is no certainty here. Not only might hallucination deceive us about what we are seeing, but we know that the sources of many hallucinations, such as mental illness, can affect our own reason and make our own judgment defective. At least with these, we know that experience will not always cooperate and that deficiencies in reason can lead to problems in living our lives, with surprised reactions from other people, clashes with the authorities, or a dangerous urge to vote for Democrats. Once we recognize the falsifying evidence of experience (e.g. the lies and deceptions of Obama), we might even be recovering our reason. Thus, while the Deceiving Demon could have given us something like the world of The Matrix, hallucinations, in reality, give us the sad world of crazy people, whether in institutions or on the street. Unless they befall a person deficient in reason, these situations are full of corrective evidence; and for the epistemologist who is not confabulating some good reason for his presence, lying on the subway platform or in the padded cell, Solipsism and Radical Skepticism are not really going to suggest themselves.

The reflections in this note were occasioned by sitting through the Rutgers Epistemology Conference, May 8-9, 2015.

Return to Text


Copyright (c) 2015 Kelley L. Ross, Ph.D. All Rights Reserved