The Foundations of Value, Part I

Logical Issues:  Justification (quid facti),
First Principles, and Socratic Method

after Plato, Aristotle, Hume, Kant, Fries, & Nelson

Οὐκοῦν ἐπισκοπῶμεν αὖ τοῦτο, ὦ Εὐθυφρον, εἰ καλῶς λέγεται, ἢ ἐῶμεν καὶ οὕτω ἡμῶν τε αὐτῶν ἀποδεχώμεθα καὶ τῶν ἄλλων, ἐὰν μόνον φῇ τίς τι ἔχειν οὕτω, ξυγχωροῦντες ἔχειν; ἢ σκεπτέον, τί λέγει ὁ λέγων;

Then let us again examine that, Euthyphro, if it is a sound statement [εἰ καλῶς λέγεται -- if said truly], or do we let it pass, and if one of us, or someone else, merely says that something is so, do we accept that it is so? Or should we examine [ἢ σκεπτέον] what the speaker means [τί λέγει ὁ λέγων -- what the speaker says]?

Plato, Euthyphro 9e, G.M.A. Grube translation, Plato, Five Dialogues, Euthyphro, Apology, Crito, Meno, Phaedo, Hackett Publishing Company, 1981; Plato, translated by Harold North Fowler, Euthyphro, Apology, Crito, Phaedo, Phaedrus, Loeb Classical Library, Harvard University Press, 1914, pp.34-35

If you wish to justify your beliefs, you give reasons for them. You say that you believe proposition Z because of reason Y. Willingness to give reasons all by itself may be called rationality -- as long as the reasons are relevant [note].

You may also wish, however, to justify proposition(s) Y. So you cite proposition(s) X as the reason for believing proposition(s) Y. Aristotle noticed two things about this procedure. First, we must be able to describe how Y provides a reason for Z and X provides a reason for Y. Logic is just the description of how X implies Y and Z, or that Y and Z are logical consequences of X. Logic can prove Y and Z on the basis of X, but it cannot prove X without further reasons (premises), e.g. propositions U, V, or W. Logic alone can only secure the truth of its conclusions if the premises are true. Thus, it remains an inconvenient but unquestionable truth of logic that one can be perfectly logical and yet, with false premises, never say anything true.
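
The point can be made concrete with a small illustration. The following sketch, in Python, is of course nothing in Aristotle and is purely mine; it only displays, by brute truth table, that the validity of an inference is a matter of its form, while the truth of the premises is another matter entirely:

    from itertools import product

    def implies(p, q):
        # the material conditional: false only when p is true and q is false
        return (not p) or q

    # Modus ponens is valid: in every row of the truth table where the premises
    # X and (X implies Y) are both true, the conclusion Y is true as well.
    assert all(y for x, y in product([True, False], repeat=2) if x and implies(x, y))

    # But validity says nothing about the premises themselves.  If X is false,
    # the conditional (X implies Y) holds no matter what Y is, so a perfectly
    # valid argument may proceed from falsehood with no guarantee about truth.
    print(implies(False, True), implies(False, False))   # True True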

If we continue to give reasons for reasons, from Z to Y, to X, to W, to V, to U, this is called the Regress of Reasons. Aristotle's second point, then, was just that the regress of reasons cannot be an infinite regress. If there is no end to our reasons for reasons, then nothing would ever be proven. We would just get tired of giving reasons, with nothing established any more securely than when we started. If there is to be no infinite regress, Aristotle realized, there must be propositions that do not need, for whatever reason, to be proven. Such propositions he called the first principles (ἀρχαί, archai, principia) of demonstration. Since principium -- from princeps, which is from primus and capio, "to take" -- already means "first" (primus), "first principles" is a redundant expression. This has happened because "principle" has come to mean a rule, perhaps a very basic rule, but not necessarily a first principle in the logical sense. Such a drift of meaning already had occurred in Mediaeval Latin, so that we get principia expanded into principia prima.

How we would know first principles to be true, how we can verify or justify them, if they cannot be proven -- the modern terminology is that they must be "non-inferentially" justified -- is the Problem of First Principles. Aristotle decided that first principles are self-evident, which means that we can know intuitively that they are true just by understanding them (by νοῦς, noûs, "mind"). This was widely believed to be the case for many centuries, especially since it seemed to fit perfectly the best example of a deductive system based on first principles: geometry, where all the theorems are ultimately derived from a small set of axioms. In other areas, however, self-evident first principles didn't seem to work very well. Ultimately, Hume and Kant decided that most first principles are not self-evident. Hume thought this meant that we couldn't really even know them to be true, although we had to assume that they were. Kant thought that we could know them to be true, even though they couldn't be proven and were not self-evident.

Kant said that such propositions were "synthetic a priori." "Synthetic" means that they can be denied without contradiction, i.e. they do not contradict themselves or anything else that is true. Now that is called "axiomatic independence." "A priori" means that they are known to be true independent of experience. Although there is not much agreement on whether Kant explained this successfully, Leonard Nelson (1882-1927), following a suggestion in Kant, later thought that there were really two questions involved: 1) the actual justification, what Kant called the quid juris, or matter of right, which we can set aside for the moment; and 2) the question of whether the first principles are simply there, i.e. whether we use them -- something never doubted by Hume and called the quid facti, or matter of fact, by Kant. Nelson understood that this could lead to a theory of knowledge much like Plato's. In recent philosophy, virtual nihilists (perhaps "non-cognitivists" is the polite expression) like Richard Rorty solve the Problem of First Principles by saying that we simply get tired of giving reasons. In a way this is true, or, more like it, we simply run out of ideas (rather quickly), but this does not solve any of the logical or epistemological issues of justification.

Aristotle had hoped that first principles could be discovered through induction. An inductive inference is the generalization that results from counting individual objects or events. The Problem of Induction is the realization that we can never know how many individuals or events we need to count before we are justified in making the generalization. Francis Bacon believed that empirical science uses induction, and his views influenced everyone's view of science until the twentieth century. But Bacon couldn't answer the objection that induction never proves anything. Nor could anybody else, and Hume twisted the knife in the wound.

In The Logic of Scientific Discovery Karl Popper shattered the conundra of induction and the verification of first principles by just dismissing them. Induction never had proven anything. Even Aristotle understood that, but it finally wasn't until Hume that the point was really driven home -- although even modern partisans of Hume don't seem to understand the result. Aristotle's problem of verifying first principles was resolved by Popper with the observation that deductive arguments can go in two directions: ponendo ponens, "affirming by affirming," or modus ponens, "the affirming mode":  if P implies Q, and P is true, then Q is true. This held out the mirage of verification, since all we have to do to secure things is somehow get P. But a deductive argument can also use tollendo tollens, "denying by denying," or modus tollens, "the denying mode":  if P implies Q, and Q is not true, then P is not true. This means that premises can be falsified even if they cannot be verified. If we cannot somehow get P, we might simply be able to ascertain that Q is wrong, especially when P is a general statement and Q its application to some individual.
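
The asymmetry can be displayed mechanically. The sketch below is mine, purely for illustration, and nothing from Popper; it checks both argument forms against their truth tables and then applies modus tollens to the stock example of a universal theory confronted by a single contrary observation:

    # P stands for a general theory, Q for one of its observable consequences.
    def implies(p, q):
        return (not p) or q

    rows = [(p, q) for p in (True, False) for q in (True, False)]

    # modus ponens: whenever P and (P implies Q) are true, Q is true
    assert all(q for p, q in rows if p and implies(p, q))

    # modus tollens: whenever (P implies Q) is true and Q is false, P is false
    assert all(not p for p, q in rows if implies(p, q) and not q)

    # "All swans are white" implies that every observed swan is white.  One
    # black swan falsifies the theory, though no number of white swans could
    # ever have verified it.
    observations = ["white", "white", "black"]
    consequence_holds = all(colour == "white" for colour in observations)
    print("theory falsified:", not consequence_holds)   # theory falsified: True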

Popper says that this is a form of Kantianism, and in fact it is rather like what Immanuel Kant says in the Critique of Pure Reason at A646-647 under "The Regulative Employment of the Ideas of Pure Reason." Popper also says that it is conformable to the Friesian variety of Kantianism, since Jakob Fries (1773-1843) and Nelson, returning to a consideration of the original problem in Aristotle, stoutly maintained that first principles cannot be logically proven or inferred. This explains many peculiarities in the history of science and is, indeed, the "logic of scientific discovery," although people like Thomas Kuhn, in The Structure of Scientific Revolutions, have muddied the waters with other issues (some of them legitimate, some not).

In relation to Popper's understanding of the logic of scientific discovery, the point of interest is how Socratic Method uses falsification. The form of Socratic discourse is that the interlocutor cites belief X (e.g. Euthyphro, that the pious is what is loved by the gods, or Meletus, one of the accusers of Socrates in the Apology, that Socrates is an atheist). Socrates then asks if the interlocutor also happens to believe Y (e.g. Euthyphro, that the gods fight among themselves). With assent, Socrates then leads the interlocutor through to agreement that Y implies not-X (e.g. the pious is both loved and hated by the gods). The interlocutor then must decide whether he prefers X or Y. That doesn't verify or prove anything, but one or the other is falsified: just as in science, a falsifying observation may itself be rejected instead of the theory it discredits. Although Y often has more prima facie credibility, the heat of the argument is liable to lead the interlocutor into rejecting Y for the sake of maintaining his argument for X (though, with Euthyphro, Socrates does not agree with either premise). Socrates then, of course, finds belief Z, which also implies not-X. After enough of that, X starts looking pretty bad; and the bystanders and readers, at least, are in no doubt about the outcome of the examination.
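
Reduced to its bare logical skeleton, the Euthyphro exchange comes out something like the toy encoding below; this is my own illustration, not a claim about how Plato ought to be formalized, and the point is only that X and Y cannot both be kept once their joint consequence is noticed:

    # X: "the pious is what the gods love"
    # Y: "the gods fight among themselves, and so love and hate the same things"

    def consistent(holds_x, holds_y):
        # the agreed consequence of Y: the pious is both loved and hated,
        # which contradicts X
        not_x_follows = holds_y
        return not (holds_x and not_x_follows)

    print(consistent(True, True))    # False: X and Y cannot both be kept
    print(consistent(True, False))   # True:  rejecting Y saves X, for now
    print(consistent(False, True))   # True:  rejecting X saves Y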

The logical structure evident in Socratic Method was already being used in the form of Indirect Proof or the reductio ad absurdum argument in mathematics and elsewhere. In this, the contradiction of what is to be proven is assumed. It is then shown that this implies a contradiction with other assumptions or definitions in the matter. Logically, according to the Law of Clavius [(¬P → P) → P], this establishes the truth of what is to be proven. Classic examples of Indirect Proof are the arguments discovered by the Greeks that there is no largest prime number and that the square root of 2 is an irrational number.
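
Both halves of this can be checked mechanically. The little sketch below is illustrative only (the Greeks managed without Python); it first confirms that the Law of Clavius is a tautology, and then runs the arithmetic behind Euclid's indirect proof that there is no largest prime: if some finite list contained all the primes, their product plus one would be divisible by none of them, which is absurd.

    def implies(p, q):
        return (not p) or q

    # The Law of Clavius, (not-P implies P) implies P, holds for every value of P.
    assert all(implies(implies(not p, p), p) for p in (True, False))

    # Euclid's reductio, numerically: suppose this list contained *all* the primes.
    supposed_all_primes = [2, 3, 5, 7, 11, 13]
    n = 1
    for p in supposed_all_primes:
        n *= p
    n += 1                             # 30031 = 59 x 509

    def has_factor_in(n, primes):
        return any(n % p == 0 for p in primes)

    # None of the supposed primes divides n, so the assumption that the list
    # was complete leads to a contradiction.
    print(has_factor_in(n, supposed_all_primes))   # False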

Why it was always possible for Socrates to find another belief that would imply not-X is a good question. Plato had thought that true first principles were unavoidable. We use them always, even though we usually don't realize it and even when we think that we aren't [note].

Whenever Socrates questioned people, he was always able to maneuver them into contradictions. Plato decided that this happened because Socrates could always find the way to bring out the conflict between everyone's false beliefs and the true principles that they inevitably employed somewhere. With a contradiction, however, which side is true and which is false, or whether maybe both sides are false, is an open question. Thus, Plato conceived of Socratic Method as the way to discover the truth on the principle that a completely consistent system of belief is possible only for the true first principles. Otherwise false beliefs would create contradictions with the unavoidable true principles.

As Hume said, whatever our philosophical doubts, we leave the room by the door and not by the window -- the same Hume who ruled out, not just miracles, but also free will and chance because he thought they all violated the same principle of causality that he so famously doubted. Nelson consequently suspected that, while Socratic Method did not really justify the first principles, it did provide a way to discover them. In a practical sense, that may be just as good as justification, and we can even say that Hume did more or less that very thing. It does always give Socrates, and us, a way to pursue the inquiry when we seem to reach an impasse.

Socratic Method thus shares the logic of falsification with Popper's philosophy of science and thereby avoids the pitfalls that Aristotle encountered after he formulated the theory of deduction and faced the problem of first principles and of induction. Both Socrates and Popper are left in a certain condition of ignorance because the weeding process of falsification never leaves us with a final and absolute truth:  we always may discover some inconsistency (or some observation) that will require us to sort things out again [note].

Our ignorance, however, may be of a peculiar kind. We may actually know something that is true, but the limitation will be in our understanding of it. Galileo was in a position to know that the sun was a star, but his understanding of what a star was remained most rudimentary. Isaac Newton had a theory of gravity that still works just fine for moderate velocities and masses -- the force of gravity still declines as the square of the distance -- but Einstein provided a deeper theory that encompassed and explained more. When it comes to matters of value that scientific method cannot touch, Plato had a theory of Recollection to explain our access to knowledge apart from experience, and his theory was actually true in the sense that we do have access to knowledge apart from experience; but Immanuel Kant ultimately provides a much deeper, more subtle, and less metaphysically speculative theory that does the same thing.

Plato's own (reductio) argument against Protagoras's relativism brings out this point. It is that relativism itself uses the very principle of absolute truth that it explicitly rejects. Relativism cannot even make its own claim without holding that it is above the relativity that it postulates. But if relativism is not absolute, then it allows its opposite, namely absolutism. Relativism could be true only if it were itself only relatively true, and that is not a denial of absolutes. Subjectivism has the same problem. If there is no knowledge (objectivity), how can we know that? If there is no objective truth, that would be an objective truth. So if subjectivism were true, clearly we couldn't know it; we would just have our subjective impression, which no one would need to pay any attention to.

Be careful whenever a philosopher (like Hegel) begins talking about "reason" -- just as when Mr. Spock used to say in Star Trek, "Logic dictates." Logic doesn't dictate very much, and we must be very careful what someone means by "reason" when they begin invoking it. As you have seen, logic requires premises, and it ultimately cannot prove those premises. If "reason" means logic, it really only means consistency; but in principle, there could be an infinite number of consistent logical systems. Since Hume thinks that all first principles are established by sentiment, he properly asserts that, "Reason is, and ought only to be the slave of the passions." Other philosophers (Aristotle, Plato, Kant) may mean more by "reason" than consistency, but we must be clear exactly how that differs from logical consistency.

What would make the principles revealed by Socratic Method true is a deeper issue that will be considered in the following essay, "The Foundations of Value, Part II, Epistemological Issues: Justification (quid juris) and Non-Intuitive Immediate Knowledge."

Copyright (c) 1996, 1998, 2001, 2004, 2006, 2013, 2014, 2018, 2019, 2022, 2023 Kelley L. Ross, Ph.D. All Rights Reserved

The Foundations of Value, Part I, Note 1;
Philippa Foot, Rationality and Virtue

Philippa Foot begins her essay, "Rationality and Virtue," with the following discussion:

This paper is about the rationality of moral action, and so about a problem that is as old as Plato but which still haunts moral philosophy today. It is about the rationality of following morality: of refraining from murder or robbery for instance, and being faithful in keeping contracts and promises even where this seems to be against our interest and contrary to what we most desire. The problem of the rationality of morality arises most obviously over such actions and therefore has to do particularly with the virtue of justice, because it is here that self-interest and morality often seem to clash. Then Reason may represent itself as on the side of self-interest and the fulfilment of present desire; so unless it can be shown that acting justly is a necessary part of practical rationality, cynics like Thrasymachus will always say that there is no good reason to pass up an advantage for the sake of acting justly, and plenty of reason not to pass it up. [Moral Dilemmas and Other Topics in Moral Philosophy, Clarendon Press, Oxford, 2002, p.159]

To me, this all seems to involve a false dilemma and is totally misconceived. Furthermore, the solution she later offers to her problem is similarly irrelevant. But if the meaning of "rationality" is simply to give logically cogent reasons for propositions in question, then it is quite neutral about content. But Foot's idea of "rationality" apparently is not neutral. "Rational" action, as we may gather from her statements, is prima facie only to pursue self-interest, so that acting "against our interest and contrary to what we most desire" is somehow "irrational." This does not follow.

Or perhaps we must narrow "reason" down to "practical reason." But then "practical reason" will be about some kind of practice, and any kind of practice involves a purpose. Although this could be quite general, and it could fold in Foot's stipulation or assumption that the purpose is always self-interest, the overtone we might get, from Kant, is that for what we call "practical reason," the practice and the purpose are moral. But that is a stipulation that ought to be explicit, while "practical reason" gets tossed around by Kant and Foot as though there were no other kinds of practical purposes. Cooking is a practice, with particular purposes, but these are rarely matters of morality (outside of vegetarianism). But it is usually rational. So the effect of Foot invoking an undefined "practical reason" is to obscure the principles of the business, if our business in particular is ethics and morality. And this leaves the topic of "rationality" muddled and obscure, with a mash-up of reason, self-interest, morality, and (implicitly) things like cooking; and this is not a good start for such an essay.

The question in Plato was not whether morality was "rational," but simply why one should be moral, or "just," δίκαιος. So Foot puts the problem in an odd way. Similarly, the problem is not, "that self-interest and morality often seem to clash," but that self-interest and morality are, as Jefferson says, always at odds -- morality essentially requires the limitation of self-interest. So Foot states the issue, again, in an odd way, as though she hasn't quite seen the point.

If someone asks us why we avoid murder and robbery, there is a very obvious reason we can give, which is that murder and robbery are morally wrong. This is just as "rational" as Thrasymachus claiming that it is "reasonable" to victimize others if they have something that we want. Indeed, we can be challenged to explain why we think that murder and robbery are indeed "wrong"; but that is a different question. The next step might be a traditional answer that is not too hard to find. The Golden Rule can be cited; and there is even an equivalent in Confucius: 己所不欲, 勿施於人, "What you don't want yourself, don't do to others" [Analects XV:23/24]. So if you don't want to be murdered or robbed, don't do it yourself. This in turn can require justification, but so far there is nothing "irrational," or unfamiliar, about the sequence.

But Philippa Foot never appeals to rules like that, or to their modern equivalents in Kant or the Utilitarians. Instead, Foot states the problem in terms of the "virtue" of "justice," just as Plato does in the Republic. Since Plato's argument in the Republic is that the just man will be happier than the unjust, perhaps I may be excused for some skepticism in the matter -- this is an argument from prudence, not from morality. Perhaps the unjust man doesn't care about being happy. Nietzsche points out that the pursuit of power may not result in happiness, just in power. And he dismisses the pursuit of happiness as some silly thing the English care about.

But Foot doesn't use Plato's argument. She has recourse to Aristotle and St. Thomas. She says elsewhere, "It is my opinion that the Summa Theologica is one of the best sources we have for moral philosophy" ["Virtues and Vices," Virtues and Vices and Other Essays in Moral Philosophy, Oxford, 2002, p.2]. This is rather astonishing. And it is not hard to find weaknesses in her approach to the virtues. Thus, she says, in the same essay as this quote, that the Greek idea of virtue, inherited by Aquinas, was rather different from ours, "'The virtues' to us are the moral virtues whereas aretē and virtus refer also to arts, and even to excellences of the speculative intellect whose domain is theory rather than practice" [ibid.].

This seems to me quite false. Modern usage retains much of the Greek sense of ἀρετή as "excellence." There is the excellence of a knife, as with the qualities of a cook in the kitchen, i.e. the sharpness of a knife is its virtue, as the attentiveness of the cook ensures that the vegetables do not burn. The general meaning of a virtue is thus in relation to its purpose, which may or may not be a moral purpose -- it may even belong to inanimate objects, like the knife. While I do think that "vice" possesses a general moral tone, there is a boundary between moral and non-moral virtues, a boundary that must be specified in terms of a Socratic principle, but one that can be ignored by Foot, for whom clarity is precluded by the indefinite clutter of multiple virtues.

Foot fishes "justice" out of the mass of virtues, thus ignoring the internal boundaries -- where, even as Aristotle denied the Socratic sense that "good" means one thing, Foot can simply ignore, under the sleight-of-hand of the "practical," the "one thing" that will distinguish the moral from the non-moral purposes. This is either an evasion or a muddle.

Occasionally Foot brushes by key issues:

For one very difficult question concerns the relation between justice and the common good. Justice, in the wide sense in which it is understood in discussions of the cardinal virtues, and in this paper, has to do with that to which someone has a right -- that which he is owed in respect of non-interference and positive service -- and rights may stand in the way of the common good. Or so at least it seems to those who reject utilitarian doctrines. This dispute cannot be settled here.... [ibid., p.3]

But if "this dispute cannot be settled here," then Foot has simply tossed overboard some of the key and most basic issues in ethics, where rights and duties are essential and defining parts of moral action. Is it just that "One man should die for the people, that the whole nation should not perish"? [John 11:50]. The "common good" would say yes; and, utilitarian or not, Foot seems to come down on that side of the matter.

In terms of the virtue of "justice," Foot sees the just man contributing to human flourishing and a productive society. This does sound like a utilitarian or collectivist principle. Foot ends up calling this "a species-dependent account of virtue" ["Rationality and Virtue," op.cit. p.170] or "autonomous species-dependent goodness" [p.174], which sounds like nothing so much as some kind of Darwinian socio-biological account of ethics -- and it seems no better than gibberish in comparison to blunt natural language statements like "it's wrong." And it is certainly not "autonomous" at the Kantian locus of the individual; and we might well be alarmed about what it would do, indeed, to individual rights -- an issue we see Foot breezily evading.

This all, not surprisingly, does have its roots in the full fallacies of the positivism and heteronomy of Aristotelian ethics, i.e. that the paradigm of "justice" is the actual practice and constitution of society, and that this is something external to conscience or the sort of rational or a priori knowledge posited by Plato or Kant. Indeed, not even Aristotle derived his ethics from the "species," but from the practice of paradigmatic individuals in society. It is Foot who adds what is really a utilitarian feature, of the collective, even as we are told that she rejects Utilitarianism.

And again, this says nothing about why unjust actions would be wrong and why we would have a moral duty to do what is right. The Nietzschean doesn't need to be interested in such goals. In Kantian terms, Foot construes morality in terms of hypothetical imperatives, i.e. if you want something, then you do this to get it. Foot refers to another one of her essays, "Morality as a System of Hypothetical Imperatives" [note, p.169]. While Foot admits that there are deficiencies in that treatment, it doesn't seem like she means that hypothetical imperatives are insufficient to explain moral obligation.

Man darf aber diese Anfrage des Willens bey der Vernunft nicht mit derjenigen verwechseln, wo sie über die Mittel zu Befriedigung einer Begierde erkennen soll. Hier ist nicht davon die Rede, wie die Befriedigung zu erlangen, sondern ob sie zu gestatten ist. Nur das letzte gehört ins Gebiet der Moralität; das erste gehört zur Klugheit.

One should not confuse this inquiry made of reason by the will with the one about the means of satisfying desire. Here the question is not how to attain satisfaction but whether satisfaction is to be permitted. Only the latter belongs in the field of morality; the former belongs to prudence.

Friedrich Schiller, Über Anmuth und Würde, "On Grace and Dignity," Schiller's "On Grace and Dignity" in Its Cultural Context, Essays and a New Translation, Edited by Jane V. Curran and Christophe Fricker [Camden House, 2005, footnote, pp.156-157,207-208], translation modified.

But hypothetical imperatives cannot explain moral obligation, since the "imperative" is contingent on accepting the goal of the hypothesis -- or the purposes of the practice -- whether that is happiness, power, social flourishing, or Darwinian survival. If we have some kind of duty to accept the goal, this returns us to the original problem. And I don't see that Philippa Foot comes within shouting distance of an explanation. Hypothetical imperatives can provide "reasons" in the "best practices" of the "species" for acting justly, but they cannot show why these reasons involve moral duties. Foot completely misses the point there.

If Foot gives a thought to the nature of duty, which we don't see in the essay, perhaps she is presupposing the principle of St. Thomas, that "good is to be done and pursued, and evil to be avoided," bonum est faciendum et prosequendum, et malum vitandum [Summa Theologica, I-II, Question 94, Article II]. However, it cannot be an obligation to pursue all goods -- there are too many of them for that to be possible -- and so this fails to distinguish between imperatives and hortatives, although, to be sure, this is a distinction alien to almost all philosophers.

Foot shows some awareness of this with "questions of precedence" for goods [p.173], but there is no boundary there between duties and a hierarchy of preferences. I'm not sure that words like "duty," "obligation," or "imperative" even occur in the essay -- we don't even hear that "the fulfilment of promises" is a duty, although it does have a "precedence" [p.172]. It is as though Foot is allergic to the very idea of moral duty.

Why Foot does not even approach such questions may be from her focus on "justice" as a virtue. This allows for an avoidance of both moral principles, from Confucius to Kant, and the nature of moral obligation. Aristotle and St. Thomas, or even Plato, are not good places to start for that. So, instead of considering the simplicity of the Golden Rule, Foot wanders off into larger questions of social structure and the "species," which undermine the proper questions and answers. "Rationality" itself is more than a bit of a red herring.

Furthermore, Foot's focus on "justice" involves obscuring a distinction in a way that is an artifact of the Greek origin of the discussion. In Greek, δίκαιος, díkaios, has a broader meaning than what is now the conventional meaning of "justice" in English. It covers both legal and judicial meanings of "justice," which belong now to the English word, and the plain meaning of moral righteousness, which is actually more what is involved in Plato's Republic, where the challenge of Thrasymachus is about doing right in general.

This was not a problem in Chinese, where 義, yì, was moral righteousness on a basic level. For the judicial meanings of "just" and "justice," we get digraphs, namely 公正, gōngzhèng, and 正義, zhèngyì, respectively. The first one may be the most revealing, since 公正 is literally "judicial [i.e. the judge's table] righteous." 正, zhèng, itself would be the equivalent of ὀρθός, orthós, "right, correct, straight," in Greek -- rectus in Latin.

But Foot in effect perpetuates the Greek ambiguity, such that this eases the transition of her discussion from the basics of morality to an involvement of social, political, and historical questions, which poisons her conclusions with positivism and heteronomy.

The Foundations of Value, Part I, Note 2

Plato also used the term ἀρχή, archê, like Aristotle, but didn't define it in terms of logic and the regress of reasons. In Plato's day formal logic didn't exist yet, so it wasn't until Aristotle that the regress of reasons could even be described.

Archê in Greek philosophy had originally been used to mean the elements.

The Foundations of Value, Part I, Note 3;
Bronowski's Knowledge or Certainty

He grokked that this was one of the critical cusps in the growth of a being wherein contemplation must bring forth right action in order to permit further growth. He acted.

Robert A. Heinlein, Stranger in a Strange Land, 1961, Berkley Medallion Books, 1968, 1973, p.69, color added


At a dinner for [Werner] Heisenberg one night later, [Moe] Berg heard someone say that the war was all but lost for Germany. The physicist sourly responded, "Yes, but it would have been so good if we had won."

"Big-League Spy: Movie plays up catcher's role in WWII," by Michael Kaplan, the New York Post, June 17, 2018 -- former major league baseball catcher (1923-1939), Moe Berg, posing as a physics student for the OSS, to find out how close Heisenberg was to building an atomic bomb.

Jacob Bronowski (1908-1974) was a multitalented scientist in his own right but became widely known as a philosopher and historian of science, particularly for the innovative television documentary The Ascent of Man (1973), which he completed shortly before his untimely death. The eleventh chapter in the series was titled "Knowledge or Certainty," which gives us warning that, apparently, we cannot have both. The culmination of the point has Bronowski at Auschwitz, where he had lost relatives, standing on swampy ground where he says the ashes of many victims were dumped. At the end of the presentation, he pulls some of the muck up out of the water, the image of which is fixed with a freeze-frame.

Bronowski wants to make the argument, in a dramatic, emotional, and even inflammatory manner, that the desire, the quest, or the conviction of certainty in knowledge, or of the existence of "absolute knowledge," leads to moral atrocities like the Nazi extermination camps. This is a particularly heavy accusation to make, since it condemns most of the history of philosophy, when, from Plato to Descartes and beyond, certainty was considered part of the very meaning of knowledge. Since the practice of the Rationalists, especially, seemed to discredit their own claims about certainty, and some version of the Skepticism of Hume has come to be triumphant in modern philosophy, the question does arise about the status of the traditional meaning of knowledge. Bronowski was not unusual in regarding skepticism or uncertainty, not only as an epistemologically satisfying resolution, but as a morally laudable and edifying solution as well. Similarly, Bronowski's dismissal of "absolute knowledge," although perhaps directed specifically at Hegel, leaves the implication that, in the absence of absolutes, Relativism is also morally edifying.

Unremarked by Bronowski, however, is the irony and paradox inherent in his presentation; for Bronowski himself exhibits a moral certainty, and an absoluteness of judgment, at least equal to, if not greater than, those of the Nazis, in whose motivations we often suspect some element of cynicism. Thus, while quoting Oliver Cromwell's famous exhortation to consider that one may be wrong, Bronowski does not consider for an instant the possibility that his own moral judgments about Auschwitz and the Nazis might be wrong. No Relativism when it comes to mass murder. This is, to be sure, quite proper; but Bronowski has some explaining to do, and he is obviously, if naively, oblivious that he has sinned against the standard of his own stark and dramatic, indeed melodramatic, performance. We have no proper account from him of the basis of his own moral judgments, especially keeping in mind the Skepticism of Hume himself, that morality cannot be justified by reason. "Be uncertain" cannot generate a substantive system of ethics; and "reject absolutes" is clearly not what he appeals to in the assertion of his own moral judgments.

There is clearly some kind of confusion involved. Indeed, Bronowski is doing little more than expressing a wish that lessons learned from the history of science, pace Hume, will carry over directly into moral edification. Specifically, the Uncertainty Principle of quantum mechanics seems to be presented as a lesson for uncertainty in everything. This does not follow, and, even if it did, it would prove too much. Thus, the originator of the Uncertainty Principle, Werner Heisenberg, remained a servant of the Third Reich -- which Bronowski does not mention -- although the degree of his commitment or guilt remains unclear and controversial [note]. And, if we are to be uncertain about everything, then Bronowski must consider that the Nazis might have been right, and that he, the Jews, and many of the rest of us properly should have perished in the noble purification of the German race.

That no sensible and conscientious person should entertain the latter possibility for even a moment means that the problem is not of certainty or uncertainty, but of right and wrong -- i.e. the content of morality rather than an epistemological meta-theory. Thus, when we consider, as I have above, that we are in a state of Socratic ignorance, and that all knowledge, or in Friesian terms all mediate knowledge, is fallible and corrigible, this does not give us the luxury of an unlimited suspension of judgment. Dithering and prevaricating about the motives and purposes of the Nazis delayed the onset of war enough that the result was a desperate business indeed, giving the Nazis the opportunity to murder millions, continuing even in the last days, as defeat was certain, seemingly entirely out of spite. Thus the warning of Machiavelli was vindicated:

...one should never permit a disorder to persist in order to avoid a war, for war is not avoided thereby but merely deferred to one's own disadvantage. [The Prince, Bantam Books, 1981, p. 20].

Timely action in 1936 or 1938 called for a level of conviction, certainty, and resolution that a philosophy of general uncertainty is not likely to provide. It is not clear how Bronowski, with his stated principle, could have urged action against the Nazis at a time when so many considered preparations for war to be "war mongering" and an unwarranted threat to the (stated) peaceful intentions of the Germans. Indeed, in the Apology, Socrates himself, while disowning the possession of any wisdom, nevertheless stakes his life, and loses it, by avoiding actions that he does not consider "to be good or just or pious" [35c]. Like Bronowski, Socrates expresses a level of certainty and resolution that raises questions about the coherence of his thought.

Thus, even as we must always be sensible of our fallibility, it is difficult but essential to judge when uncertainties must be put aside and forthright action must be initiated. This will depend on the content of our convictions, which themselves should specify when the stakes become high enough, with life and justice on the line, that action is necessary. "You should be uncertain" is not a helpful command or exhortation in those terms; and both Jacob Bronowski and Socrates were clearly untroubled when, for them, a moral threshold had been reached. In fact, they are admirable precisely for that. Yet they appear never to have been cognizant of the paradox of their own virtue. It is too late for us to overlook that now.

Bronowski's own argument allows him to go beyond the mere use of a "Be uncertain" principle. His description of quantum uncertainty is that it provides for a certain amount of "tolerance," a term that he borrows, not from physics, but from engineering, with a meaning that is then insensibly transferred to a moral and political application. This does not follow, and it commits the very non sequitur explained by Hume between inferences of fact and value, not to mention that physics cannot really supply the concepts that will be used in moral or political propositions. Even if we let him get away with that, it still leaves the nature of the limits of tolerance without any principle for their determination. Bronowski speaks as though the moral issues are self-evident, and this displays a completely naive and uncritical understanding of ethical matters. Nevertheless, if we allow that there are moral limits to tolerance, and that these limits can be defined in moral and political terms, which will be semantically and axiomatically foreign to things like Schrödinger's Equation, then we can say that Bronowski's theory can match his practice, which is that, at some point, he will draw an absolutist line and say, not with Cromwell, but with Luther, Hier stehe ich, ich kann nicht anders, "Here I stand, I can do no other." That is more the tone and flavor that we get seeing Bronowski at Auschwitz.

Bronowski's Knowledge or Certainty,
Note on Werner Heisenberg

The standard apologetic for Werner Heisenberg is that he pretended to work on an atomic bomb for the Nazis but was neither enthusiastic nor really serious, probably deliberately slow-walking or even sabotaging the project.

There now seem to be a number of bits of information that contradict this defense. One is from after the War, when the British had arrested German scientists and were housing them in an English country house -- a house that was thoroughly bugged to record all their conversations. The British had done this with a lot of German officers who had been captured during the War, when their unguarded conversations had betrayed knowledge of German plans, war crimes, etc. The same thing was done with the German scientists. This was all kept secret for many years afterwards.

When the atomic bombs were dropped on Japan, and the events announced to the public, the Germans read about it in the newspapers that the British supplied. Heisenberg was surprised. He had not been unenthusiastic about the German atom bomb project, and he had not sabotaged it. He had genuinely come to believe that a bomb could not be built. Hence his surprise. So from this we learn that Heisenberg was serious about his weapons work and was not just playing along.

Now we learn that the United States had an OSS (Office of Strategic Services) spy who got close to Heisenberg during the war. This was Moe Berg, who for many years had been a professional baseball player and then coach. But Berg also had a degree from Princeton in "classical and romance languages," to the point where he would practice Sanskrit during baseball games, and while playing he earned a law degree from Columbia. Not your average baseball player.

Casey Stengel, the legendary Yankees and Mets manager, once said, "Moe Berg was the strangest man to ever play the game of baseball." ["Big-League Spy: Movie plays up catcher's role in WWII," by Michael Kaplan, the New York Post, June 17, 2018]

In 1943, Berg was recruited by William "Wild Bill" Donovan, the head of the OSS. His first responsibility was the Balkans, and he parachuted into Yugoslavia to check out the resistance groups. He thought that Tito was the better bet, but it is not clear to what extent he may have been deceived by Tito and other sources about the other resistance group, that of Draža Mihailović. There was a systematic disinformation and smear campaign against Mihailović, not just from Tito but with the cooperation of Western leftists and the Soviet agents who had actually infiltrated the OSS, that Mihailović was a fascist who was collaborating with the Germans. This was all lies, and the result was that Mihailović was eventually captured and executed by Tito, resulting in a Communist Yugoslavia. So it would be important to know the details of Berg's reports.

Later in 1943 Berg was redirected to the investigation of Italian scientists, to find out what he could about missile and nuclear technology and tempt them into defecting, or kidnap them. In 1944 this shifted more exclusively to nuclear work, and specifically to find out what kind of progress Werner Heisenberg had made on a German atomic bomb. Berg impersonated a physics student -- his German (he spoke several languages -- not just Sanskrit), and his knowledge of physics, must have been very good. In December 1944, Berg heard that Heisenberg was going to give a lecture at Zürich, in neutral Switzerland, where American agents, like Allen Dulles (oddly enough with the help of C.G. Jung), were working:

At a dinner for Heisenberg one night later, Berg heard someone say that the war was all but lost for Germany. The physicist sourly responded, "Yes, but it would have been so good if we had won." [ibid.]

Berg was armed and authorized to assassinate Heisenberg, if he thought the man constituted a threat. He actually left the dinner talking to Heisenberg, but he didn't think there was a threat and so let the chance pass. Meanwhile, except for the Battle of the Bulge, the War was won.

From these indications, which have only slowly come out since the War, we know that the apologetic for Heisenberg is not correct. What we don't know is the depth of Heisenberg's adherence to Nazi ideology. He may have just been a German who wanted Germany to win a war, after losing the last one. This was not unusual among Germans. If it went further than that with Heisenberg, chances are we will never know -- there do not seem to be attested public statements from him about the Nazis, race, or the Jews. Heisenberg never candidly discussed what he thought during the War, and we don't seem to have any testimony about it from people who knew him. This is not surprising, and we probably have the phenomenon of people thinking that Heisenberg was too historically important to allow him to be discredited. Much the same thing happened with Martin Heidegger, whose misdeeds were whitewashed by people who should have known better, and who has only been slowly exposed (like Heisenberg) with the (literal) passage of decades.

We don't know how much Jacob Bronowski knew about Heisenberg before his untimely death. What we know now, however, would seem to explode the idea that the Heisenberg Uncertainty Principle has any chance of working as a moral principle. Both Heisenberg and Heidegger -- and of course many others -- are exposed as, at the very least, gravely deficient in moral and political judgment. This provides no grounds for Bronowski's argument about "knowledge and certainty."

The Reasoning of Sherlock Holmes

"I have already explained to you that what is out of the common is usually a guide rather than a hindrance. In solving a problem of this sort, the grand thing is to be able to reason backward. That is a very useful accomplishment, and a very easy one, but people do not practice it much. In the everyday affairs of life it is more useful to reason forward, and so the other comes to be neglected. There are fifty who can reason synthetically for one who can reason analytically."

"I confess," said I, "that I do not quite follow you."

"I hardly expected that you would. Let me see if I can make it clearer. Most people, if you describe a train of events to them, will tell you what the result would be. They can put those events together in their minds, and argue from them that something will come to pass. There are few people, however, who, if you told them a result, would be able to evolve from their own inner consciousness what the steps were which led up to that result. This power is what I mean when I talking of reasoning backward, or analytically."

Sherlock Holmes & Dr. Watson, A Study in Scarlet, Sir Arthur Conan Doyle, 1887 [Sherlock Holmes: The Complete Novels and Stories, Volume I, Bantam Books, 1986, p.100]

"...The only point in the case which deserved mention was the curious analytical reasoning from effects to causes, by which I succeeded in unravelling it."

Sherlock Holmes, The Sign of Four, Sir Arthur Conan Doyle, 1890 [Sherlock Holmes: The Complete Novels and Stories, Volume I, Bantam Books, 1986, p.109]

While the immortal Sherlock Holmes created a mythic gold standard for reasoning, anyone looking for a sensible explanation of the form of his reasoning will come away with very confused and erroneous ideas. One point made in the statements quoted above, from the two earliest Sherlock Holmes stories, that there is a significant difference between reasoning from causes to effects and from effects to causes, is quite right; but nearly everything else is wrong.

Most importantly, reasoning from effects to causes is not a "very easy" accomplishment which somehow merely comes to be "neglected." It is in fact very difficult, and its difficulty underlies one of the foundational problems in Modern Philosophy, the Problem of Knowledge, as this was exposed by René Descartes. Reasoning from causes to effects is not necessarily "more useful" in everyday affairs, but it is definitely easier.

The difference in use and ease is the result of an important logical characteristic of causality:  causes are sufficient conditions to their effects, while effects are necessary conditions to their causes. This means that sufficient conditions make things happen, while things will not happen without their necessary conditions (conditiones sine qua non). What goes along with this in the world is that usually many different possible causes can produce the same effects. A dead body, after all, can be the result of natural causes, accident, or homicide, and there are many, many possible ways in which any of these things could happen. Given a set of circumstances, however, and a knowledge of the laws of nature, what can happen is usually quite restricted. If I drop something, it will fall. If I turn on the oven and put some food in it, the food will cook.
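
The asymmetry can be pictured with a toy model, which is mine and obviously not Doyle's: going from causes to effects is like evaluating a function, while going from effects to causes is taking the inverse image of that function, which in general returns many candidates.

    # Each possible cause (with the laws of nature in the background) determines
    # its effect; the effect alone does not determine its cause.
    causes_to_effects = {
        "natural causes":   "dead body",
        "accident":         "dead body",
        "homicide":         "dead body",
        "oven on, food in": "cooked food",
        "object dropped":   "object falls",
    }

    def forward(cause):
        # sufficient condition: given the cause, the effect follows
        return causes_to_effects[cause]

    def backward(effect):
        # necessary condition only: the effect merely narrows the field of causes
        return [c for c, e in causes_to_effects.items() if e == effect]

    print(forward("homicide"))      # dead body
    print(backward("dead body"))    # ['natural causes', 'accident', 'homicide']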

In everyday life, we frequently reason from causes to effects and from effects to causes. This is "forward" and "backward," in Holmes' terms, not in any sense related to logic, but simply in relation to time. Causation moves forwards in time. Given effects, we are going to need to think backwards in time. This owes little to the nature of logic but much to our knowledge of the laws of nature. As children, we do not know that the hot iron will burn us. Once we discover it, we will not make that mistake again. But if we discover a burn on our clothes, we may not know immediately where that burn came from. The iron? The stove? The radiator? A match? We must imagine the universe of all possible sources of burns. Then we must, as Holmes himself says, begin eliminating the possibilities. But sometimes this simply cannot be done. There may not be enough evidence.
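
In the same toy terms, the elimination of possibilities looks like the sketch below; the "evidence" here is invented purely for the example, and the moral is that elimination may still leave several candidates standing, or, if our imagined universe of causes was too small, none at all.

    # The imagined universe of possible sources of the burn.
    possible_sources = {"iron", "stove", "radiator", "match"}

    # Each piece of evidence rules some candidates out.
    evidence = [
        lambda source: source != "radiator",   # the radiator has been cold all week
        lambda source: source != "match",      # there are no matches in the house
    ]

    remaining = {s for s in possible_sources if all(test(s) for test in evidence)}
    print(remaining)   # {'stove', 'iron'}: still underdetermined; more evidence needed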

Another circumstance that makes reasoning one way easier than the other is indeed the connection to time. If we do not know the effects that certain causes will produce, we can always set the causes in motion and then see what happens. Given the effects, however, waiting around to see what happens will not help. We would need to run time backwards. This can only be done in the imagination. Given the possible causes we imagine for given effects, it may happen that they all get eliminated by the evidence. Then we must start over and try to imagine new possibilities. In such a case, we may test our new ideas, to see if such causes do produce such effects. We find Holmes doing this himself, most strikingly in "The Problem of Thor Bridge" [Sherlock Holmes: The Complete Novels and Stories, Volume II, Bantam Books, 1986, pp.564-587], where an experiment produces a chip in the stonework of the bridge, demonstrating that the supposed murder could have been a suicide, as the homicidal weapon could have been removed from the scene by the presumed victim, even in death.

Holmes' explanation of the problem involved in his reasoning, in dealing with causation, is quite right, but we get a very poor sense of the difficulties that attend the problem and a very deceptive impression that this is more a matter of logic and method than of the nature of our knowledge or of the case. The impression is reinforced by his use of the terms "analytical" and "synthetic," as though these are forms of reasoning. Actually, there is no real meaning of "analytic" or "synthetic" that is relevant to his point. "Analytic" means "take apart," and "synthetic" means "put together." "Analytic" usually concerns taking apart, unpacking, or distinguishing the parts of meaning, initially of words but then possibly of larger constructions, like theories. The contrast between analytic and synthetic was originally made by Kant, for whom analytic truth depends on the meaning of a sentence, while synthetic truth depends on some circumstance over and above the meaning of a sentence. None of this is related to Holmes' usage.

Since Holmes uses "analytical" to mean reasoning from effects to causes, the implication is that causes can be discerned simply by taking apart the effects. This is generally impossible, since the effects say nothing about the laws of nature, and often very little about the circumstances, that produced them. A bit more like "analysis" would be reasoning from causes to effects, where, if one possesses knowledge of the relevant laws of nature, the values of the case can be inserted, and a prediction made about the result from an application of logic (and/or mathematics) alone.

What Holmes (or Sir Arthur) may have had in mind were Mediaeval ideas that effects somehow actually contain their causes. This would be nice. It reflects a moment of wishful thinking in the history of philosophy. Were it the case, then one would be able to recover the causes of events by a simple analysis of the effects. This could explain why Holmes thinks that this sort of reasoning, "backward, or analytically," is "very easy," when it is actually nothing of the sort.

As is often the case in science and philosophy, the way you talk about how you reason very often turns out to be very different from the way you actually do reason. In the Sherlock Holmes stories, we thus often find Holmes struggling to imagine what causes could possibly have produced particular effects. In all cases, it is the imagination working, in conjunction with Holmes' general knowledge, that produces the answer, not a simple analysis of the crime scene. The explanation of the nature of his reasoning is of interest for what is right in it and what is wrong, and for comparison with the other general issues of knowledge and reasoning considered above.

Above we see Charing Cross Station in London in 1895. The "Cross" is the spire monument placed nearby by King Edward I to commemorate where the body of his wife, Eleanor of Castile (descendant of Alice of Vexin), rested en route to burial at Westminster Abbey in 1290. The monument in front of the railroad station is a reproduction and is not at the original site. Holmes and Watson often left on their adventures from the Station, which is still in use. It is also regarded as the geographical center of London. The "American Exchange," at lower right, is mentioned in A Study in Scarlet. Below is the station in 2006. The American Exchange is gone, but the building looks substantially the same (now a "Thistle Hotel"), except for the roof:  evidently the fireplaces have been removed and so the roof and upper story have been rebuilt.

In recent years on television in both Britain and the United States there have been shows featuring updated versions of Sherlock Holmes and Watson. In Britain the series has been called Sherlock, starring Benedict Cumberbatch as Holmes and Martin Freeman as Doctor John Watson. Four series of three episodes each have been produced at intervals between 2010 and 2017, with an extra episode in January 2016 (as a period piece that turned out to be from Holmes' imagination). In the United States, the series is called Elementary, starring Jonny Lee Miller as Holmes and Lucy Liu as a now female Doctor "Joan" Watson. There have been seven seasons of Elementary, amounting to 154 episodes, recently finished in August 2019, making Miller the actor who has played Holmes the most in television and film history.

The idea of updating the Holmes stories goes back, at least, to 1942. Two period Holmes films had been made in 1939 starring Basil Rathbone (1892-1967) as Holmes and Nigel Bruce (1895-1953) as Dr. Watson. This was inspired casting, and Rathbone and Bruce may have been the best Holmes and Watson ever, with the unintentional irony that the bumbling Bruce, almost a dead ringer for Arthur Conan Doyle, might be taken to reflect the credulity of Doyle for spiritualism and other paranormal frauds. The initial 1939 film of The Hound of the Baskervilles, although fondly regarded by critics, nevertheless paid little attention to period set design and badly rewrote the ending of the book. I cannot regard it as a great success. When Universal Studios took over the franchise from Fox in 1942, they decided to do the update, using the World War I German spy story of "His Last Bow" (from 1917) to put Holmes in conflict with Nazi Germany. This made some kind of sense, but Universal then went on to produce 12 films, until 1946, that often did not rely on canonical sources and that reflect poor "B" movie production values. Rathbone and Bruce stuck it out to the end, with their talents perhaps largely wasted.

While Sherlock begins with a wonderful bit of Holmes recognizing, as in the canon, that Watson had been in Afghanistan (or Iraq), I have found Cumberbatch's portrayal of Holmes troubling. Sherlock Holmes had no difficulty dealing with a person of any station or personality, which suited his ability to pass in disguise at any level of society. And he could be civil to people of whom he deeply disapproved. On the other hand, Cumberbatch's Holmes explicitly suffers from some form of autism, or "autism spectrum," has difficulty dealing with people, and frequently cannot avoid being rude and insulting. I lost patience with the series when Holmes insults the judge while testifying in open court and is sent to jail for contempt. This would make perfect sense for Sheldon Cooper, as when it actually happened to him in an episode (of The Big Bang Theory), but it is incredible for the canonical Sherlock Holmes. Indeed, there is not a single story in the Holmes canon where we find him testifying in court, and this might be explained in the terms that Holmes generally allows the police to take credit for his work; but it is nevertheless unbelievable that the earnest and conscientious Holmes would behave the way that Cumberbatch's Holmes does. But, admittedly, Cumberbatch may make a better Holmes than he does the Sikh villain Khan Noonien Singh, in Star Trek Into Darkness [2013].

Jonny Lee Miller's Holmes is nowhere near as bad, although he too has some personality problems that exceed the canon. Nevertheless, he seems to have the canonical ability to deal with any person in an appropriate way. His principal difficulty, with which the series begins, involves drug addiction. The canonical Holmes, of course, resorted to cocaine in periods of boredom. He was in no legal difficulties over this, at a time when there was no drug prohibition and people, properly, regarded addiction as a medical and not a legal problem; but Dr. Watson found it troubling and dangerous, counseled Holmes against it, and eventually got him (like Freud) to stop. Subsequent Holmes treatments have been inclined to make more of this, as in the 1974 book, by Nicholas Meyer, and 1976 movie The Seven-Per-Cent Solution, where Holmes is enough of an addict to suffer from delusions or hallucinations, attributing criminal designs to the poor, innocent mathematician Dr. Moriarty.

Of greater interest in Elementary is what the series did with Dr. Watson. The "Joan" Watson of Lucy Liu is introduced as Holmes' "sober companion," to help him cope with recovery from his addiction. Like the canonical Watson, Liu had been a surgeon, but with a military background and Afghanistan left out of the equation. Instead, unlike John Watson, Liu's Watson was a serious, top-flight surgeon, who retired from practice after being shaken from the loss of a patient. From being Holmes' "sober companion," Liu's Watson becomes his apprentice in pursuit of her own career as a detective. This, of course, adds a layer missing in the canon. John Watson never became a detective, or had any particular luck in emulating Holmes' observations or deductions. When trusted by Holmes with some independent task to help him, Watson generally is out of his depth and is sometimes rebuked by Holmes for his failures.

What happens with Liu's "Joan" Watson I ultimately found annoying. We find out how great her Watson's abilities are. There is one episode, involving "angel of death" murders in a hospital (where we find our friends because Holmes is engaged in "beating the subjects" in the morgue, as reported in "A Study in Scarlet"), where Liu encounters a former colleague who has a patient in need of heart surgery ("Lesser Evils," Season 1, Episode 5, aired November 1, 2012). Liu recognizes signs that the patient has a case of endocarditis, an infection in the heart, which would make the surgery dangerous. Her colleague dismisses Liu's diagnosis and concern and prepares to do the surgery. This is stopped when Liu, without authorization, orders tests that expose the reality of the condition. Throughout the series, Liu's medical knowledge frequently contributes to a case, much more so than the canonical Watson, who rarely, if ever, notices anything independently. When Holmes asks Watson for his medical opinion, it is usually to confirm something he has already seen. Actually, this is what happens in the morgue with Holmes and Joan.

We know that John Watson went back into medical practice twice, first after marrying the client in The Sign of Four, and later, without any details of date or even the name of his wife, as we know from references by Holmes in later stories, all, I think, in The Casebook of Sherlock Holmes [1927]. Watson did not go back into surgery, but apparently just into general practice out of a home physician's office. Liu's Watson, however, never seems to be tempted back into medicine. Instead, she wants to be a mother -- and accomplishes that through adoption. This is a rather disappointing cliché. I have nothing against women wanting to be mothers, but I would wonder why "Joan" Watson wants motherhood but at the same time has no interest in saving lives through medicine. After all, being a detective in a murder case is little better than being a pathologist:  the patient is already dead in both cases. Indeed, with no disparagement of pathologists (I grieve over the retirement of Dr. Jan Garavaglia), Holmes and Watson deal with forensic pathologists quite a bit at the New York Medical Examiner's office -- where Watson, again, can use her medical knowledge, but not remotely in a way comparable to the specifics of a surgeon's skill.

Indeed, I long expected the story line of Holmes discovering that Watson originally lost her patient through no fault of her own, but that something was done to deceive her, with the death of the patient actually being a case of murder. I don't seem to have been alone in expecting this. A friend of mine thought that Holmes had already investigated and solved the case and was just waiting for an opportune moment to move on it. But this never happened, and later we were given some details that seemed to preclude the possibility that Watson had not been at fault.


Even so, this does not speak well of Liu's Watson. One of my favorite episodes in the long-running series Scrubs [2001-2010] was when the supervising attending physician (Dr. Cox) told the young interns that eventually they would kill a patient ("My First Kill," Season 4, Episode 4, aired September 21, 2004). Sober and sage advice. The follow-up was when the physician himself later lost a patient and had difficulty following his own advice on how to deal with it. The series, incidentally, was mainly filmed at the closed North Hollywood Medical Center, on Riverside Drive in North Hollywood, California (next to the Tujunga Wash and a block away from the iconic Val Surf in the San Fernando Valley), where my own father had once been a patient and where I once took my mother to the emergency room.

So it would not have been an easy business for "Joan" Watson to deal with the loss of her patient, even if it was not her fault; but it is the sort of thing that a conscientious physician, let alone a surgeon, must be ready to deal with. Another fine episode of Scrubs was when the lead surgeon character ("Turk," played by Donald Faison, memorable from Clueless [1995] and then in Emergence [2019-2020]) lost a patient during surgery because of an undiagnosed aortic aneurysm, the very thing that killed actor John Ritter (1948-2003), who had actually been a guest star on the series. On the show, they rewound time to play the whole surgery over again, this time with the aneurysm having previously been diagnosed. Nevertheless, the patient died anyway ("My Butterfly," Season 3, Episode 16, aired March 16, 2004). I admired shows like that for providing sober doses of medical reality.

So what can we say about a "Joan" Watson who abandons a promising career, and all of her great ability, because of losing a patient? This does not sound like the Right Stuff. And what about the writers? Didn't it occur to anyone that a dead surgery patient could mean a murder case? Wasn't Elementary largely about murder cases? What is so great about being a sidekick detective and a mother next to being a great surgeon (and, perhaps, a mother also)? I find these story and character choices disappointing, and anticlimactic resolutions to the imaginative innovations in the series. And I like Lucy Liu, although evidently others did not.

"Joan" Watson giving up surgery for the equivalent of pathology reminds me of another Scrubs episode, where an intern, "Doug Murphy," who consistently kills patients, discovers that his true calling is pathology ("My Malpractice Decison," Season 4, Episode 9, aired November 9, 2004). But Doug had been a terrible doctor, confusing patients, and their surgeries, and giving the wrong doses of medicine. Everyone was better off with him in pathology; but nothing like that was the case with Watson.

The tiles with the silhouettes of Holmes shown on this page, as at right, are those used in the Baker Street subway/underground station in London -- as the Holmes statue above stands outside the station. The iconic Meerschaum pipe and the deerstalker hat, by which these images are recognized, are actually not attested in the Holmes stories. They were developed early on by actors in stage plays based on the stories. In the canon, Holmes typically smoked a cheap clay pipe, which might be bought for as little as a farthing.

Since smoking is now politically incorrect, the pipe is sometimes eliminated altogether from Holmes memorabilia, such as refrigerator magnets. I find this outrageous, and I felt some vindictive Schadenfreude when a tourist store on Baker Street that had been selling the pipe-less magnets went out of business.

Returning to Baker Street in 2019, a curious feature of the current stores selling tourist memorabilia was that they seemed to be missing Holmes items altogether. I can only imagine that the reason for this is that the nearby "Sherlock Holmes Museum," run by the "Sherlock Holmes International Society," has somehow imposed a monopoly on such items -- otherwise the other stores, whose only reason for being there, of course, is the Holmes association, would have them.

The Holmes "Museum" has effected other distortions. The address of the "Museum" is 221, as we might expect, but that number does not occur at the proper place on the block. We expect "221" to be opposite "220" and "222." That is where I looked for it in 1970, when I first visited London. Of course, there was no such address. A bank across the street was at "119," and it extended into the space that "221" would have occupied. It seems characteristic of Arthur Conan Doyle to use something as much like a real address as possible. After all, Watson first heard of Holmes in the Criterion Bar on Pickadilly Circus, which is still there (although now an Italian restaurant); and he first met Holmes at St. Bartholomew's, "Barts," Hospital, which is still there also.

But now it looks like an attempt is being made by the "Museum," whose original proper address was 239 Baker Street, to eliminate the anomalous addresses in between 119 and "221," and many shops in the block seem to no longer have visible addresses at all. I find this all very curious. And it would be alarming if the Holmes "Museum" has enforced some kind of Holmes monopoly on the other tourist shops and is trying to arrange addresses to suit its position. Otherwise, the "Museum" has actually done a good job of reproducing what the rooms of Holmes and Watson would have looked like, in a building that certainly fits the period type that Conan Doyle envisioned.

Another curious development is that down Baker Street, south of Marylebone Road, there used to be a hotel called the "Sherlock Holmes Hotel." I stayed there twice. Now it has changed ownership, changed its name to the "Holmes Hotel," and even closed its entrance on Baker Street. Its official address and entrance are at 83 Chiltern Street, an unassuming street a block over from Baker. Just what fans of Sherlock Holmes would want! So I don't get it. Was the hotel forced to drop its full name and its Baker Street address? I wonder.

The full Sherlock Holmes name does remain on a pub, although very far from Baker Street -- in fact down at 10 Northumberland Street, near Charing Cross Station. The building was originally the Northumberland Hotel and thus sounds like where Sir Henry Baskerville stayed in The Hound of the Baskervilles. If so, it doesn't need to be near Baker Street to have strong Holmes associations.

On the same 2019 London visit, I went to get lunch at the nearby Ship and Shovell pub, which I had previously patronized, in great measure because it is named after Admiral Clowdisley Shovell. To my astonishment, they had dropped "bangers and mash," i.e. sausages and mashed potatoes, from their menu. The bartender actually said, "They get rid of all the good stuff" -- in favor of things like "Crayfish Linguine" and "Hummus & Flatbread," which we all know are traditional English pub dishes(!).

So I went over to the Sherlock Holmes, where I had never actually eaten before. They were showing Sherlock Holmes and the Secret Weapon [1942] on television monitors. And they had "sausages and mash" on the menu, which I ordered as "bangers and mash" from the bargirl. She didn't know what that was. From her accent, she didn't seem to be English -- although a sweet and lovely girl. As this was getting straightened out, a fellow standing next to me at the bar pounded his fist and said that "bangers and mash" is "exactly what it is called!" A few days later I ordered bangers and mash, right off the menu as such, at The Moon Under Water pub in Leicester Square, without confusion -- and with an extra sausage. Such is modern London.


Copyright (c) 2005, 2008, 2010, 2012, 2016, 2017, 2019, 2020, 2023 Kelley L. Ross, Ph.D. All Rights Reserved

The Foundations of Value, Part II

Epistemological Issues: Justification (quid juris),
Non-Intuitive Immediate Knowledge,
and the Friesian Trilemma

after Kant, Fries, & Nelson

In the previous essay, "Justification, First Principles, and Socratic Method," the Problem of First Principles was in the end addressed through Socratic Method in terms of the Kantian quid facti, i.e. that we can seek to discover what the First Principles are. However, the deeper question of the Kantian quid juris, what really makes the First Principles true, or what justifies them, was temporarily set aside. At this point, the answer to that issue can be found in one of the most pivotal doctrines of the Friesian tradition:  the theory of non-intuitive immediate knowledge. This is a profoundly paradoxical doctrine, which is at variance with contemporary notions of immediate knowledge and intuition.

Indeed, we must violate the widespread definition of knowledge in contemporary philosophy, namely that knowledge is "justified true belief." I have previously criticized this definition (as, indeed, others have done, in their own ways), not only because the person who proposed it, Plato, did so only to reject it, but because, for the "justified" part to be strong enough to certify the "true" part, the justification must already count as knowledge -- leaving us with a circular definition, i.e. that knowledge is "true belief justified by knowledge." However, in the present case, the challenge is more direct. "Non-intuitive immediate knowledge" is justified and true, but does not include belief.

While that may seem strange now, nothing should be more familiar in the history of philosophy, for it is the functional equivalent of Plato's theory of Recollection. Thus, for Plato, if knowledge is something that is forgotten (with rebirth) but then remembered, it does not consist of any beliefs until it is remembered.

Once it is remembered, and we see it in the form of beliefs, there is the question whether those are veridical or confabulations; but that problem has already been handled by Socratic Method, which winnows out the quid facti from what is given. This is an unending process and bears comparison to the "hermeneutic cycle."

The quid juris is not about content, but about the cognitive ground to which the content is referred. That is part of epistemological meta-theory. I do not think there has been anything else quite like it in the history of philosophy; and so, while everyone might be familiar with Plato, it is not taken seriously in its own terms, and there has been no functional equivalent in subsequent thought. Intuitionism is still popular (e.g. the Ethical Intuitionism [2005] of Michael Huemer).

Actually, true immediate knowledge, whether intuitive or non-intuitive, is as such always without belief. Belief requires concepts, understanding, and propositions. Those are all part of mediate knowledge, and they rest on the conventions of language.

Non-intuitive immediate knowledge is the category to which Fries and Nelson assign the knowledge that belongs to the object language systems of metaphysics and ethics, as opposed to the empirical category to which they see the metalanguage, i.e. epistemology itself, belonging [note].

Here "intuition" is used for the German Anschauung as used by Kant (who says intuitio in Latin) and the Friesians, and it does not mean "intuition" either in the ordinary sense of a spontaneous belief or in the similar philosophic sense -- for which Nelson, familiar with it, uses Intuition, not Anschauung. For some, this may be confusing; but Anschauung is immediate knowledge, while Intuition is not. Intuition may be belief that is prima facie credible, "I trust my intuitions," but it must be tested and examined. As Socrates said to Euthyphro:

Οὐκοῦν ἐπισκοπῶμεν αὖ τοῦτο, ὦ Εὐθυφρον, εἰ καλῶς λέγεται, ἢ ἐῶμεν καὶ οὕτω ἡμῶν τε αὐτῶν ἀποδεχώμεθα καὶ τῶν ἄλλων, ἐὰν μόνον φῇ τίς τι ἔχειν οὕτω, ξυγχωροῦντες ἔχειν; ἢ σκεπτέον, τί λέγει ὁ λέγων;

Then let us again examine that, Euthyphro, if it is a sound statement [εἰ καλῶς λέγεται -- if said well, καλῶς], or do we let it pass, and if one of us, or someone else, merely says that something is so, do we accept that it is so? Or should we examine [ἢ σκεπτέον] what the speaker means [τί λέγει ὁ λέγων -- what the speaker says]?

In Kant the notion of intuition originally seems to be the equivalent of perception and perceptual knowledge [Critique of Pure Reason, Norman Kemp Smith translation, St. Martin's Press, 1965, p. 65]. The conception becomes confused, however, when Kant himself appears to conclude that perception cannot be knowledge, or even perception, without the mental activity of synthesis [ibid., pp. 129-150, the famous "Transcendental Deduction" in the first edition of the Critique of Pure Reason]. The conclusion would reduce "intuition" to no more than a pre-conscious receptivity of the senses.

Intuition as "immediate" knowledge would also thus become impossible, since knowledge would require the mediation of the intellect to become knowledge. Friesian theory accepts Kant's earlier notion of intuition as being immediate knowledge, albeit not conceptually articulated in any way -- concepts are what makes something "mediate." Nelson's point in that regard is that not all knowledge can be mediate, or conceptual, because all conceptual propositions, except tautologies and contradictions, are essentially arbitrary and must, for their truth or falsity to be determined, be referred to some external ground [Nelson, op. cit., p. 120].

But Kant also was not wrong. Perception, as Schopenhauer discerned also, is a product of mental activity. It is just that, contrary to the impression we get from Kant, it is carried out preconsciously. Consciously we are presented with a perception, which appears as a spontaneous intuition, and is treated as such, but which has already been processed by the perceptual mechanisms of the mind or, if we prefer, the brain.

We get indications of this from certain perceptual phenomena, such as the Gestalt trick shown at right. Alternately, we see the silhouette of a dark chalice, or of two white faces facing each other. This is already a bit beyond the level of processing for basic perception. Without the concept of a chalice, which people in many cultures might lack, we would have no sense of recognizing the thing and so "seeing" it. But this clues us in that the conceptual "form" of a chalice has already been built into the perceptual image.

At a lower perceptual level is the stereoscope. There, we look through a viewer and see two images, one for each eye. They are slightly different. At some point, sooner or later, the brain gets the trick. The images coalesce into a single three-dimensional image. Just as with the chalice and faces, this transformation is not the result of thought. The brain must do its work preconsciously; when the trick is deciphered, the proper image snaps into view. It can be very dramatic. But we are also being fooled. Whatever the image shows, we are not actually looking out into three-dimensional space to see it.

The "external ground" then for perceptual knowledge is immediate knowledge in perceptual intuition, which as such cannot be any kind of belief or thought. Indeed, the most challenging point is that the external ground is in phenomenal objects, which are empirically real and with which we are directly acquainted. This is the Kantian metaphysical challenge to the Cartesian, for whom external objects are not available for direct inspection.

Note that in theories where knowledge does not refer, as in Wittgenstein, but is only self-referential, this phenomenal principle will not be accepted.

In this respect the Friesian theory of truth [Nelson, p. 117] is a combination of traditional correspondence and coherence theories:  coherence in that the conceptual expression and the immediate knowledge both belong to consciousness, and must merely be made to conform to one another; and correspondence in that immediate knowledge is a representation of the external world and so, on the principle that our representation contains the objects of our knowledge (phenomenal objects), the external world itself; and this requires that the purely mental entity, the belief or the propositional representation corresponding to the world, be mediately constructed.

By the principles of the dual nature of representation (that representation is both internal, a mental content, and external, the phenomenal object of our representation) and of ontological undecidability (that we cannot decide whether representation is "really" internal or external) we may consider the Friesian doctrine of truth to be the equivalent of the strongest traditional correspondence theory, that there is an isomorphism between truth in internal representation and states of affairs in the external world.

The difference between intuition and immediate knowledge is that the concept of intuition contains the added feature of immediate awareness -- that the intuitive ground is explicitly present to consciousness. The intuition that we have is perception, and the objects of perception are empirical objects. Since we are ordinarily strongly inclined to believe that knowledge implies awareness of knowledge, there is a very powerful tendency to equate our intuition with our immediate knowledge as such. That gives rise to what Nelson calls [Nelson, "Prejudice of Logical Dogmatism," p.141 and diagram p.146] a "dogmatic disjunction" in the attempt to formulate the nature of the ground of metaphysical knowledge:  that any knowledge is either from intuition or from reflection. This is to say that any case of knowledge is either mediate, involving concepts and thought, where through reflection new knowledge can be generated, or immediate, where all immediate knowledge is intuitive.

At right is a version of Nelson's chart from "The Critical Method and the Relation of Psychology to Philosophy" [Nelson, p.146]. This is one example of Nelson's axiomatic diagrams, which turn out to be the equivalent of cubes of opposition, expanding the traditional square of opposition.

Given the "dogmatic disjunction" as the starting point, Nelson sets out a simple axiomatic system to demonstrate the various epistemological approaches to metaphysics [pp. 141-153]. If one accepts (1) the disjunction and also accepts (2) that metaphysical knowledge is possible and that (3) our intuition is empirical, then the only possible conclusion is that the source of metaphysical knowledge is in reflection. For Nelson that is the nature of the traditional system of "dogmatic" or speculative metaphysics.

Those systems may be relatively naive, relying on Euclidean sorts of proofs and "self-evident" premises whose self-evidence remains an unexamined claim, as we see most starkly in Spinoza, or they may be relatively sophisticated with peculiar doctrines of logic (as with Hegel) to account for the manner in which thought generates new knowledge.

Dogmatic metaphysics is untenable, however, once it is realized that reflection cannot generate knowledge that is not already implicit in its data. Logical derivations and analytic truths are no more than rearrangements of what is given. The whole approach is discredited when we notice that the monism of Spinoza is contradicted by the metaphysical pluralism of Leibniz, when, as Kant would point out, arguments for both are equally good.

The speculative generation of scientific hypotheses escapes the failing of dogmatic metaphysics because scientific method looks to the empirical verification or falsification of the hypotheses. That way is not open, by definition, to metaphysics, except through the equivalent tests of falsification in Socratic Method.

With a new premise that reflection is essentially empty of any new ground or source of knowledge, we cannot accept all of the original three premises of dogmatic metaphysics. Rejecting premise (2) that metaphysical knowledge is possible results in the conclusion of empiricism that all synthetic knowledge is ultimately grounded in empirical intuition. Rejecting premise (3) that all our intuition is empirical results in the conclusion of mysticism that metaphysical knowledge is possible because we possess, or can possess, a special (mystical) intuitive ground for it.

The final alternative, which Nelson calls "Criticism," is to reject premise (1), the "dogmatic disjunction," and conclude that there is a third source of knowledge besides intuition and reflection. Since a division into mediate and immediate is logically exhaustive, and we already accept that mediate knowledge, or reflection (except for analytic truths, tautologies), is empty, there must be immediate knowledge which is not intuitive. This must actually mean that we are unconscious of the non-intuitive immediate ground. The knowledge itself is neither believed nor thought, as such, and it is not explicitly present to us as the table or chair is perceptually.
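
Since Nelson's layout amounts to a small decision table, it may help to set it out schematically. The following is a minimal sketch in Python of the alternatives just described; the labels and the function are my own illustration, not Nelson's notation or one of his diagrams.

    # A sketch, in my own labels (not Nelson's notation), of the decision
    # table laid out above:  which epistemological position results from
    # rejecting which premise, once reflection is granted to be empty.

    PREMISES = {
        1: "all knowledge is either from intuition or from reflection",
        2: "metaphysical knowledge is possible",
        3: "all of our intuition is empirical",
    }

    POSITIONS = {
        None: "dogmatic (speculative) metaphysics:  metaphysics out of reflection",
        1: "Criticism:  non-intuitive immediate knowledge grounds metaphysics",
        2: "empiricism:  all synthetic knowledge is grounded in empirical intuition",
        3: "mysticism:  a special, non-empirical intuition grounds metaphysics",
    }

    def position(rejected=None):
        """Return the position that results from rejecting one premise.
        Keeping all three (rejected=None) is possible only on the dogmatic
        assumption that reflection can generate new knowledge."""
        if rejected is not None and rejected not in PREMISES:
            raise ValueError("there are only three premises to reject")
        return POSITIONS[rejected]

    if __name__ == "__main__":
        for choice in (None, 1, 2, 3):
            if choice is None:
                print("keep all three premises:  " + position(None))
            else:
                print(f"reject ({choice}) '{PREMISES[choice]}':  {position(choice)}")

The point of the sketch is only that the alternatives are exhaustive:  once the emptiness of reflection is conceded, exactly one of the three premises must give way.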

The "Critical" conclusion tells us nothing positive or definite about what non-intuitive immediate knowledge must be. Even to be legitimately forced to a conclusion that some immediate knowledge is not intuitive obviously does not tell us what it is, and so I characterize this as a merely "negative" theory which must remain inadequate for that reason -- as we are left to wonder what kind of knowledge we could possibly possess without being aware of it.

As I have already noted, however, the conception is by no means new, for it matches one of the most characteristic and important doctrines of Plato:  namely, that what we think we know is only opinion, while what we really know we do not know that we know. Plato's explanation for that condition was also characteristic, and paradoxical, not fitting precisely into either the dogmatic or the mystical categories of Nelson's analysis; for Plato held that our metaphysical knowledge is a momentarily forgotten memory of a prenatal intuition. This is ultimately an appeal to intuition, but in the present it is only an appeal to memory. In his own way Plato thus approximates, with a positive doctrine, the conditions of non-intuitive immediate knowledge:  that it is known but not at first known consciously.

The Friesian Trilemma

With this analysis in hand, Friesian theory can describe all the possible ways that a proposition can be grounded or justified [Nelson, pp. 111-121]. There are three of these, what Karl Popper called Fries's Trilemma [Karl R. Popper, The Logic of Scientific Discovery, Hutchinson of London, 1959, 1977, pp.94,104]:

  1. Proof, which is justification by logical derivation. Tautologies, analytic propositions, can be proven, given the rules of logic, by themselves; all other proofs require premises, which outside of logic are ultimately going to be synthetic.

  2. Demonstration, which is justification by the display of an intuitive ground. In daily life this is the most conspicuous means (apart from arguments from authority), not just of the justification of belief, but of the origin of ordinary knowledge. If you want to know if the window is open, you may need to go look. Curiously, Wittgenstein is big on "showing," where, in his later thought, knowledge is tangled in its own "language game" and the very idea of a ground of truth seems to disappear. Failing conceptual truth, or the ability actually to unambiguously follow a conceptual rule, you may as well just mutely "show" whatever you are talking about, if you can. And,

  3. Deduction (in Kant's legalistic sense [Kant, op. cit., p.120]), which is justification by means of a description of the non-intuitive ground of the belief or proposition. "Deduction" is the peculiar Kantian vehicle for dealing with non-intuitive immediate knowledge, and it is the theoretical heart of Friesian introspective empirical epistemology.

"Proof," "demonstration," and "deduction" are terms that all traditionally mean proof; but Demonstration and Deduction in these new Friesian senses are in no way logical derivations in the object language. Demonstration is merely a showing of the obvious. Where the obvious is no longer present or escapes the nature of our perceptions, then other considerations come into play. Deduction is a showing of the unobvious, but still importantly a showing. What can be shown, leading to question of Deduction, are beliefs whose ground is not properly self-evident or intuitive, although in ethics a moralist may rely heavily on the prima facie truth of his moral "intuitions."

Deduction cannot logically prove the propositions in question any more than the demonstration of an intuitive ground can. But the cognitive force of each is the same.

Popper himself errs in his evaluation of the trilemma by mischaracterizing the choices and supposing that the cognitive force of each justification is a matter of subjective confidence and certainty. This is what Popper means by the description of Fries's system as "psychologism" [p.94]. Its cognitive certainty is only psychological and subjective, and the result of causation -- which, as Descartes discovered to his distress, is not a cognitive relation.

Popper's sense of the term is different from other uses of "psychologism" for Kant and Fries, against which Nelson wrote his doctoral dissertation. That doctrine is characterized by the principle that the structures of the world are really only structures of the mind, the psyche, which are projected onto the world. The Neo-Kantians thought that the empirical, introspective, and psychological nature of Friesian Critique, a meta-language, meant that the object languages of ethics or metaphysics were also subjective and psychological. Yet this kind of psychologism is precisely what is accepted by Nelson's very own students after his death, thereby totally erasing the value of Nelson's thought. Popper only sees the certainty as projected.

But even this attenuated meaning of "psychologism" is wrong. Neither Kant nor Fries appeals to subjective confidence:

  1. The confidence of Proof is simply, of course, based on the certainty of the laws of logic, which go back to the foundation of non-contradiction. Even logicians and Positivists usually have no difficulty with the certainties of logic (although Hegelians and often radically skeptical or nihilistic deconstructionists may have objections). An interesting challenge was from "Intuitionists" in mathematics, who denied the validity of reductio ad absurdum arguments, even though this would disrupt some of the oldest results in mathematics, like the existence of irrational numbers or the infinitude of prime numbers.

  2. The Kantian confidence in Demonstration, while it may be construed as dependent on the intuitive and so psychological character of an empirical intuition, instead should be seen as based on the empirical reality of the phenomenal objects of perception. Unlike in Cartesian epistemology, the objects of perception are not in a transcendent relationship to consciousness. They are immanent, since phenomenal reality is immanent to consciousness -- the Kantian doctrine of "Empirical Realism." So the certainty of "demonstration" is the certainty of Dr. Johnson's table, not that of our inner confidence in our intuitions. If we doubt our perception of the table, the problem is not the loss of our inner certainty, but the very practical embarrassment of a collision. Thus, the appeal is not to subjective confidence, but to the physical resistance of the object. Is the Skeptic willing to die rather than accept the evidence of his senses? The flow of traffic beckons; and the warning of the bystanders -- "Watch out!" -- is not just a language game. Finally,

  3. The Friesian confidence in Deduction cannot be any kind of psychological or subjective certainty when there is not even an original awareness or consciousness of such knowledge. Non-intuitive immediate knowledge is knowledge without "belief." So, where there is no awareness, there is no subjective attitude at all. "Deduction" is not itself the justification of a principle but only the description of its non-intuitive ground.

As we become aware of non-intuitive truths, like the Principle of Causation, or Kant's favorite, the Moral Law, the problem of uncertainty, as with Demonstration, becomes a practical one. If I suppose that this time the fire will not burn me, because, after all, there is no good reason to suppose that it should -- a supposition that the Great Skeptic Hume, of course, would never make -- I will be in for a nasty surprise. [note]

The first questions about non-intuitive immediate knowledge would be how it comes to be consciously known, having been unconsciously known, and then how we know that it is what we think it is. In Kant's classic terms, as we have seen, those are the questions of the quid facti and the quid juris [Kant, p.123]. In academic philosophy, this is the same thing as the issue of "discovery" versus "justification." Nelson loved to quote Carl Gauss (1777-1855), who is supposed to have said, "I have my results, now I just need to prove them." Thus, Fermat's Last Theorem was asserted by Pierre Fermat (1607-1665) in 1637, but it was not proven until 1994, by Andrew Wiles (b.1953).

The quid facti, the conscious possession of the non-intuitive knowledge, is obtained by reflection, specifically by taking our ordinary naive acts of judgment as objects and then by abstracting from them the forms or presuppositions they had unconsciously employed [Nelson, op. cit. "The Regressive Method:  Induction and Abstraction," pp.105-110]. Nelson believed this was the function of Socratic Method, although I think he erred in how he thought that actually worked. Nelson's confidence in the results of this, as for the Moral Law, was often unwarranted.

Since the presence and focus of consciousness is in its object, the forms or rules by which the object is known, or generated, are themselves not perceived; but taking consciousness itself and its acts as objects can bring those presupposed forms into the objective focus, making possible their entry as such into consciousness. The practice of Socrates himself was to question his interlocutors about their own acts and their own opinions, soliciting a kind of reflection they may never have previously engaged in.

Nelson's theory in this respect is not satisfactory. He thinks that the abstract forms are recognized by a method of "regressive abstraction," but this is really just an appeal to intuitionism and is no more satisfactory than saying that they are self-evident, once we think about them enough. But since Nelson did think that this occurs through Socratic Method, it is possible to ignore the intuitionistic aspects of his theory and just see Socratic Method, as described above, using the logic of falsification. The method, then, as in science, is to imaginatively construct rules to explain the phenomena and then test their logical consequences against those phenomena -- where the phenomena, of course, are our statements, not our empirical intuitions of the world.

One of the nicest examples, from outside philosophy, of the quid facti is the recognition of grammatical rules of language. Language as an elaboration of consciousness by which objects are conceptually represented and articulated contains many forms that are not intuitively known; and there is no more conspicuous a contrast than in a child between the fearful complexity of rules that are so easily manipulated with respect to their objects yet so securely hidden in themselves. See the discussion of English regular plurals here, where we might think, were anyone to need to be consciously aware of the rules and conditions of grammatical statements, no one would ever be able to say anything. Another sort of conspicuous contrast is when a language teacher insists on the correctness of palpable grammatical archaisms yet usually entirely fails to employ them in ordinary speech. Obtaining the quid facti is simple in principle, but in practice reflection is never as easy as it seems it should be.
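
To make the point concrete, here is a minimal sketch, in Python and in my own formulation (the rules are the simplified textbook ones, and the function names are hypothetical), of the kind of rule that every English speaker applies effortlessly without ever being conscious of it:  the regular plural, in its spelling and in its three spoken forms.

    # A simplified sketch of the regular English plural, both the spelling
    # rule and the three spoken forms of the suffix; irregular plurals
    # ("children," "mice") are ignored.  The formulation is my own.

    SIBILANTS = ("s", "x", "z", "ch", "sh")
    VOICELESS = ("p", "t", "k", "f", "th")   # a rough guide for pronunciation

    def regular_plural(noun):
        """Spell the regular plural of an English noun."""
        if noun.endswith(SIBILANTS):
            return noun + "es"                 # bus -> buses, church -> churches
        if noun.endswith("y") and noun[-2:-1] not in "aeiou":
            return noun[:-1] + "ies"           # city -> cities (but day -> days)
        return noun + "s"                      # cat -> cats, dog -> dogs

    def plural_suffix_sound(noun):
        """The (approximate) spoken form the plural suffix takes."""
        if noun.endswith(SIBILANTS):
            return "/ɪz/"                      # buses, churches
        if noun.endswith(VOICELESS):
            return "/s/"                       # cats, books
        return "/z/"                           # dogs, days

    for word in ("cat", "dog", "bus", "church", "city", "day"):
        print(word, "->", regular_plural(word), plural_suffix_sound(word))

Even this drastically simplified version of the rule is more than any speaker consults in speaking; the full conditions are applied without ever becoming objects of consciousness, which is just the contrast drawn above.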

It looks like Nelson's conception of Deduction is that it is sufficient to show that the ground of the object language propositions must be non-intuitive [Nelson, op. cit. "Theory of Deduction," pp. 122-125]. That would be only half the answer, however, having said what the ground is not while leaving unanswered the question of what the ground is, and providing no general theory of the ontology of the non-intuitive ground of the various object languages.

A consequence of that is that the various object languages, once identified as such, remain isolated from each other, each a solitary universe of thought maintained solely by Nelson's "self-confidence of reason" [p.126]. The ontological ground of the difference, for the Friesians, seems to be lost in the unknown qualities of things in themselves. The continuation of the theory, therefore, will be in ontology rather than in logic or epistemology. Such questions are treated in the following essay, "The Foundations of Value, Part III, Metaphysical Issues:  The Theory of the Good," and in the separate essay, "A Lecture on the Good."

Non-Intuitive Immediate Knowledge, Ratio, Vol. XXIX, No. 2, December 1987


Copyright (c) 1996, 1998, 2001, 2006, 2011, 2012, 2015, 2021, 2022 Kelley L. Ross, Ph.D. All Rights Reserved

The Foundations of Value, Part II, Note 1

"Object languages" are deductive systems (i.e. theorems derived from axioms) which are described by a "metalanguage," i.e. propositions that do not belong to the deductive system but which refer to it.

See Leonard Nelson, "The Verification of Judgments:  Proof, Demonstration, and Deduction," Socratic Method and Critical Philosophy [Dover Publications, 1965, p.153].

It is the most distinctive claim of Friesian epistemology that the propositions constituting the "critique of knowledge," i.e. epistemology itself, are empirical and a posteriori rather than non-empirical and a priori, as are the propositions of ethics and metaphysics.


The Foundations of Value, Part II, Note 2;
Epistemological Trilemmas: Popper's Trilemma, the Münchhausen
Trilemma, and the Lockean Trilemma

Οὐκοῦν ἐπισκοπῶμεν αὖ τοῦτο, ὦ Εὐθυφρον, εἰ καλῶς λέγεται, ἢ ἐῶμεν καὶ οὕτω ἡμῶν τε αὐτῶν ἀποδεχώμεθα καὶ τῶν ἄλλων, ἐὰν μόνον φῇ τίς τι ἔχειν οὕτω, ξυγχωροῦντες ἔχειν; ἢ σκεπτέον, τί λέγει ὁ λέγων;

Then let us again examine that, Euthyphro, if it is a sound statement [εἰ καλῶς λέγεται -- if said well, καλῶς], or do we let it pass, and if one of us, or someone else, merely says that something is so, do we accept that it is so? Or should we examine [ἢ σκεπτέον] what the speaker means [τί λέγει ὁ λέγων -- what the speaker says]?

Socrates, "Euthyphro," 9e, by Plato, Euthyphro, Apology, Crito, Phaedo, Phaedrus, translated by Harold North Fowler, Loeb Classical Library, Harvard University Press, 1914, p.34; English, Five Dialogues, Euthyphro, Apology, Crito, Meno, Phaedo, by Plato, translated by G.M.A. Grube, Hackett Publishing, 1981, p.21, notes added.

While Popper provides the name for the "Friesian Trilemma," he does not faithfully present it in the same terms as do the Friesians.

The problem of the basis of experience has troubled few thinkers so deeply as Fries. He taught that, if the statements of science are not to be accepted dogmatically, we must be able to justify them. If we demand justification by reasoned argument, in the logical sense, then we are committed to the view that statements can be justified only by statements. The demand that all statements are to be logically justified (described by Fries as a 'predilection for proofs') is therefore bound to lead to an infinite regress. Now, if we wish to avoid the danger of dogmatism as well as an infinite regress, then it seems as if we could only have recourse to psychologism, i.e. the doctrine that statements can be justified not only by statements but also by perceptual experience. Faced with this trilemma -- dogmatism vs. infinite regress vs. psychologism -- Fries, and with him almost all epistemologists who wished to account for our empirical knowledge, opted for psychologism. In sense-experience, he taught, we have 'immediate knowledge': by this immediate knowledge, we may justify our 'mediate knowledge' -- knowledge expressed in the symbolism of some language. And this mediate knowledge includes, of course, the statements of science. [The Logic of Scientific Discovery, Hutchinson of London, 1959, 1977, pp.93-94]

The implication of Popper's "Trilemma" is that two alternatives, "dogmatism" and "infinite regress," are bad, leaving "psychologism" as the only alternative. This is very different from what I have presented above. "Proof," "demonstration," and "deduction" are all legitimate forms of justification, but they work with different kinds of knowledge, or they have different limitations. Also, Popper rejects all the alternatives he presents, since he also rules out "psychologism," as he defines it.

Looking at Popper's "Trilemma," "dogmatism" is not even a form of justification, unless it is about propositions that are self-evident. Popper does not make this clear. "Dogmatism" is what we would have if Euthyphro answered the question posed by Socrates above by saying that, yes, Socrates should just accept what he says. But not even Euthyphro wants to go that far. So "dogmatism" is not a serious alternative about justification; and Popper's treatment is not honest if he doesn't consider that Aristotle and the Rationalists only sound "dogmatic" because they believe that some propositions are self-evident.

An "infinite regress" is not a serious alternative for justification either. It is in fact, as it was for Aristotle, the reductio ad absurdum of the principle that all propositions are to be justified by logical proof. An infinite regress is, by definition, impossible. It only turns up as remotely legitimate when someone like Richard Rorty (1931-2007) has the idea that we continue with the regress just until we get tired or run out of ideas. But that actually isn't really a justification -- although it may describe how a regress would actually work in most cases.

That leaves what Popper called "psychologism," which he says is "the doctrine that statements can be justified not only by statements but also by perceptual experience." He follows this up with the statement that "In sense-experience, he taught, we have 'immediate knowledge'." He does make a proper distinction between immediate knowledge and "mediate" knowledge, which uses language and its concepts.

So the question is what is meant by "immediate knowledge." Popper sees it in subjective terms, as consisting of "sense-experience," which is why a term like "psychologism" is used. The certainty of "immediate knowledge" is a feature of the mind, of the psyche. Thus, Popper's version of the "Friesian Trilemma" ends up like the sets of dilemmas I examine below, which are offered by Skeptics to refute the possibility of knowledge. Popper is not a Skeptic, so he has his own idea of a legitimate alternative (using falsification), but his presentation of the Friesian theory is badly confused and distorted.

Thus, does a Friesian or a Kantian appeal to experience constitute some kind of subjective "psychologism"? No, because Kantian experience has two sides: (1) the subjective side of perception as a mental content; and (2) the objective side of perception, which is the real presence of phenomenal objects, which is why Kant calls his theory a form of "Empirical Realism." Popper has entirely missed the dimension of "Empirical Realism." We are directly acquainted with real objects, which is something that Descartes or Berkeley would not say. This was well understood by Schopenhauer, who said, Die Welt ist meine Vorstellung, "The world is my representation."

Popper's "basic statements" are caused, which is not a cognitive relationship, as Popper recognizes himself. But this is something that made so much trouble for Descartes, the source of the actual Problem of Knowledge, where Descartes worried that his beliefs could be caused by the Deceiving Demon.

What is our position now in regard to Fries's trilemma, the choice between dogmatism, infinite regress, and psychologism? The basic statements at which we stop, which we decide to accept as satisfactory, and as sufficiently tested, have admittedly the character of dogmas, but only in so far as we may desist from justifying them by further arguments (or by further tests). But this kind of dogmatism is innocuous since, should the need arise, these statements can easily be tested further. I admit that this too makes the chain of deduction in principle infinite. But this kind of 'infinite regress' is also innocuous since in our theory there is no question of trying to prove any statements by means of it. And finally, as to psychologism, I admit, again, that the decision to accept a basic statement, and to be satisfied with it, is causally connected with our experiences -- especially with our perceptual experiences. But we do not attempt to justify basic statements by these experiences. Experiences can motivate a decision, and hence an acceptance or a rejection of a statement, but a basic statement cannot be justified by them -- no more than by thumping the table. [ibid., p.105]

But, using falsification, Popper doesn't really mean that his "basic sentences" are "dogmas," because they can be "tested further." The "testing," of course, does not actually justify or verify them. The "infinite regress," in turn, simply means that testing must always continue, because the "testing" doesn't accomplish a proof of them.

The real trouble, as we might expect, comes with the "psychologism" alternative. If the "decision to accept a basic statement" is "causally connected with our experiences," then this means that there is no free will or choice involved, and the relationship is not necessarily related to truth. But the relationship of a proposition to its reference is intentional, not causal. An effect does not refer to its cause, but an intention does refer to its object. Materialists don't like intentionality; and Wittgenstein rejects external or objective reference in favor of the autistic self-reference of "language games." This comes up in the significant debate between C.S. Lewis and Elizabeth Anscombe, where Anscombe commits the sophistry of substituting intentional reference (despite her closeness to Wittgenstein) for the causal connection only allowed by Determinism, in Lewis's critique.

Popper has the same problem with his admission that "Experiences can motivate a decision, and hence an acceptance or a rejection of a statement," which he admits is not a justification; but then he overlooks the circumstance that such a "motivation" is cognitively irrelevant to the truth, falsehood, or even meaning of a proposition referring to experience. If my "basic sentence" is caused by my experience, how do I know that my representation has anything to do with the objects of the experience? Without intention, there is no cognitive representation at all.

This is part of the muddle that is Popper's conception of "psychologism" and his misconstruction of Fries's epistemology. What does justify a proposition that refers to experience is the matter of fact about the objects of that proposition. But Popper's discussion, innocent of intention, makes it look like he is still vulnerable to the dilemma of Descartes.

The absence of any cognitive ground for Popper's "basic sentences" has struck many as evasive and unsatisfying, i.e. "it does not give any positive reasons for believing in the theories which have survived severe tests." Since Popper generalizes falsification into all knowledge, nothing will be justified by "any positive reasons."

Kantian immediate knowledge is not some kind of subjective certainty, let alone one that is caused by experience. It is the presence of the objects that provide the ground of empirical propositions. If we wonder whether the windows are open, or the doors are locked, we can go look. This is a fallible and corrigible procedure, as our perceptions can be mistaken; but we do not correct any mistakes by some internal examination, but by returning to the source, i.e. checking the windows and the doors again.

We might say that Popper has missed the metaphysical aspect of Kantian epistemology, and his error snowballs into the grave misunderstandings we see, for instance, in Leonard Nelson's student, Grete Henry-Hermann.

In a footnote to the last quote given, Popper lets us know where he puts himself in the history of philosophy:

It seems to me that the view here upheld is closer to that of the 'critical' (Kantian) school of philosophy (perhaps in the form represented by Fries) than to positivism. Fries in his theory of our 'predilection for proofs' emphasizes that the (logical) relations holding between statements are quite different from the relation between statements and sense experiences; positivism on the other hand always tries to abolish the distinction: either all science is made part of my knowing, 'my' sense experience (monism of sense data); or sense experiences are made part of the objective scientific network of arguments in the form of protocol statements (monism of statements). [ibid., note]

It seems that Popper, because he was from Vienna, was often confused with the Vienna Circle and so with the Logical Positivists. Since Popper wasn't a Positivist, and he rejected most of their principles, this must have been annoying. From this footnote, we get a hint that he had an extensive critique of the Positivists in mind.

His feeling of an affinity for Fries must have been because of Fries's view that the axioms of metaphysics or geometry, or the laws of physics, need not and could not be logically proven. The Positivists, on the other hand, had a principle that propositions were not even meaningful unless they could be "verified" with "sense data." This is absurd, especially when these were often people who were, or should have been, aware of Hume's unchallenged argument that universal propositions cannot be "verified" at all by experience -- not to mention that "Verificationism" itself cannot be "verified" by "sense data." So Logical Positivism was itself trivially incoherent -- nothing but "nonsense" on its own principles.

As we have seen on this page, Popper seems to have had a poor grasp of the principles by which the Friesians believed justification could occur in the absence of logical proof. However, for the practice of science, Popper's misunderstanding is, as he might say, "innocuous," since falsification, not verification, is his principle of scientific method. Fries could have no objection to that.

Someone else at odds with the principles of Logical Positivism was Kurt Gödel (1906-1978). Yet Gödel attended actual Vienna Circle meetings, by invitation, where he sat quietly despite disagreeing with almost everything they said. Gödel was a Platonist. The Positivists, of course, invited Gödel because they were in awe of him. Gödel was silent because he was intimidated by the philosophers and did not have the confidence to voice his opinions. The Positivists may have lived out their lives without ever realizing how much he disagreed with them. Karl Popper had no such inhibitions.


The Münchhausen Trilemma

In modern Epistemology, we hear about something superficially similar to the Friesian Trilemma. This was evidently proposed by the German philosopher Hans Albert (b.1921) and is called the "Münchhausen Trilemma" or "Agrippa's Trilemma."

The "Agrippa" in this case is Agrippa the Skeptic, who lived in the first century AD and who adduced five arguments against certainty:

  1. Dissent -- there are disagreements even between authorities and experts;
  2. Infinite Regress -- proof can be demanded for the premises of any proof;
  3. Relation -- i.e. Protagorean Relativism, that the character of things is relative;
  4. Assumption -- that premises of proofs are suppositions; and
  5. Circularity -- that arguments may beg the question, i.e. assume what is to be proven.

I cannot tell from such arguments against certainty whether Agrippa was a Pyrrhonian or an Academic Skeptic. The idea of circularity seems to underlie the characterization of Albert's trilemma as the "Münchhausen Trilemma," since Baron Münchhausen is supposed to have pulled himself and his horse out of a swamp by pulling up on his own hair.

Albert's trilemma is simpler than Agrippa's, but it also has a Skeptical motivation. It is stated in terms of three forms of logical justification:

  1. Circularity;
  2. Infinite Regress; and
  3. Regress to Axioms.

Circularity, of course, is an out-and-out logical fallacy, the fallacy of "Begging the Question," or Petitio Principii. If you can assume what is to be proven, you can literally prove anything. With ease. This cannot be taken seriously as an epistemological principle, and of course it is unrelated to the Friesian Trilemma, in which all the branches are themselves forms of genuine justification.

Infinite Regress is not a fallacy but a problem. The regress cannot be completed, which means that the justification cannot be accomplished. One's reasons will usually be exhausted pretty quickly, and some philosophers may be satisfied with that; but there is no pretending that this really proves anything. Indeed, the exhaustion of reasons contradicts the principle of an infinite regress, since the procedure of the regress, to question every new set of premises, is abrogated. The regress, let alone the infinite regress, is abandoned. But if Circularity and Infinite Regress are forms of justification meant to undermine certainty and establish Skepticism, we may well see them as adequate for that purpose.

Finally, Regress to Axioms simply returns us to Aristotle's original treatment of the Regress of Reasons. Albert's own Skepticism appears to rest on questions about the certainty or justification of axioms. While Albert is correct to question Aristotle's solution of self-evidence as the justification of First Principles, he seems to hold, not that such propositions are not actually self-evident, but that they would still be inadequate even if they were, since they would somehow violate the Principle of Sufficient Reason.

This is hard to credit. The "reason" behind self-evident First Principles is, indeed, their self-evidence. On his argument, Albert would need to question the axioms of logic, which are true because of their own form. And perhaps he does. But this would be a profoundly nihilistic Skepticism indeed, far beyond Hume. Instead, we may question self-evidence because of the inherent difficulty of knowing that something, which may not be true because of its form, is indeed self-evident. Agrippa's principle of "Dissent" will do the job. Material self-evidence relies on a subjective certainty, just as in Popper's confused accusations against Kant and Fries. This is undermined by the disagreement of authorities and experts. How does one adjudicate their disagreements?

That is the downfall of Aristotle's solution to the Problem of First Principles. Since Albert's other examples of justifications for axioms, appeals to common sense or to authority, are again logical fallacies, in this case informal fallacies (the Argumentum ad Populum and the Argumentum ab Auctoritate, respectively), they are straw men.

There are two solutions to the Regress to Axioms. Popper's solution is that the axioms are tested in science by falsification, which means that we do not need to verify or justify the axioms in any other way. We simply expect that false axioms will eventually result in false predictions and so, as a practical matter, they will be taken care of. The other solution, of course, is the Kant-Friesian doctrine presented here, of Proof, Demonstration, and Deduction. This is just as open-ended, in its own way, as Popper's solution, but it generalizes the test to falsification under Socratic examination. The quid facti is always subject to new tests.


The Lockean Trilemma

The Friesian Trilemma may be compared with a similar classification in John Locke. Locke distinguished "three degrees of Knowledge, viz. Intuitive, Demonstrative, and Sensitive" [An Essay Concerning Human Understanding, Book IV, Chapter II, §14, "Sensitive Knowledge of particular Existence"]. In this system, "intuitive" knowledge is self-evident, by which, in Locke's own example, we know of our own existence, with a Cartesian certainty. "Demonstrative" knowledge is that which can be established by logical proof, by which Locke believed that the existence of God could be established. Finally, "sensitive" knowledge is that by which through perception we are acquainted with "the particular existence of finite Beings." This constitutes Locke's answer to Descartes, in the sense that, considering our relation to external objects, "some Men think there may be a question made." Locke allows that the certainty of our perceptual knowledge of the world is not as great as with intuitive or demonstrative certainty, but it is nevertheless substantial and sufficient for both practical and philosophical purposes.

Locke is unable to sustain a distinction between intuitive and demonstrative certainty, for he has neglected to notice that proofs are only as sound as their premises, which reduces the level of demonstrative certainty to that of the premises of the proofs. This leaves only the alternatives of intuitive or sensitive certainty for them. It then does not help that his example of demonstrative certainty, namely his own proof for the existence of God, suffers from a trivial logical fallacy (as examined on the page for Locke). Locke provides us with no criterion for intuitive certainty, but there is no impediment to anticipating the logical criterion of Hume and Kant that such truths ("relations of ideas" and "analytic" propositions) cannot be denied without contradiction. This is itself demonstrable through indirect or reductio ad absurdum proofs, where the denial of what is to be proven implies what is to be proven (with classic examples, known since the Greeks, such as the proof that the square root of two is not a rational number).
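
As an illustration of such an indirect proof, here is a minimal sketch, in LaTeX notation, of the classic argument mentioned above, that the square root of two is not a rational number (it is the traditional Greek example, not one that Locke himself gives):

    % Reductio: suppose the proposition to be proven is false.
    Suppose $\sqrt{2} = p/q$ for integers $p$ and $q$ with $\gcd(p, q) = 1$. Then
    \[ 2q^2 = p^2 , \]
    so $p^2$ is even, and hence $p$ is even; write $p = 2m$. Substituting,
    \[ 2q^2 = 4m^2 , \qquad\text{so}\qquad q^2 = 2m^2 , \]
    which makes $q$ even as well, contradicting $\gcd(p, q) = 1$.

The denial of the proposition thus reduces to absurdity, from which the proposition itself follows -- the logical structure of indirect proof described above.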

We might now see how Locke's categories match up with the alternatives of the Friesian Trilemma. This works imperfectly, because they are really not quite about the same things. "Intuitive" and "demonstrative" certainty might be associated with Friesian "Proof," which encompasses logical justification. In the Friesian category of Proof, however, we are not embarrassed that logical proofs are only as certain as their premises, since the Friesian Trilemma is not about degrees of certainty but about procedures of justification. A proposition may be justified by its deductive premises, whatever the status of those premises. When we then seek justification for the premises, Proof will also cover those premises that are logically or analytically true, as can be ascertained through indirect proof.

If the premises of our proofs are not analytic, then they are synthetic, and can be denied without contradiction. Locke's "sensitive" level of certainty will then apply to the Friesian procedure of justification in "Demonstration," by which an intuitive ground is indicated in perception. We may now say that a Cartesian level of certainty in this case is compromised by the fallibility of our perceptions. However, this does not discredit the procedure of justification or its customary degree of certainty; for as our perceptions are fallible, they are also corrigible, and by the same procedure.

Thus, if we come to doubt the nature of our judgment about some empirical object, perhaps in being contradicted by some other observer, we might consult other observers, or we might directly reexamine the object, if this is possible. A considerable amount of progress in science is achieved by the simple expedient of reexamining the phenomena, as when Philoponus and Galileo tested Aristotle's assertions about falling bodies. That Philoponus could not be certain that bodies of different weights were falling at different rates meant that a more careful examination of the phenomenon was in order, while the lack of an observable difference required some reflection on the laws of nature that must be behind such a result.

This is also part of the trivial procedures of daily life, as when we come to suspect that there is some foreign object in the soup. An appeal to the cook may be in order, but the truth of the matter cannot be established by authority alone -- especially as we must take into account the bias of the cook to affirm the unadulterated character of his product. If we produce an offending fly, the cook may accuse us of fraud.

For the Friesian category of "Deduction," Locke has no corresponding level of certainty. His failure to apply logical criteria to his characterizations of "intuitive" and "demonstrative" certainty precludes an understanding of the problem of synthetic a priori propositions, even as the failure of many modern philosophers to attend to Hume's own logical criteria precludes their understanding of Kant's response to Hume. The Friesian theory of non-intuitive immediate knowledge is thus even further removed from informed consideration.


The Foundations of Value, Part III

Metaphysical Issues: The Theory of the Good

after Plato, Neoplatonism, Kant, & Zoroastrianism


Bonum et ens sunt idem secundum rem.
The Good and Being are really the same thing.

St. Thomas Aquinas, Summa Theologica, Part I, Question 5, Article 1.

G.E. Moore is famous for claiming (in Principia Ethica) that the good cannot be defined. Robert Pirsig, in Zen and the Art of Motorcycle Maintenance, says something similar but then later has to admit that he says quite a bit about what the good ("Quality") is. Nevertheless, what he says about the good is less a definition than a description of its reality:  he thinks that Quality is a deeper level of reality (like the Tao) than the world as we see it.

That approach to the good, however, goes all the way back to Plato. In the Republic Plato distinguishes two levels of reality, the World of Becoming (transient and imperfect), which we see, and the World of Being (eternal and unchanging), which we don't see. Although Plato deliberately avoids giving a definition, he says that the good is to the world of Being what the sun is to the visible world -- the source of all knowledge and even existence. Plato thus does not give us a literal account but only a simile or analogy.

Several centuries after Plato, the Neoplatonists said that the good is simply Being itself (or beyond Being) and the source of all existence. The Neoplatonists used Plato's metaphor of the sun, claiming that all existence radiates from the good the way light radiates from the sun -- and that evil is the outer darkness of non-existence.

In modern philosophy, Immanuel Kant did not speculate about the relation of value to being, but he did distinguish two levels of reality as Plato did:  the world as it exists in our perception ("appearances" or phenomena), and the world as it exists apart from our perception or in itself (things-in-themselves or noumena). Kant thought that all our knowledge was about phenomena, except for moral knowledge, which was based on things-in-themselves. Kant didn't think we could know how that worked, but, rather like Hume, he took it as a given.

This all certainly makes it sound like the word "good" is hard to define, but that is actually wrong if what we mean by good is the common sense of an instrumental good. An instrumental good is something that is good for something, as a light bulb is good for providing light. In that sense, "good" can be exhaustively defined:  the instrumentally good is that which is sufficient to its purpose. Saying that a light bulb is good for providing light is the same as saying that the purpose of a light bulb is to provide light and, perhaps, that it accomplishes that purpose to a greater extent than other things, like candles, that provide light.

Many ordinary definitions of "good" suffer from the difficulty of a circular definition by merely replacing "good" with an equivalent value term, like "value" itself, or "beneficent," which comes from the Latin word for "good," or "moral order," all of which simply throw us back to the larger question about the nature of value, which is what our question about the good was really about in the first place.

The idea of purpose adds a completely different dimension to the question about the good. Note that there are two possible instrumental judgments about light bulbs:  a light bulb may be a good light bulb, i.e. it does what a light bulb is supposed to do, and it may be a good light source, i.e. it does what any light source is supposed to do. A good hammer does what a hammer is supposed to do, drive nails. Even driving nails "well" can be given a purposive definition:  the nails are expected to go in straight, quickly, and leave the nailhead flat on, or just in, the wood. A good car does what a car is intended to do -- get you where you want to go reliably and with some degree of comfort.

If these things are not adequate to their purpose, we say that there is something "wrong" with them. If what is wrong can be set right, then they can continue being good examples of their kind. If what is wrong cannot be set right, then they become bad examples of their kind, a bad hammer or a bad car. We discard them. A car that comes defective from the vendor is a "lemon."

Instrumental goods do not create the greatest difficulties with defining the good. There are other goods that are good in themselves. They are not good for anything. These are the ultimate goals or ends of mere instrumental goods, which otherwise would generate an infinite regress of ends. They are intrinsic, rather than instrumental, goods; and intrinsic goods are what seem to resist definition.

Intrinsic goods still maintain an obvious relation to purposes, since, as Aristotle puts it, "the good is that at which all things aim." The question, however, which Aristotle did not ask, is why things aim at these ends. The good is not good because things aim at it. Instead, things aim at the good because it is good. So why is an intrinsic good, good? That is the kind of question to which Moore decided there was no answer, and to which Plato etc. could only provide a metaphorical answer. All in all, this is rather odd. How can an instrumental good be so easily defined but an intrinsic good not at all?

However, everyone may have gone looking far and wide for the answer that was at hand all along. If an instrumental good is what serves its purpose, an intrinsic good may do the same. The only difference is going to be that an intrinsic good will be its own purpose. Just as an instrumental good is the means to an end, where the end is distinct, an intrinsic good is a means to an end where it is an end in itself.

Aristotle even had a word for this:  ἐντελέχεια, "entelechy," meaning "actuality," "fulfillment," or "having the end within." Since instrumental goods always serve some end, intrinsic goods prevent that process from continuing forever. This definition is adequate formally, but it is not adequate intuitively:  it is still hard to say what this is supposed to mean or why it would be a good definition (instrumentally) of the good. What is intuitive about this view is that it maintains the connection between the good and purpose. A deeper understanding of intrinsic goods thus must come with further reflection on the nature of purposes.

A purpose is normally about something in the future. If the purpose of the hammer is to drive nails, we usually pick up the hammer with the intention that we are going to drive nails with it. Once the nails are driven, the hammer is returned to storage. The hammer becomes irrelevant because the driven nails stand independent of the hammer. An end in itself, however, would not be separate from its purpose. There is no futurity, just presence. We do not pick up the end in itself with the intention of achieving its purpose in the future. We pick up the end in itself because the purpose is already achieved therein, just as we watch a movie in great measure for the end in itself of being entertained.

Nevertheless, there must be some sense that the purpose of the thing, like the value of the thing, is distinct from the factual instrumentality of the thing. Otherwise fact and value would simply be identical, which they are not. The uniqueness of an intrinsic good is that fact and value are equated, but they are still not the same thing. Otherwise we would not need to identify a thing as intrinsically good of its kind. But how can one and the same thing contain a factual presence and something that corresponds to the futurity of an instrumental good? That is where the two levels of reality, after the fashion of Plato and Kant, come in.

Hume's distinction between fact and value puts value at a grave disadvantage. Facts seem to be based on experience and on the world, while value doesn't seem to be based on anything of the sort -- indeed, Hume's distinction is usually used to dismiss any foundational existence of value. What is is there for all to see, but what ought to be doesn't need to exist at all. Since Hume himself was a subjectivist, the obvious thing to say might be that value is based on nothing but our own feelings and so doesn't exist separate from us. Hume's subjectivism, however, is a theory of how value is known, not of what value is. Hume didn't think that we could say what the basis of value was any more than he thought we could say what the basis of matters of fact was. Hume's scepticism did not mean that he thought matters of value were any less there than matters of fact. He just didn't think we could understand how.

Identifying the good with being, like Plato, the Neoplatonists, or Pirsig, certainly reverses the disadvantage that value has vis à vis fact, but it doesn't explain how things which ought to be but don't exist have some relation to existence. Distinguishing two levels of reality, like Plato or Kant, can divorce fact from value, but it just seems to introduce two different kinds of existence, which sounds like an ad hoc, arbitrary sort of solution. Updating Kant a bit, however, can take care of this.

The kind of existence that we experience is not the same as the existence of some external object like a rock. What exists in the rock cannot be destroyed. In modern physics this is called the Conservation of Mass (or Mass-Energy). In Greek philosophy, Parmenides claimed that existence could not be destroyed because non-existence or Nothing was "altogether unthinkable." The Indian classic, the Bhagavad Gita, agreed with this, that "the unreal never is; and the Real never is not."

But our existence, as we know well, is vulnerable to non-existence. That is called "death." Death, however, does not mean the non-existence of our bodies. It doesn't even necessarily mean the death of our bodies. It means the end of consciousness, since the kind of existence that we experience is the existence of consciousness. Consciousness, in turn, is a very peculiar sort of thing.

In modern philosophy, Edmund Husserl (following Franz Brentano) has said some of the most interesting things about consciousness:  and one thing he says is that consciousness is always consciousness of something. Consciousness always has an object:  it is always a relationship between subject and object. This is called the "intentionality" of consciousness. In Indian philosophy, the Brhadaranyaka Upanisad says something similar, that knowledge always involves a Knower and a Known -- but the Knower is not Known as Knower. If the subject becomes an object, to be known, then it is not the subject anymore; but the subject is still there, knowing the object.

The contents of consciousness exist by virtue of the existence of the subject. The subject, however, is not known as such by the contents of consciousness. The contents are spontaneously projected onto intended objects. Objects are known by means of the contents of consciousness. This divorces consciousness from the real existence either of subject or object:  What exists in the subject is projected onto objects, divorcing us from the subject (the Knower). But we only know objects by those subjective contents, leaving us still divorced from the existence of external objects as such.

René Descartes got tangled up in this relationship and decided that we couldn't be sure that external objects even existed. The whole world could be a hallucination (the possibility of solipsism -- I alone exist). Descartes felt sure that he existed (his consciousness does exist) and so thought that the mind is better known than the body. But consciousness divorces us from the existence of the subject just as much as it divorces us from the existence of the body (an external object). Hume and Kant both saw that any uncertainty we have about external existence (the body) must be balanced by uncertainty about internal existence (the mind).

The strangeness of consciousness enables us to combine Plato, Kant, etc. into a theory of the good. We can say that existence is value, being is the good, but that this identity and symmetry is broken by consciousness, which is the kind of existence that we possess. The asymmetry that results is the difference between subject and object on one side and between is and ought on the other. Consciousness divorces its contents from existence as such, either as subject or as object. That makes consciousness precarious:  it makes death, the non-existence of consciousness, possible for us. The existence that we enjoy is a kind of reflection of existence proper. The reflection gives us matters of fact; but our real, direct perception of unreflected being is through matters of value:  Value is the shadow that being casts into phenomena.

The role of death suggests a parallel with another ancient conception of the good:  Zoroastrianism, the ancient religion of Iran, founded by the prophet Zarathushtra -- Zoroaster in Greek -- Zardašt, Zardošt, Zardohašt, Zarâdošt, and other pronunciations in Arabic or Modern Persian. (Nietzsche wrote it "Zarathustra" in German; writing it "Zarathushtra" with an English pronunciation is quite close to the original.) Zoroastrianism basically identified life with the good and death with evil. Things like violence and pain are then evil through their relationship with death:  pain is the body's way of telling us that damage and death threaten it; and violence either threatens to create pain or to effect death itself. If that is a good identification, then consciousness creates the realm of good and evil, as it does the realm of life and death.

In the Symposium Plato says that the only form of value we can see is beauty, and he thought that appreciating beauty would lead us on to insight into the World of Being. Indeed, we can see beauty. At the same time we are aware that beauty is not the same thing as the factual or scientific attributes of an object. Thus we say that "beauty is in the eye of the beholder." But it isn't.

Beauty is in the object, and it is in beauty that we see through the factual reflection of reality into Being itself. To be sure, beauty is relativistic and cannot be reduced to a conceptual system (de gustibus non est disputandum -- "there is no disputing taste"), but other kinds of value, like morality, can be better, even absolutely, conceptualized. They actually are all kinds of beauty. Morally good things may even be more beautiful than the beauties of nature or art, even if it is more difficult to perceive them as such. As Kant says:  "Two things fill the mind with ever new and increasing admiration and awe, the oftener and more steadily we reflect on them:  the starry heavens above me and the moral law within me." These are cases of the external and internal good and beautiful.

Contemplating the beauty of nature may raise the question, as it did for Kant, whether the world has any purpose. Is the world, or life, for anything? This is the same as to ask whether any natural objects, apart from our purposes, are instrumental goods. However, trying to conceive of natural objects as instrumental goods leads into the Antinomies of purpose, suffering, etc. Teleological explanations are properly excluded from science, which deals with natural objects only as the products of causation.

We must remember, however, that the rejection of teleological explanations merely means the rejection of natural objects as instrumental goods. Purpose is still part of the world when we regard natural objects as intrinsic goods. Indeed, all natural goods are intrinsic goods. There are no natural instrumental goods. Thus, no good thing apart from human purposes can be for anything except itself. Whatever science says about the "starry heavens above me," they still can bespeak the very things that Plato or Kant thought that they did.


Copyright (c) 1996, 2001, 2006, 2010, 2021 Kelley L. Ross, Ph.D. All Rights Reserved