Relativism

The first clear statement of relativism comes with the Sophist Protagoras, as quoted by Plato, "The way things appear to me, in that way they exist for me; and the way things appear to you, in that way they exist for you" (Theaetetus 152a). Thus, however I see things, that is actually true -- for me. If you see things differently, then that is true -- for you. There is no separate or objective truth apart from how each individual happens to see things. Consequently, Protagoras says that there is no such thing as falsehood. Unfortunately, this would make Protagoras's own profession meaningless, since his business is to teach people how to persuade others of their own beliefs. It would be strange to tell others that what they believe is true but that they should accept what you say nevertheless. So Protagoras qualified his doctrine: while whatever anyone believes is true, things that some people believe may be better than what others believe.

Plato thought that such a qualification reveals the inconsistency of the whole doctrine. His basic argument against relativism is called the "Turning the Tables" (Peritropé, "turning around") argument, and it goes something like this: "If the way things appear to me, in that way they exist for me, and the way things appear to you, in that way they exist for you, then it appears to me that your whole doctrine is false." Since anything that appears to me is true, it must be true that Protagoras is wrong [1]. Relativism thus has the strange logical property of not being able to deny the truth of its own contradiction. Indeed, if Protagoras says that there is no falsehood, then he cannot say that the opposite, the contradiction, of his own doctrine is false. Protagoras wants to have it both ways -- that there is no falsehood but that the denial of what he says is false -- and that is typical of relativism. And if we say that relativism simply means that whatever I believe is nobody else's business, then there is no reason why I should tell anybody else what I believe, since it is then none of my business to influence their beliefs.
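
The logic of the peritropé can be made explicit. As a sketch, assuming Protagoras's thesis can be read as an unrestricted schema (and setting aside the relativizing "for me," which Plato's argument also sets aside), let B(s, p) abbreviate "person s believes proposition p" and T(p) "proposition p is true." The thesis is then

\[ (P) \qquad \forall s\, \forall p\, \bigl( B(s, p) \rightarrow T(p) \bigr). \]

Now let o be any opponent who believes the denial of the thesis, so that B(o, ¬P). Instantiating (P) with s = o and p = ¬P gives

\[ B(o, \neg P) \rightarrow T(\neg P), \]

and hence T(¬P): the thesis certifies the truth of its own denial the moment anyone denies it.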

So then, why bother even stating relativism if it cannot be used to deny opposing views? Protagoras's own way out -- that his view must be "better" -- doesn't make any sense either: better than what? Better than opposing views? But there are no opposing views, by relativism's own principle. And even if we can identify opposing views -- taking contradiction and falsehood seriously -- what is "better" supposed to mean? Saying that one thing is "better" than another is always going to involve some claim about what is actually good, desirable, worthy, beneficial, etc. What is "better" is supposed to produce more of what is good, desirable, worthy, beneficial, etc.; but no such claims make any sense unless it is claimed that the views expressed about what is actually good, desirable, worthy, beneficial, etc. are true. If the claims about value are not supposed to be true, then it makes no difference what the claims are: they cannot exclude their opposites.

It is characteristic of all forms of relativism that they wish to preserve for themselves the very principles that they seek to deny to others. Thus, relativism basically presents itself as a true doctrine, which means that it will logically exclude its opposites (absolutism or objectivism), but what it actually says is that no doctrines can logically exclude their opposites. It wants for itself the very thing (objectivity) that it denies exists. Logically this is called "self-referential inconsistency," which means that you are inconsistent when it comes to considering what you are actually doing yourself. More familiarly, that is called wanting to "have your cake and eat it too." Someone who advocates relativism, then, may just have a problem recognizing how their doctrine applies to themselves.
"Here again we see the contrast
between a long history of struggling
with difficult logical issues and the
assertion by the race-gender-class
critics of a logically unsophisticated
position that is immediately contradicted
by their own actions. Although
theoretically against judgments of
literary value, they are, in practice,
perfectly content with their own;
having argued that hierarchies are
elitist, they nonetheless create one by
adding Alice Walker or Rigoberta
Menchu to their course reading lists.
They vacillate between the rejection of
all value judgments and the rejection of
one specific set of them -- that which
created the Western canon."

John M. Ellis, Literature Lost
[Yale University Press, 1997], p. 197

This problem turns up in many areas of dishonest intellectual or political argument, as in the quotation above from John M. Ellis.

Modern relativists in philosophy, of course, can hardly fail at some point to have this brought to their attention. The strongest logical response was from Bertrand Russell, who tried to argue that nothing could logically refer to itself (called his "Theory of Logical Types" [2]). That was a move that defeated itself, since in presenting the Theory of Types, Russell can hardly avoid referring to the Theory of Types, which is to do something that he is in the act of saying can't be done or that doesn't make any sense [3]. In general, one need merely consider the word "word" and ask whether it refers to itself. Of course it does. The word "word" is a word. Other modern relativists in philosophy (e.g. Richard Rorty) try to pursue Protagoras's own strategy that their views are "better" rather than "true." Rorty sees this as a kind of Pragmatism, which is not concerned with what is true but just with what "works."

Pragmatism is really just a kind of relativism; and, as with Protagoras's own strategy, it is a smoke screen for the questions that ultimately must be asked about what it means that something is "better," or now that something "works." Something "works," indeed, if it gets us what we want -- or what Richard Rorty wants. But why should we want that? Again, the smoke screen puts off the fatal moment when we have to consider what is true about what is actually good, desirable, worthy, beneficial, etc. All these responses are diversions that attempt to obscure and prevent the examination of the assumptions that stand behind the views of people like Rorty. It is easier to believe what you believe if it is never even called into question, and that is just as true of academic philosophers like Rorty as it is for anybody else. Being intelligent or well educated does not mean that you are necessarily more aware of yourself, what you do, or the implications of what you believe. That is why the Delphic Precept, "Know Thyself" (Gnôthi seautón), is just as important now as ever.

Relativism turns up in many guises. Generally, we can distinguish cognitive relativism, which is about all kinds of knowledge, from moral relativism, which is just about matters of value. Protagoras's principle is one of cognitive relativism. This gives rise to the most conspicuous paradoxes, but despite that there are several important forms of cognitive relativism today. Historicism is the idea that truth is relative to a given moment in history and that truth actually changes as history does. This derives from G.W.F. Hegel, although Hegel himself thought there was an absolute truth, which would come at the "end of history" -- where he happened to be himself, curiously. This kind of historicism was taken up by Karl Marx, who thought that every kind of intellectual structure -- religion, philosophy, ethics, art, etc. -- was determined by the economic system, the "mode of production," of a particular historical period. A claim to truth about anything in any area could therefore be simply dismissed once its economic basis was identified: labeling something "bourgeois ideology" means that we don't have to address its content. Like Hegel, however, Marx did think there was an absolute truth at the "end of history," when the economic basis of society permanently becomes communism. Modern Marxists, who don't seem to have noticed the miserable and terrible failure of every attempt to bring about Marx's communism, can hardly do without their absolutizing "end of history" [4]; but modern Hegelians (e.g. Robert Solomon) can create a more complete relativism by removing Hegel's idea that there is an "end" to history. Unfortunately, that creates for them the typical relativistic paradox, for their own theory of history no longer has any basis for its claim to be true for all of history. Hegel didn't make that kind of mistake.

Another modern kind of cognitive relativism is linguistic relativism, the idea that truth is created by the grammar and semantic system of a particular language. This idea in philosophy comes from Ludwig Wittgenstein, but it turns up independently in linguistics in the theory of Benjamin Lee Whorf. On this view the world really has no structure of its own; its structure is entirely imposed by the structure of language. Learning a different language thus means in effect creating a new world, where absolutely everything can be completely different from the world as we know it. Wittgenstein called the rules established by a particular language a "game" that we play as we speak the language. As we "play" a "language game," we indulge in a certain "form of life." [5]

In linguistics, Whorf's theory has mostly been superseded by the views of Noam Chomsky that there are "linguistic universals," i.e. structures that are common to all languages. That would mean that even if language creates reality, reality is going to contain certain universal constants. In philosophy, on the other hand, Wittgenstein is still regarded by many as the greatest philosopher of the 20th century. But his theory cannot avoid stumbling into an obvious breach of self-referential consistency, for the nature of language would clearly be part of the structure of the world that is supposedly created by the structure of language. Wittgenstein's theory is just a theory about the nature of language, and as such it is merely the creation of his own language game. We don't have to play his language game if we don't want to. By his own principles, we can play a language game where the world has an independent structure, and whatever we say will be just as true as whatever Wittgenstein says. Thus, like every kind of relativism, Wittgenstein's theory cannot protect itself from its own contradiction. Nor can it avoid giving the impression of claiming for itself the very quality, objective truth, that it denies exists. If it does not make that claim, there is no reason why we need pay any attention to it.

Although Protagoras gives us a principle of cognitive relativism, his own main interest was for its consequences in matters of value. Relativism applied to value -- that truths of right and wrong, good and evil, and the beautiful and the ugly, are relative -- is usually called moral relativism. This is inherently a more plausible theory than a general cognitive relativism, for people disagree much more about matters of value than they do about matters of fact. And if we are talking about something like justice or goodness, it is much more difficult even to say what we are talking about than it is when we are talking about things like tables and chairs. We can point to the tables and chairs and assume that other people can perceive them, but we have a much tougher time pointing to justice and goodness. Nevertheless, moral relativism suffers from the same kinds of self-referential paradoxes as cognitive relativism, even if we divorce it from cognitive relativism and place it in a world of objective factual truths. We can see this happen in the most important modern form of moral relativism: cultural relativism.

Cultural relativism is based on the undoubted truth that human cultures are very different from each other and often embody very different values. If Italians and Arabs value female chastity and Tahitians and Californians don't, it is hard to see how we are going to decide between these alternatives, especially if we are Californians. A classic and formative moment in this kind of debate came when a young Margaret Mead went to Sâmoa and discovered that casual sex, non-violence, and an easygoing attitude in general made adolescence in Sâmoa very much different from adolescence back in the United States. Her conclusions are still widely read in her book Coming of Age in Samoa. These discoveries simply confirmed the views of Mead's teacher, Franz Boas, that a culture could institute pretty much any system of values and that no culture could claim access to any absolute system of values beyond that. Since Boas and Mead were anthropologists, this gave cultural relativism the dignity, not just of a philosophical theory, but of a scientific discovery. Strong statements about cultural relativism are also associated with another famous anthropologist, and friend of Mead's, Ruth Benedict. Today the anthropological empirical evidence that cultures are different is usually regarded as the strongest support for cultural relativism, and so for moral relativism.

There are several things wrong with this. First of all, Mead's own "discoveries" in Sâmoa were profoundly flawed. What Sâmoans have always known is that Mead was deceived by her teasing adolescent informants and failed to perceive that female chastity was actually highly prized in Sâmoa and that there was very little of anything like "casual sex" going on there in Mead's day. Even in her book there are strange aspects, as when Mead characterizes a certain kind of casual sex as "clandestine rape." That has an odd ring -- until we discover that it really is a kind of rape, not a kind of casual sex. It also turns out that Sâmoan culture is rather far from being non-violent or easygoing [6]. The anthropological world has had a tough time coming to grips with this, because of Mead's prestige and because of the weight of ideological conclusions that has rested on it; but the whole story is now out in a book, Margaret Mead and Samoa, by an anthropologist from New Zealand named Derek Freeman. Now, there actually are other Polynesian cultures, such as in Tahiti, where attitudes about sex seem to be rather freer than they are in Sâmoa or even in the United States [7]. So it might be possible to reargue Mead's case with different data. But the point of this episode is that it shows us how easy it is for an anthropologist with ideological presuppositions to see what they want to see. This kind of "scientific evidence" is a slippery thing, and it is too easy to draw the kinds of sweeping conclusions that were drawn from it. If an anthropological study is going to prove a fundamental point about the nature of value, we must be careful about what the point is supposed to be and how such a thing can be supported by evidence.

The great problem with the logic of something like Mead's "discoveries" is that even if we accept that cultures can have some very different values, this still doesn't prove cultural relativism: for while cultural relativism must say that all values are relative to a particular culture, a cultural absolutism merely needs to deny that, saying that not all values are relative to a particular culture, i.e. that some values are cultural universals. Thus, Margaret Mead could have visited a hundred Sâmoas and found all kinds of values that were different; but if there is even one value that is common to all those cultures, cultural relativism is refuted. That would be a matter for an empirical study too, although a much more arduous one.
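
The logical asymmetry here is just the difference between a universal and an existential claim. As a sketch in quantifier notation, with Rel(v) assumed as shorthand for "value v is relative to some particular culture":

\[ \text{Cultural relativism:}\quad \forall v\, \mathrm{Rel}(v) \qquad\qquad \text{Its denial:}\quad \exists v\, \neg\mathrm{Rel}(v) \]

Since the denial is existential, a single cultural universal falsifies the universal claim, however many genuinely relative values are documented alongside it; conversely, no finite catalogue of relative values, however large, can verify it.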

But the deepest problem with cultural relativism and its anthropological vindication, whether by Mead or others, comes when we consider what it is supposed to be. As a methodological principle for anthropology, we might even say that cultural relativism is unobjectionable: anthropologists are basically supposed to describe what a culture is like, and it really doesn't fit in with that purpose to spend any time judging the culture or trying to change it. Those jobs can be left to other people. The anthropologist just does the description and then moves on to the next culture, all for the sake of scientific knowledge. Unfortunately, it is not always possible for an anthropologist to be so detached. Even in Coming of Age in Samoa, Mead clearly means to give us the impression that easygoing Sâmoan ways are better than those of her own culture (or ours). Since, as it turns out, Sâmoan culture wasn't that way after all, we end up with Mead in the curious position of making her own a priori claim about what kinds of cultural values in general are valuable, regardless of who might have them. She didn't just see what she wanted to see, but she saw the better world that she wanted to see. More importantly, cultural relativism, as many anthropologists end up talking about it, gets raised from a methodological principle for a scientific discipline into a moral principle that is supposed to apply to everyone: that, since all values are specific to a given culture, nobody has the right to impose the values of their culture on any other culture or to tell any culture that its traditional values should be different.

However, with such a moral principle, we have the familiar problem of self-referential consistency: for from what culture does cultural relativism itself come, as a moral value? And as a way of telling people how to treat cultures, does cultural relativism actually impose alien values on traditional cultures? The answer to the first question, of course, is that cultural relativism is initially the value of American and European anthropologists, or Western cultural relativists in general. The answer to the second question is that virtually no traditional cultures have anything like a sense of cultural relativism. The ancient Egyptians referred to their neighbors with unfriendly epithets like "accursed sand-farers" and "wretched Asiatics." In the objects from Tutankhamon's tomb, we can see the king slaughtering various enemies of Egypt, African and Asiatic. The Greeks actually gave us the word "barbarians," which was freely used by the Romans and which we use to translate comparable terms in Chinese, Japanese, etc. Traditional cultures tend to regard themselves as "the people," the "real people," or the "human beings," while everyone else is wicked, miserable, treacherous, sub-human, etc. [8]

The result of this is that if we want to establish a moral principle to respect the values of other cultures, we cannot do so on the basis of cultural relativism; for our own principle would then mean that we cannot respect all the values of other cultures. There are going to be exceptions; and it actually isn't too difficult to make a list of other exceptions we might like to make: slavery, human sacrifice, torture, infanticide, female circumcision, and other bodily mutilations of children or criminals. Those are the easy ones. But once given those things, the task before us is clearly a more difficult and sobering one than what we contemplated through the easy out of cultural relativism. On the other hand, we might try to save cultural relativism by denying that it is a moral principle. Of course, if so, nobody would care about it, and there wouldn't be anything wrong with one culture conquering and exterminating another, especially since that has actually been the traditional practice of countless cultures through the ages. Instead, a principle of cultural relativism never enters public debate without being used as a moral principle to forbid someone from altering or even from criticizing some or all the values of specific cultures. As a practical matter, then, it is meaningless to try to save cultural relativism by erasing the moral content that is usually claimed for it.

Cognitive relativisms, of course, will always imply some kind of moral or cultural relativism. Historicism always does that, and, for linguistic relativism, Wittgenstein actually provides us with a nice term for relative systems of value: "forms of life." The hard part is when we then ask if Hitler and Stalin simply had their own "forms of life," which were different from, but not better or worse than, ours. Only an ideologue, infatuated with relativism, would answer, "yes." But if we answer "yes," there is, of course, nothing wrong with us defeating and killing Hitler or Stalin. But neither would there be anything wrong with them defeating and killing us. We would have no moral right to try and stop them, but then they would have no moral right to complain about us trying to stop them -- except in terms of their own "form of life," which we don't have to care about. On the other hand, people who talk about "forms of life," and who even might answer "yes" to this kind of question, inevitably make the same move as Protagoras and try to start claiming that their "form of life" is "better" than Hitler's, or ours. So the whole cycle of paradox begins again.

The problem with recognizing the self-contradictory and self-defeating character of relativism is that it does remove the easy out. We may know thereby that there are absolute and objective truths and values, but this doesn't tell us what they are, how they exist, or how we can know them. In our day, it often seems that we are still not one iota closer to having the answers to those questions. Thus, the burden of proof in the history of philosophy is to provide those answers for any claims that might be made in matters of fact or value. Socrates and Plato got off to a good start, but the defects in Plato's theory, misunderstood by his student Aristotle, immediately tangled up the issues in a way that still has never been properly untangled. Most philosophers would probably say today that there has been progress in understanding all these issues, but then the embarrassment is that they mostly would not agree about just what the progress consists in. The relativists still think that progress is to return to what Protagoras thought in the first place. What they really want is that easy out, so as not to need to face the awesome task of justifying or discovering the true nature of being and value.


Copyright (c) 1996, 1998, 1999, 2000, 2008, 2012 Kelley L. Ross, Ph.D. All Rights Reserved

Relativism, Note 1


Protagoras, for his part, admitting as he does that everybody's opinion is true, must acknowledge the truth of his opponents' belief about his own belief, where they think he is wrong.

Theaetetus 171a. F.M. Cornford translation.



Relativism, Note 2


Russell was originally trying to resolve paradoxes of self-reference in Set Theory. It was just a happy added benefit for Russell that his theory could be used to save Relativism. But the consensus now is that Set Theory is better off without the Theory of Types.
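
For reference, the set-theoretic paradox of self-reference that motivated the theory is Russell's own, concerning the set of all sets that are not members of themselves:

\[ R = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad R \in R \leftrightarrow R \notin R. \]

The Theory of Types blocks the contradiction by ruling expressions like x ∈ x ungrammatical; standard Zermelo-Fraenkel set theory blocks it instead by restricting comprehension, so that {x | x ∉ x} is never guaranteed to be a set. The latter is the sense in which set theory now gets along without the types.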



Relativism, Note 3


On a more technical level, there is the question of which "type" the Theory of Types itself belongs to. Each "type" of expression can only refer to the next lower order type of thing (and never to itself), but the Theory of Types obviously refers to all types, and this violates the fundamental principle of the theory.
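
In a simple formulation of the hierarchy (a sketch, with the notation assumed for illustration): individuals are of type 0, and a propositional function taking type-n arguments is of type n + 1, so that

\[ \varphi(x) \text{ is well-formed only if } \mathrm{type}(\varphi) = \mathrm{type}(x) + 1, \]

which excludes φ(φ) outright. But the principle just stated quantifies over expressions of every type at once, and so can be assigned no type of its own -- precisely the violation described above.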



Relativism, Note 4


But notice that Marx and Marxists must fall into a paradox of self-referential consistency anyway: there may be a cognitively absolute standpoint for knowledge, but Marx is not in it. Marx's own consciousness did not depend on a communist, proletarian mode of production; so he cannot really claim to be producing absolute knowledge, much as he would like to. Marx's own "mode of production" was actually to sponge off his relatives and friends, including his friend Engels, who derived his money from the family business--a factory: Engels was himself a capitalist.


Relativism, Note 5:
The Whorfian Hypothesis

Benjamin Lee Whorf was fascinated by the grammatical peculiarities of isolated languages like that of the Hopi people of the American Southwest. However, John McWhorter has pointed out that difficult and elaborate systems of both grammar and phonology are indeed characteristic of isolated languages [cf. The Power of Babel, A Natural History of Language, Perennial, 2003]. When languages are in contact with other languages, and especially when a language is often learned by adult speakers of other languages, the grammar and phonology both tend to simplify. The peculiarities melt away, and we end up looking at many things that would qualify as Chomsky's linguistic universals. Thus, trade languages like Malay, which are largely spoken by adults as second languages, have some of the simplest grammars and easiest pronunciations of any languages. English itself has experienced simplification from its Germanic origins and even in comparison to the overlay of French introduced by the Norman conquest. This simplification began with the settlement of Danes and Norwegians in England, who, although their languages were related and comparable to English, nevertheless were learning the new language as adults. English ends up with a grammar where the nouns act as they do in French, while the verbs act as they do in German. This avoids the complexity of French verbs and German nouns, both of which are highly inflected. People learning English may complain about the spelling or the idioms, but they don't need to deal with the challenges of Chinese pronunciation or Russian grammar.

Whorf's thesis that the structure of language imposes a map or paradigm on thought, which the speaker cannot avoid, is refuted by a couple of simple considerations. This imposition would come either from the vocabulary or from the grammar of the language. I recently heard a native speaker of Chinese claim that, because "White House" in Chinese actually means "White Palace" (白宮), the Chinese language could not express the values of a Constitutional Republic (which doesn't put the President in a palace). However, the Chinese language, which contains other words for domicile, comparable to "house," "home," or "mansion" in English, did not force anyone to translate "White House" as "White Palace." That reflected the intention of the translator, not of the language.

In any case, if a person's expression is limited by the available vocabulary of their language, there are three things they can do: (1) coin a new word, defined with the required meaning; (2) borrow a word from another language which already has the meaning wanted; and (3) use an existing word but give it a new, and perhaps related, meaning. These actions can be combined. For instance, lens in Latin means "lentil" (from lentilis, the adjectival version). This has been borrowed from Latin, not to mean a kind of bean but to mean a piece of glass, shaped a bit like a lentil, that has the property of focusing light. In modern usage, the origin is entirely forgotten. Nota bene: Each of these actions by itself refutes the thesis that language determines thought, for each of them is a manipulation of language to express thoughts that, in the judgment of the speaker, could not previously be expressed save through awkward periphrasis. This also refutes the assertion of S.I. Hayakawa, as a fundamental principle of Semantics, that "The way you talk determines the way you think." No. In these cases, the way you talk is determined by the way you think and the devices you must use, in the face of poor available vocabulary, to express the thought.

The more serious and plausible side of Whorf's thesis is that the grammar of a language is what imposes the map or paradigm on thought. Here languages like Hopi provide some support, with grammatical inflections that specify all sorts of characteristics of objects (e.g., "long," "short," "sharp," etc.) that we would often rather not bother with. The remedy for this is a simple one: Break the rules. Grammar isn't like gravity, and someone speaking Hopi will not fall to their death if they skip some of the inflections required by customary grammar. As it happens, the grammar of languages tends to change and simplify, not when speakers are deliberately breaking the rules, but when they can't help it -- either because they are not native speakers (like the Vikings in England), who cannot remember or get all the rules right, or, more importantly, because they are young. A more complicated grammar is more difficult to learn, and we know how even the relatively simple grammar of English is often mangled by children or teenagers -- the new, trendy expressions of teenagers may be subversive of traditional grammar.

Thus, the impression that we get from Whorf or Hayakawa, that people are imprisoned by their grammar, is not true. Grammar is fragile, not strong, and in every generation, or with every influx of new but adult speakers, its strength is sorely tested. The more complex and inflexible systems will break down; and indeed we only find them in languages that have been isolated for centuries, situations that also tend to go with very conservative cultures, where there is little call for, or evidence of, much in the way of new ideas. Also, since the rules of grammar are often ambiguous in their application, there is room for variation even without quite breaking them. In turn, however, competent adult speakers can explicitly break the grammatical rules, at need, just as easily as they can coin, borrow, or change words and their meanings.




Relativism, Note 6


Although, outside of Sâmoa, Sâmoans themselves don't always like to admit this. On a personal note, the first thing I ever heard about Sâmoan behavior was from my first wife, who was Part Hawaiian (Hapa Haole) and had lived all her life in Hawaii. Once she happened to mention that Sâmoans in Honolulu had a reputation for violence--e.g. beating up sailors with baseball bats. Years later I saw some Sâmoans on television in Los Angeles, after an incident with the Los Angeles County Sheriff's Department, complaining about the "stereotype" of Sâmoans being violent, when I had never heard any such thing in Los Angeles. I suspect that most Angelenos would be surprised even to know that Sâmoans lived among them, much less have any ideas about what they are like. I only knew the "stereotype" because of my life in Hawaii.

While the crime rate in Sâmoa is a matter for police records, this all seems a matter of one stereotype against another: of Polynesia as a peaceful place of love, beaches, and hulas, as against harsher versions. The reality certainly was harsher: All of Polynesia was ruled by a warrior nobility, the ali'i in Hawai'i, ariki in New Zealand, etc. In Hawai'i some chiefs were so sacred (kapu) that commoners could be killed just for looking at them. War was familiar, though only the introduction of firearms made it possible for someone like Kamehameha I to actually unify so extensive a domain as Hawai'i: the extraordinary final battle was Kamehameha driving the army of the King of O'ahu over the spectacular cliff of the Nu'uanu Pali. So no one should be surprised, or ashamed either, that such a heritage could produce a certain ferocity even now, whether in Sâmoa or elsewhere. As the title of a recent movie about the Mâori of New Zealand puts it: Once Were Warriors.



Relativism, Note 7


The details of sex in Tahiti can be gathered from Robert I. Levy, Tahitians, Mind and Experience in the Society Islands [University of Chicago Press, 1973]. There are also, of course, the famous stories of Hawaiian girls swimming out naked to Captain Cook's ship, or to the later whalers, willing to bestow their charms for as little in return as an iron nail. Captain Cook began posting guards to repel such tender boarders, both out of concern for spreading venereal disease among them and out of worry that the ship might fall apart from all the extracted nails.

With so much free love, we might wonder, how did the inevitable children get supported? And didn't the Polynesians have any concern about parentage? Well, the whole picture may not add up to anything as free, open, and irresponsible as it might seem at first. For one thing, there was a considerable difference between commoners and the nobility (the ali'i in Hawai'i, ari'i in Tahiti, ariki in New Zealand, etc.). The nobility definitely were very concerned about parentage, since their status depended on their genealogies, which were remembered and chanted with care and in detail. It is unlikely that there were any naked ali'i girls swimming out to the sailors.

In the second place, there are reports from various parts of the Pacific that an out of wedlock child, as evidence of fertility and health, enhanced a girl's marriage prospects. A girl only began to be considered "loose" if she had more than one premarital child. At a time when people did not live long, and it was common for women to die in childbirth, it is reasonable to suppose that marriageable girls would really not have much time for extra premarital pregnancies, and that few would want to risk continued pregnancies without the social connection that marriage would bestow.

At the same time, the care and status of any extramarital, or even marital, children was assured for other reasons. If Hawai'i is at all representative of the rest of Polynesia and the Pacific, then the institution of adoption or fosterage was fully capable of absorbing any children, premarital or otherwise, that a woman might not want to raise herself. In Hawaiian, "hânai" means (as a verb) "to raise, feed, nourish, etc." and (as a noun or adjective) "foster/adopted child." There hardly seems to be a difference between hânai fosterage and adoption, since the children were usually fully informed and aware of their natural parents, and reckoned their descent from them. Thus, Queen Lili'uokalani (1838-1917) was not raised by her natural parents but knew who they were and was fully conscious of her royal descent. Thus, there was no shame or secrecy about adoption, and any inconvenience occasioned by out of wedlock birth could be accommodated without stigma or disruption.

While it is tempting to praise these arrangements as humane and sensible, which they certainly seem to be, the viability of the institution really depended on a couple of factors that may no longer be possible: One was the absence, as far as we can tell, of venereal disease. Today, much extra-marital sex runs the risk, not only of catching and passing fatal disease, but of courting sterility through less serious, but nevertheless damaging, infections. Also, the ease of hânai adoption depended on the casualness with which children could be circulated -- implying too a reciprocity among people who basically all knew each other. This becomes emotionally and legally rather more difficult in a larger, more impersonal, and legalistic society. Nevertheless, we might say that the modern prudent use of birth control, which limits unwanted pregnancies, and the restrained and prudent conduct of a small number of premarital sexual relationships, with an eye to avoiding disease, has now tended to reproduce the more restrained version of Polynesian sexual activity, rather more restricted than Mead's Sâmoa, but somewhat more open than the actual Sâmoa (where a victim of "clandestine rape" could only preserve her prospects in life by marrying the rapist).

All these considerations, of course, speak rather more for the universality of human nature, which adapts to circumstances, than for cultural relativism.




Relativism, Note 8


The German word for "German" is Deutsch, which meant "of the people" and is related to theoda in Old English, to "Dutch" in Modern English, and to another Roman word for Germans, "Teutons." In the movie Little Big Man, considerable humor is derived from Chief Dan George speaking of his own people as the "human beings" and of others being adopted into the tribe as "becoming human beings." Islâm traditionally divides the whole world into the Dâru l'Islâm, "the House of Islâm," and the Dâru l-Ḥarb, "the House of War," which means the realm of everybody else, where Islâm is ready to carry on the holy war (Jihâd). Every single one of these peoples and traditions regarded their ways and their values as best and everyone else's as deficient or terrible.
