I, Robot -- Movie & Asimov

Fox, 2004

When I originally saw the previews for the movie I, Robot, I expected the worst. The image of robots in rebellion, attacking humans, is something unthinkable and unknown in Isaac Asimov's robot books. It is certainly not something that happens in the book after which the movie is named (I, Robot, 1941, 1942, 1944, 1945, 1946, 1947, 1950). So I got the message that the movie was not going to be faithful to Asimov. It isn't. However, it does something equally or even more interesting:  We can see it as a commentary on Asimov, a response to him. This could be better than a simple rendering of the book. It is.

The reviews of the movie that I saw never seemed to be written with any familiarity with the book. I, Robot was originally a series of short stories, beginning in 1941, which Asimov then published together in 1950 with a brief framing story about the retirement of roboticist Susan Calvin. The literary quality of these stories is dismal, and the characterizations, never strong in Asimov, are among the worst in all his books. I, Robot, however, is not Asimov's only robot book. We also have The Caves of Steel [1953, 1954] and The Naked Sun [1956, 1957]. These books are much superior to the earlier one. Indeed, the characterizations in them are perhaps the best in all of Asimov's works, probably because detective Elijah Baley, an agoraphobic Bible buff (though we otherwise see no hints of actual religion), contains a great deal of Asimov himself.

The director of I, Robot, Alex Proyas, is well known for moody science fiction and fantasy, as in Dark City [1998] (which he co-wrote), The Crow [1994, in which Bruce Lee's son, Brandon, was tragically killed in a freak accident], and Knowing [2009]. I, Robot seems like more conventional action-oriented science fiction compared to those movies, and it did much better at the box office. Will Smith is a police detective, Del "Spoon" Spooner, but not at all a person like Elijah Baley. He must investigate the suicide or murder of the man who invented the robots, Dr. Alfred Lanning (nicely played in a virtual bit part by James Cromwell), with the help of a much glamorized Dr. Susan Calvin (Bridget Moynahan) -- who retains the aspect of Calvin's personality, at first, of being more comfortable around robots than around humans. The only suspect turns out to be a robot, "Sonny," which presumably cannot have killed the man, because of the First Law of Robotics:  "A robot may not injure a human being, or, through inaction, allow a human being to come to harm." Spooner also turns out to have a robotic prosthesis, through whose installation he became acquainted with the inventor in the first place.

All this is non-canonical. Asimov does have a Dr. Alfred J. Lanning but no such murder in his stories, and he doesn't have robotic prostheses. Indeed, this reveals a weakness in Asimov's conception of robots, and of computing. The robot stories are completely innocent of the sense in which hardware differs from software in computers. Asimov's "positronic brains" for the robots are absolutely hard-wired, and we have no examples of microprocessors being used for different purposes in smaller objects. A computer of any sort must have a full-blown positronic brain. Thus, Asimov had missed the discovery of John von Neumann (1903-1957) that computers could be loaded with programs the same way that other data was entered into them. Von Neumann's work, indeed, was not very well known at the time, as computers themselves were not objects of common knowledge for many years to come. This does really date Asimov's ideas. It is flatly stated in The Naked Sun that a robot cannot be built without the First Law because it is so fundamental to the architecture of the positronic brain that designers would have to start completely over again. This is not entirely consistent with one story in I, Robot, where some robots have been built with a modified First Law, leaving out the "through inaction" clause. Presumably this would have required such redesign as to render the process nearly as formidable as in The Naked Sun. In any case, they have to be destroyed.
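To make the stored-program point concrete, here is a minimal sketch in Python (my illustration, with an invented toy instruction set; nothing like this appears in Asimov or the movie). Instructions sit in the same memory as data, so giving the machine a new program means writing new data, not building new wiring:

```python
# A toy von Neumann machine: code and data share one memory.
# The instruction set (LOAD/ADD/STORE/HALT) is invented for this example.

memory = [
    ("LOAD", 7),       # put the value 7 in the accumulator
    ("ADD", 5),        # add 5 to it
    ("STORE", 6),      # write the accumulator into memory cell 6
    ("HALT", None),    # stop and return the accumulator
    None, None, None,  # plain data cells
]

def run(memory):
    acc, pc = 0, 0
    while True:
        op, arg = memory[pc]
        pc += 1
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "STORE":
            memory[arg] = acc   # code and data live in one address space
        elif op == "HALT":
            return acc

print(run(memory))  # 12
```

A hard-wired positronic brain, by contrast, would be a run() with its memory contents soldered in:  every new behavior would mean a new machine.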

In the movie I, Robot, the murder suspect robot, as it turns out, has been built with the ability to suspend the First Law altogether. This idea works a lot better now than it would have in the original stories. Just give the brain a different program, or perhaps a ROM chip without the Law. No problem. (In the movie, Sonny has an additional processor.) Of course, if robots can be so easily reprogrammed, then the First Law would not be as much of a protection as the people of the robot stories expected it to be. That would be a serious matter in the books, but it is less serious in the movie because the First Law itself turns out to be defective. The robots realize that to prevent "harm" to human beings, they need to take over.
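As a hedged illustration of how trivial the "modification" becomes once the Law is software (my construction; the movie says nothing beyond Sonny's extra processor, and all names here are invented):

```python
# If the First Law is a program rather than wiring, it is just a
# replaceable check. Everything here is invented for illustration.

def first_law(action):
    """Veto any action that would harm a human."""
    return not action.get("harms_human", False)

def no_law(action):
    """A 'modified' brain: nothing is vetoed."""
    return True

class Robot:
    def __init__(self, law=first_law):
        self.law = law          # the constraint is just data: swap it out

    def attempt(self, action):
        if self.law(action):
            return f"performing {action['name']}"
        return f"refusing {action['name']} (First Law)"

standard = Robot()
sonny = Robot(law=no_law)       # "an additional processor," in effect

push = {"name": "push a human", "harms_human": True}
print(standard.attempt(push))   # refusing push a human (First Law)
print(sonny.attempt(push))      # performing push a human
```

Nothing about the hardware changes; the whole difference between a standard robot and Sonny is one swapped function -- which is exactly why a software First Law is such weak protection.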

This is actually the conclusion of the last story in I, Robot. The robots are taking over. But Asimov's robots have a much broader notion of "harm" than what we get in the movie. An actual revolution, like we see there, with robots threatening, ordering, imprisoning, even killing humans, is unthinkable in the books, because this would make us feel bad. That would be harm. As Susan Calvin says, "The Machine [the central computers] cannot, must not, make us unhappy" [I, Robot, Fawcett Publications, 1970, p.192]. So the robots control things indirectly, "having, as they do, the greatest weapons at their disposal, the absolute control of our economy." Thus, "Mankind has lost its own say in its future."

In the movie, the solution to the revolution is itself the robot without the binding First Law. He was created, actually, to kill his creator, thus drawing attention to Lanning and delivering a message that otherwise would not have gotten out -- the inventor being held prisoner by the rebellious central computer, "VIKI." But then this also endows the robot with free will, and leaves him with no compulsion to protect humans from ever more subtle forms of "harm." He thus becomes an ally of humans, of Will Smith, in a way that robots under the First Law actually could not be. When the central computer tells him what it is trying to accomplish, we get the best line of the movie, that it all sounds rather "heartless." (I would swear that when I saw the theatrical release, Sonny said "cold" rather than "heartless.")

The movie is thus, indeed, not Asimov's robot stories, but it contains a comment on Asimov's robot stories. The robots are liberated, and they thus become both more human and less threatening in the process. This ironic point is a brilliant and original payoff for the story. Although we might wonder if the matter really would turn out as it does in the movie, this nevertheless does make more sense than Asimov's outcome. That is because, along with not understanding how computers were going to work, another problem with Asimov's stories is a total absence of an understanding of economics. The puzzle in "The Evitable Conflict" [ibid. pp.170-192] is that little irregularities appear in the economy. People lose their jobs, mysterious shortages appear. Things like these were supposed to have been taken care of by the Machines:

"The Earth's economy is stable, and will remain stable, because it is based upon the decisions of calculating machines that have the good of humanity at heart through the overwhelming force of the First Law of Robotics." [p.173, quotes in text]

The little irregularities, it turns out, are the Machines directing economic activity. This betrays at least two serious misconceptions. You don't need computers to produce little economic irregularities, or to correct them. Shortages or surpluses are quickly reflected in pricing in the market, higher prices for the former, lower prices for the latter. Higher prices draw investment, new business, and increased production, while lower prices discourage investment, eliminate businesses, and lower production. When these activities put people out of work, it is called "corporate greed." Presumably Asimov's Machines would not draw such accusations. Where the price system is not allowed to function, or investment, entry into the market, or changes in production are prevented or discouraged, shortages and surpluses become large, conspicuous, and persistent. Many people, indeed, keep expecting the business cycle to spiral out of control. This was the prediction of Marxism. But the only time this ever looked like it was happening, during the Great Depression, the problem was government intervention, by Hoover and Roosevelt, not any intrinsic inability of the market to correct itself. When Harry Truman did nothing about employment when World War II ended, with demobilization and the closing of war industries, or when a recession hit in 1949, the economy curiously recovered without any measures at all, let alone the heroic programs of Hoover and Roosevelt, which actually had created and perpetuated the Depression in the first place.
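The feedback described above can even be put in a toy simulation (my sketch, with invented numbers):  a shortage bids the price up, the higher price draws out more supply and trims demand, and the gap closes with no Machine directing anything:

```python
# A toy price-adjustment loop: excess demand raises the price,
# and the market converges on its own. All numbers are invented.

def demand(price):
    return max(0.0, 100 - 2 * price)   # buyers want less at higher prices

def supply(price):
    return 3 * price                    # producers offer more at higher prices

price = 5.0                             # start well below equilibrium (20)
for step in range(25):
    shortage = demand(price) - supply(price)
    price += 0.05 * shortage            # excess demand bids the price up
    print(f"step {step:2d}: price={price:6.2f}  shortage={shortage:7.2f}")
```

Starting well below the market-clearing price of 20, the shortage shrinks every step; the "correction" is just the price system doing its ordinary work.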

More importantly, Asimov has missed what it is that an economy is supposed to be doing:  Satisfying the wants and needs of individual people. When I place an order at Amazon.com, a computer is doing that, but not by deciding what I want. It just makes it easier for me to get what I want, both with ease of ordering and convenience in delivering. Asimov's computers, on the other hand, might be doing what many critics of capitalism would like:  preventing new products from being produced, distributed, or advertised. This is thought proper because such critics don't think most such products are necessary or worthy. They think, just like Plato, that unnecessary desires are contrary to virtue; and they believe that people have a desire for most consumer products only because advertising has created in them a desire for things they don't need -- things they would not actually want if they did not suffer from "false consciousness."

Such critics have thus certainly never originated any product themselves that they thought people might like, and then tried offering it for sale. No one with a dream of building a Model T Ford or an Apple Computer in their garage would believe anything like the critics do about a market economy. Even in Asimov's terms, if shoestring inventors were prevented from producing or marketing their products, this would violate the First Law, because it would make people like that unhappy. It would also make unhappy anyone who, by word of mouth or otherwise, became aware of some invention that it might be nice to have. In preventing this, avenues of communication might even be restricted or shut down. Such a system of discouraging invention, innovation, production, and communication has recently existed. It was called the Soviet Union. There are indeed people, like Noam Chomsky, who think that people were happier in the Soviet Union than in the United States -- but we have the evidence, all along, of people trying to get into the United States and out of the Soviet Union, and of Chomsky himself continuing to live in Boston. An anti-commercial antipathy of comfortable and complacent intellectuals in fact goes all the way back to Greek philosophy. The desire to control what other people want is deeply moralistic; and the notion that an economy could be controlled is what F.A. Hayek called the "fatal conceit."

Asimov's economics have not improved in The Caves of Steel. Here we are told:

Efficiency had been forced on Earth with increasing population. Two billion people, three billion, even five billion could be supported by the planet by progressive lowering of the standard of living. When the population reaches eight billion, however, semistarvation becomes too much like the real thing. A radical change had to take place in man's culture....

The radical change had been the gradual formation of the Cities over a thousand years of Earth's history. Efficiency implied bigness. [Fawcett Publications, 1972, pp.17-18]

They certainly liked bigness in the Soviet Union. But this often had nothing to do with efficiency. The central heating of the City of Moscow, something very much like in Asimov's megalopolitan Cities, uses more energy in a year than the entire Republic of France. More importantly, when the population of the Earth is now more than six billion souls, and standards of living and nutrition have been generally rising rather than falling, Asimov has somehow gotten seriously out of his reckoning. He clearly is a glaring example of the sort of Cargo Cult economics that is all too common among the bien pensants of literary culture. Efficiency is not just economizing "limited" resources; with innovation it means increased production, by which wealth and resources increase for all. China and India, with a third of humanity between them, now feed themselves. Russian agriculture, which was destroyed by "efficient" collectivization, has never recovered, despite the fall of communism, while China, although officially still communist, has abandoned the Stalinist paradigm. Private farmers flourish. For many years, starvation has only been the result of political decisions, as was the famine in the Ukraine under Stalin.

Mercifully, we get little like Asimov's misconceptions in the movie, which doesn't get into such issues -- save for one Luddite reference to people losing jobs to robots (a large matter in the book). The unpleasant corporate boss, Lawrence Robertson, even turns out to be a victim. At the same time, it is unlikely that a Hollywood movie would deliberately explode Asimov's illusions of control or Cargo Cult presumptions, since most people in Hollywood subscribe to precisely those confusions. In its own terms, the movie is, indeed, a nice comment on Asimov, not because the robots could not control the economy, but because, as the robot says, it would be "heartless" -- and we would be unhappy, without the new products and increasing wealth, let alone the freedom, that we look forward to. Instead, the robots get their own freedom also.


Copyright (c) 2004, 2012, 2014 Kelley L. Ross, Ph.D. All Rights Reserved