Robots are constrained by the Three Laws: they don't have a lot of choice about what they can do. For instance, if a robot sees someone in danger, it has to act. In I, Robot, it may seem like Asimov is setting up a distinction between robots and humans: robots have no choice, humans do. But at the end of the book, in "The Evitable Conflict," Susan Calvin points out that humans have always had limits on what they can choose. So it almost seems as if humans don't have much choice either. Does that seem right?
Questions About Choices
- This is the big question: how much free will do individual humans have in these stories? For instance, when Milton Ashe is attracted to a pretty woman, is that a choice he has made? When Bogert is ambitious, has he made a choice to be ambitious?
- As a variation on that last question, how much free will does humanity have as a whole here? Do we lose the ability to make choices when the Machines start to help us? Do you agree with Susan Calvin's argument that humanity never had any real ability to control its own destiny (Evitable Conflict.225)?
- What happens in these stories when human choices conflict? For instance, Bogert's choice to become Director of Research conflicts with Lanning's choice to stay on as Director of Research. How do these conflicts get resolved?
- The Three Laws limit the robots' choices in these stories; but what limits human choices? Donovan and Powell discuss rules of behavior, and government laws certainly seem to limit certain options in these stories. Are there other ways that human choices are limited?
Chew on This
Try on an opinion or two, start a debate, or play the devil’s advocate.
Since humans are constrained by certain rules, just like robots, I, Robot shows us how robotic we are—or how human our robots are.
In I, Robot, both humans and robots have the ability to make choices; but once they understand what the best choice is, there's really no choice left to make.