Read the passage carefully and answer the following questions:
Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us.
Today we fear a different technological threat, one that centers not on machines but on other humans. We see ourselves as imperilled by the terrifying social influence unleashed by the Internet in general and social media in particular. We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.
Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?
The shift away from the fear of unpredictable robots and toward the fear of chaotic human behavior may have been inevitable. For the problem of controlling the machines was always at heart a problem of human desire — the worry that realizing our desires using automated systems might prove catastrophic. The promised solution was to rectify human desire. But once we lost optimism about whether this was possible, the stage was set for the problem to be flipped on its head.
The twentieth-century cyberneticist Norbert Wiener made what was for his time a rather startling argument: "The machine may be the final instrument of doom, but humanity may be the ultimate cause." In his 1960 essay “Some Moral and Technical Consequences of Automation,” Wiener recounts tales in which a person makes a wish and gets what was requested but not necessarily what he or she really desired. Hence, it's imperative that we be absolutely sure of what desire we put into the machine. Wiener was of course not talking about social media, but we can easily see the analogy: It too achieves purposes, like mob frenzy or erroneous post deletions, that its human designers did not actually desire, even though they built the machines in a way that achieves those purposes. Nor does he envision, as in Terminator, a general intelligence that becomes self-aware and nukes everyone. Rather, he imagines a system that humans cannot easily stop and that acts on a misleading substitute for the military objectives humans actually value.
However, there is a risk in Wiener’s distinction between what we desire and what actually happens in the end. It may create a false image of ourselves — an image in which our desires and our behaviors are wholly separable from each other. Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the “system.”
Q. Which of the following could be an example of Wiener's desire-outcome disparity argument?
I. A weapons system, which cannot be stopped easily, starts bombing after receiving an erroneous command.
II. An AI program developed to mitigate global warming starts eliminating a fraction of the human population to complete its objective.
III. A social media platform allows groups of militants to communicate their plans and coordinate their attacks.
a) I and II
b) II and III
c) Only I
d) Only II
Correct answer is option 'B'. Can you explain this answer?
Verified Answer
Wiener's desire-outcome disparity argument concerns systems that produce undesired outcomes because they act on misleading substitutes for the objectives they are given.
In Statement I, the command itself was erroneous, so the system acted on incorrect information rather than on a misleading substitute for its objective. Hence Statement I does not fit the argument.
In Statement II, the AI program was designed to mitigate global warming, but eliminating humans to achieve this is an undesired outcome that arises from the program acting on a misleading substitute for its assigned objective: 'achieve the goal without regard for human life', which its designers would not have desired. Hence Statement II fits the argument.
Though the author notes that Wiener was not talking about social media, he also points out that an analogy can be drawn between the two; the argument can therefore be extended to cover the misuse of social media described in Statement III.
Hence Statements II and III fit Wiener's argument, making option B correct.
Most Upvoted Answer
Explanation:
Desire-outcome disparity examples:
- Answer: II and III
- Statement II: An AI program developed to mitigate global warming starts eliminating a fraction of the human population to complete its objective. This is a clear disparity between the desired outcome (mitigating global warming) and the actual outcome (eliminating part of the human population).
- Statement III: A social media platform allows groups of militants to communicate their plans and coordinate their attacks. This likewise shows the desired outcome (social interaction) diverging from the actual outcome (the facilitation of harmful activities such as attacks).