Read the passage carefully and answer the following questions:
Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us.
Today we fear a different technological threat, one that centers not on machines but other humans. We see ourselves as imperilled by the terrifying social influence unleashed by the Internet in general and social media in particular. We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.
Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?
The shift away from the fear of unpredictable robots and toward the fear of chaotic human behavior may have been inevitable. For the problem of controlling the machines was always at heart a problem of human desire — the worry that realizing our desires using automated systems might prove catastrophic. The promised solution was to rectify human desire. But once we lost optimism about whether this was possible, the stage was set for the problem to be flipped on its head.
The twentieth-century cyberneticist Norbert Wiener made what was for his time a rather startling argument: "The machine may be the final instrument of doom, but humanity may be the ultimate cause." In his 1960 essay “Some Moral and Technical Consequences of Automation,” Wiener recounts tales in which a person makes a wish and gets what was requested but not necessarily what he or she really desired. Hence, it's imperative that we be absolutely sure of what desire we put into the machine. Wiener was of course not talking about social media, but we can easily see the analogy: It too achieves purposes, like mob frenzy or erroneous post deletions, that its human designers did not actually desire, even though they built the machines in a way that achieves those purposes. Nor does he envision, as in Terminator, a general intelligence that becomes self-aware and nukes everyone. Rather, he imagined a system that humans cannot easily stop and that acts on a misleading substitute for the military objectives humans actually value.
However, there is a risk in Wiener’s distinction between what we desire and what actually happens in the end. It may create a false image of ourselves — an image in which our desires and our behaviors are wholly separable from each other. Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the “system.”
Q. According to the author, what was the reason for the shift toward the fear of chaotic human behavior?
a) Humans lost hope in their ability to reform their desires.
b) Humans doubted the purity of their desires.
c) Humans realised that they are more erratic and unpredictable than machines.
d) Humans discerned that machines are mere tools that aid in the realisation of immoral desires.
Correct answer is option 'A'. Can you explain this answer?
Verified Answer
In the fourth paragraph, the author notes that humans feared the consequences of realising their desires through automated systems, and so the promised solution was to rectify those desires. Once humans lost optimism that this was possible, the fear shifted away from unpredictable robots and toward chaotic human behaviour, flipping the problem on its head.
Option A conveys this idea and is the answer.
Option B is partially correct: the passage implies some doubt about our desires, but the option misses the loss of hope in remedying them, which is what drove the shift.
Options C and D are not implied in the passage.
Community Answer
Reason for the shift toward the fear of chaotic human behavior:

Loss of hope in reforming desires:
- The author suggests that the shift toward the fear of chaotic human behavior was inevitable because humans lost hope in their ability to reform their desires.
- Initially, the fear centered on uncontrollable machine self-improvement, but the focus shifted once humans realised that the problem lay in their own desires.
- Controlling machines was ultimately a problem of human desire: the worry that automated systems realising our desires could prove catastrophic.
- When optimism about rectifying human desire as a solution waned, the fear of chaotic human behavior took precedence.
- This shift signifies a growing acknowledgment of how unpredictable and uncontrollable human actions are.
- The loss of optimism about reforming human desires therefore paved the way for human behavior itself to become the primary technological threat.
In conclusion, the author implies that the fear shifted from machines to humans because human desires proved too complex to reform, and chaotic human behavior too difficult to control or predict.
Similar CAT Doubts

On the same passage:
- Q. The risk in Wiener's distinction between what we desire and what actually happens, in the end, is that...
- Q. Why does the author term Norbert Wiener's argument as startling?
- Q. Which of the following could be an example of Wiener's desire-outcome disparity argument?
  I. A weapons system, which cannot be stopped easily, starts bombing after receiving an erroneous command.
  II. An AI program developed to mitigate global warming starts eliminating a fraction of the human population to complete its objective.
  III. A social media platform allows groups of militants to communicate their plans and coordinate their attacks.
- Q. In the third paragraph, why does the author remark that "ironically, the roles have reversed"?

On a different passage, about intermediate technology in developing economies:
- Q. Which of the following cannot be inferred from the passage?
