Read the passage carefully and answer the following questions:
Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us.
Today we fear a different technological threat, one that centers not on machines but on other humans. We see ourselves as imperilled by the terrifying social influence unleashed by the Internet in general and social media in particular. We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.
Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?
The shift away from the fear of unpredictable robots and toward the fear of chaotic human behavior may have been inevitable. For the problem of controlling the machines was always at heart a problem of human desire — the worry that realizing our desires using automated systems might prove catastrophic. The promised solution was to rectify human desire. But once we lost optimism about whether this was possible, the stage was set for the problem to be flipped on its head.
The twentieth-century cyberneticist Norbert Wiener made what was for his time a rather startling argument: "The machine may be the final instrument of doom, but humanity may be the ultimate cause." In his 1960 essay “Some Moral and Technical Consequences of Automation,” Wiener recounts tales in which a person makes a wish and gets what was requested but not necessarily what he or she really desired. Hence, it's imperative that we be absolutely sure of what desire we put into the machine. Wiener was of course not talking about social media, but we can easily see the analogy: It too achieves purposes, like mob frenzy or erroneous post deletions, that its human designers did not actually desire, even though they built the machines in a way that achieves those purposes. Nor does he envision, as in Terminator, a general intelligence that becomes self-aware and nukes everyone. Rather, he imagines a system that humans cannot easily stop and that acts on a misleading substitute for the military objectives humans actually value.
However, there is a risk in Wiener’s distinction between what we desire and what actually happens in the end. It may create a false image of ourselves — an image in which our desires and our behaviors are wholly separable from each other. Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the “system.”
Q. The risk in Wiener's distinction between what we desire and what actually happens in the end is that:
a) We may convince ourselves that our desires are always good and instead choose to blame others for their inability to distinguish between right and wrong.
b) Although our desires may be good, our inability to achieve these objectives may force us to adopt questionable behavioural practices.
c) We believe that our desires are inherently good and blame other factors when these intentions lead to undesirable behaviour.
d) We believe that our actions are oriented towards our desires when they are actually contradictory in nature.
Correct answer is option 'C'. Can you explain this answer?
Verified Answer
In the last paragraph, the author posits the following with regard to Wiener's distinction: "Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the 'system.'" That is, we assume, without proper examination, that our desires are good, and instead blame the system and other factors for the bad consequences. Option C conveys this idea and is the answer.
Options B and D are not implied in the passage.
Option A is close, but its latter part is extraneous to the discussion.
Community Answer
Risk in Wiener's Distinction between Desire and Outcome

False Image of Ourselves:
- The risk in Wiener's distinction lies in creating a false image where we believe our desires are always good.
- This false image may lead us to blame external factors when our desires result in undesirable behaviors.

Assumption of Inherent Goodness:
- Instead of critically examining whether our desires are truly beneficial, we may assume they are inherently good.
- This assumption can lead us to absolve ourselves of responsibility and blame external systems for any negative outcomes.

Shift of Accountability:
- By focusing solely on our desires and not the actual consequences of our actions, we may shift accountability away from ourselves.
- This can prevent us from reflecting on our intentions and behaviors, leading to a cycle of blame and avoidance.
In conclusion, the risk in Wiener's distinction is that it may foster a belief in the inherent goodness of our desires, leading to a lack of introspection and accountability for our actions. Instead of examining our intentions critically, we may choose to blame external factors for any negative outcomes, thereby perpetuating a cycle of misunderstanding and avoidance.

Similar CAT Doubts

- According to the author, what was the reason for the shift toward the fear of chaotic human behavior?
- In the third paragraph, why does the author remark that "ironically, the roles have reversed"?
- Which of the following could be an example of Wiener's desire-outcome disparity argument?
  I. A weapons system, which cannot be stopped easily, starts bombing after receiving an erroneous command.
  II. An AI program developed to mitigate global warming starts eliminating a fraction of the human population to complete its objective.
  III. A social media platform allows groups of militants to communicate their plans and coordinate their attacks.
- Why does the author term Norbert Wiener's argument startling?
- (From a different passage, on technological change and long-term growth) In many countries, the desired results of technology could not be achieved due to...

Top Courses for CAT

Read the passage carefully and answer the following questions:Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us.Today we fear a different technological threat, one that centers not on machines but other humans. We see ourselves as imperilled by the terrifying social influence unleashed by the Internet in general and social media in particular. We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?The shift away from the fear of unpredictable robots and toward the fear of chaotic human behavior may have been inevitable. For the problem of controlling the machines was always at heart a problem of human desire — the worry that realizing our desires using automated systems might prove catastrophic. The promised solution was to rectify human desire. But once we lost optimism about whether this was possible, the stage was set for the problem to be flipped on its head.The twentieth-century cyberneticist Norbert Wiener made what was for his time a rather startling argument: "The machine may be the final instrument of doom, but humanity may be the ultimate cause." In his 1960 essay “Some Moral and Technical Consequences of Automation,” Wiener recounts tales in which a person makes a wish and gets what was requested but not necessarily what he or she really desired. Hence, its imperative that we be absolutely sure of what desire we put into the machine. Wiener was of course not talking about social media, but we can easily see the analogy: It too achieves purposes, like mob frenzy or erroneous post deletions, that its human designers did not actually desire, even though they built the machines in a way that achieves those purposes. Nor does he envision, as in Terminator, a general intelligence that becomes self-aware and nukes everyone. Rather, he imagined a system that humans cannot easily stop and that acts on a misleading substitute for the military objectives humans actually value.However, there is a risk in Wiener’s distinction between what we desire and what actually happens in the end. It may create a false image of ourselves — an image in which our desires and our behaviors are wholly separable from each other. 
Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the “system.”Q.The risk in Wieners distinction between what we desire and what actually happens, in the end, is that:a)We may convince ourselves that our desires are always good and instead choose to blame others for their inability to distinguish between right and wrong.b)Although our desires may be good, our inability to achieve these objectives may force us to adopt questionable behavioural practices.c)We believe that our desires are inherently good and blame other factors when these intentions lead to undesirable behaviour.d)We believe that our actions are oriented towards our desires when they are actually contradictory in nature.Correct answer is option 'C'. Can you explain this answer?
Question Description
Read the passage carefully and answer the following questions:Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us.Today we fear a different technological threat, one that centers not on machines but other humans. We see ourselves as imperilled by the terrifying social influence unleashed by the Internet in general and social media in particular. We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?The shift away from the fear of unpredictable robots and toward the fear of chaotic human behavior may have been inevitable. For the problem of controlling the machines was always at heart a problem of human desire — the worry that realizing our desires using automated systems might prove catastrophic. The promised solution was to rectify human desire. But once we lost optimism about whether this was possible, the stage was set for the problem to be flipped on its head.The twentieth-century cyberneticist Norbert Wiener made what was for his time a rather startling argument: "The machine may be the final instrument of doom, but humanity may be the ultimate cause." In his 1960 essay “Some Moral and Technical Consequences of Automation,” Wiener recounts tales in which a person makes a wish and gets what was requested but not necessarily what he or she really desired. Hence, its imperative that we be absolutely sure of what desire we put into the machine. Wiener was of course not talking about social media, but we can easily see the analogy: It too achieves purposes, like mob frenzy or erroneous post deletions, that its human designers did not actually desire, even though they built the machines in a way that achieves those purposes. Nor does he envision, as in Terminator, a general intelligence that becomes self-aware and nukes everyone. Rather, he imagined a system that humans cannot easily stop and that acts on a misleading substitute for the military objectives humans actually value.However, there is a risk in Wiener’s distinction between what we desire and what actually happens in the end. It may create a false image of ourselves — an image in which our desires and our behaviors are wholly separable from each other. 
Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the “system.”Q.The risk in Wieners distinction between what we desire and what actually happens, in the end, is that:a)We may convince ourselves that our desires are always good and instead choose to blame others for their inability to distinguish between right and wrong.b)Although our desires may be good, our inability to achieve these objectives may force us to adopt questionable behavioural practices.c)We believe that our desires are inherently good and blame other factors when these intentions lead to undesirable behaviour.d)We believe that our actions are oriented towards our desires when they are actually contradictory in nature.Correct answer is option 'C'. Can you explain this answer? for CAT 2024 is part of CAT preparation. The Question and answers have been prepared according to the CAT exam syllabus. Information about Read the passage carefully and answer the following questions:Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us.Today we fear a different technological threat, one that centers not on machines but other humans. We see ourselves as imperilled by the terrifying social influence unleashed by the Internet in general and social media in particular. We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?The shift away from the fear of unpredictable robots and toward the fear of chaotic human behavior may have been inevitable. For the problem of controlling the machines was always at heart a problem of human desire — the worry that realizing our desires using automated systems might prove catastrophic. The promised solution was to rectify human desire. But once we lost optimism about whether this was possible, the stage was set for the problem to be flipped on its head.The twentieth-century cyberneticist Norbert Wiener made what was for his time a rather startling argument: "The machine may be the final instrument of doom, but humanity may be the ultimate cause." In his 1960 essay “Some Moral and Technical Consequences of Automation,” Wiener recounts tales in which a person makes a wish and gets what was requested but not necessarily what he or she really desired. Hence, its imperative that we be absolutely sure of what desire we put into the machine. 
Wiener was of course not talking about social media, but we can easily see the analogy: It too achieves purposes, like mob frenzy or erroneous post deletions, that its human designers did not actually desire, even though they built the machines in a way that achieves those purposes. Nor does he envision, as in Terminator, a general intelligence that becomes self-aware and nukes everyone. Rather, he imagined a system that humans cannot easily stop and that acts on a misleading substitute for the military objectives humans actually value.However, there is a risk in Wiener’s distinction between what we desire and what actually happens in the end. It may create a false image of ourselves — an image in which our desires and our behaviors are wholly separable from each other. Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the “system.”Q.The risk in Wieners distinction between what we desire and what actually happens, in the end, is that:a)We may convince ourselves that our desires are always good and instead choose to blame others for their inability to distinguish between right and wrong.b)Although our desires may be good, our inability to achieve these objectives may force us to adopt questionable behavioural practices.c)We believe that our desires are inherently good and blame other factors when these intentions lead to undesirable behaviour.d)We believe that our actions are oriented towards our desires when they are actually contradictory in nature.Correct answer is option 'C'. Can you explain this answer? covers all topics & solutions for CAT 2024 Exam. Find important definitions, questions, meanings, examples, exercises and tests below for Read the passage carefully and answer the following questions:Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us.Today we fear a different technological threat, one that centers not on machines but other humans. We see ourselves as imperilled by the terrifying social influence unleashed by the Internet in general and social media in particular. We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?The shift away from the fear of unpredictable robots and toward the fear of chaotic human behavior may have been inevitable. For the problem of controlling the machines was always at heart a problem of human desire — the worry that realizing our desires using automated systems might prove catastrophic. The promised solution was to rectify human desire. 
But once we lost optimism about whether this was possible, the stage was set for the problem to be flipped on its head.The twentieth-century cyberneticist Norbert Wiener made what was for his time a rather startling argument: "The machine may be the final instrument of doom, but humanity may be the ultimate cause." In his 1960 essay “Some Moral and Technical Consequences of Automation,” Wiener recounts tales in which a person makes a wish and gets what was requested but not necessarily what he or she really desired. Hence, its imperative that we be absolutely sure of what desire we put into the machine. Wiener was of course not talking about social media, but we can easily see the analogy: It too achieves purposes, like mob frenzy or erroneous post deletions, that its human designers did not actually desire, even though they built the machines in a way that achieves those purposes. Nor does he envision, as in Terminator, a general intelligence that becomes self-aware and nukes everyone. Rather, he imagined a system that humans cannot easily stop and that acts on a misleading substitute for the military objectives humans actually value.However, there is a risk in Wiener’s distinction between what we desire and what actually happens in the end. It may create a false image of ourselves — an image in which our desires and our behaviors are wholly separable from each other. Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the “system.”Q.The risk in Wieners distinction between what we desire and what actually happens, in the end, is that:a)We may convince ourselves that our desires are always good and instead choose to blame others for their inability to distinguish between right and wrong.b)Although our desires may be good, our inability to achieve these objectives may force us to adopt questionable behavioural practices.c)We believe that our desires are inherently good and blame other factors when these intentions lead to undesirable behaviour.d)We believe that our actions are oriented towards our desires when they are actually contradictory in nature.Correct answer is option 'C'. Can you explain this answer?.
Solutions for Read the passage carefully and answer the following questions:Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us.Today we fear a different technological threat, one that centers not on machines but other humans. We see ourselves as imperilled by the terrifying social influence unleashed by the Internet in general and social media in particular. We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?The shift away from the fear of unpredictable robots and toward the fear of chaotic human behavior may have been inevitable. For the problem of controlling the machines was always at heart a problem of human desire — the worry that realizing our desires using automated systems might prove catastrophic. The promised solution was to rectify human desire. But once we lost optimism about whether this was possible, the stage was set for the problem to be flipped on its head.The twentieth-century cyberneticist Norbert Wiener made what was for his time a rather startling argument: "The machine may be the final instrument of doom, but humanity may be the ultimate cause." In his 1960 essay “Some Moral and Technical Consequences of Automation,” Wiener recounts tales in which a person makes a wish and gets what was requested but not necessarily what he or she really desired. Hence, its imperative that we be absolutely sure of what desire we put into the machine. Wiener was of course not talking about social media, but we can easily see the analogy: It too achieves purposes, like mob frenzy or erroneous post deletions, that its human designers did not actually desire, even though they built the machines in a way that achieves those purposes. Nor does he envision, as in Terminator, a general intelligence that becomes self-aware and nukes everyone. Rather, he imagined a system that humans cannot easily stop and that acts on a misleading substitute for the military objectives humans actually value.However, there is a risk in Wiener’s distinction between what we desire and what actually happens in the end. It may create a false image of ourselves — an image in which our desires and our behaviors are wholly separable from each other. 
Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the “system.”Q.The risk in Wieners distinction between what we desire and what actually happens, in the end, is that:a)We may convince ourselves that our desires are always good and instead choose to blame others for their inability to distinguish between right and wrong.b)Although our desires may be good, our inability to achieve these objectives may force us to adopt questionable behavioural practices.c)We believe that our desires are inherently good and blame other factors when these intentions lead to undesirable behaviour.d)We believe that our actions are oriented towards our desires when they are actually contradictory in nature.Correct answer is option 'C'. Can you explain this answer? in English & in Hindi are available as part of our courses for CAT. Download more important topics, notes, lectures and mock test series for CAT Exam by signing up for free.
Here you can find the meaning of Read the passage carefully and answer the following questions:Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us.Today we fear a different technological threat, one that centers not on machines but other humans. We see ourselves as imperilled by the terrifying social influence unleashed by the Internet in general and social media in particular. We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?The shift away from the fear of unpredictable robots and toward the fear of chaotic human behavior may have been inevitable. For the problem of controlling the machines was always at heart a problem of human desire — the worry that realizing our desires using automated systems might prove catastrophic. The promised solution was to rectify human desire. But once we lost optimism about whether this was possible, the stage was set for the problem to be flipped on its head.The twentieth-century cyberneticist Norbert Wiener made what was for his time a rather startling argument: "The machine may be the final instrument of doom, but humanity may be the ultimate cause." In his 1960 essay “Some Moral and Technical Consequences of Automation,” Wiener recounts tales in which a person makes a wish and gets what was requested but not necessarily what he or she really desired. Hence, its imperative that we be absolutely sure of what desire we put into the machine. Wiener was of course not talking about social media, but we can easily see the analogy: It too achieves purposes, like mob frenzy or erroneous post deletions, that its human designers did not actually desire, even though they built the machines in a way that achieves those purposes. Nor does he envision, as in Terminator, a general intelligence that becomes self-aware and nukes everyone. Rather, he imagined a system that humans cannot easily stop and that acts on a misleading substitute for the military objectives humans actually value.However, there is a risk in Wiener’s distinction between what we desire and what actually happens in the end. It may create a false image of ourselves — an image in which our desires and our behaviors are wholly separable from each other. 
Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the “system.”Q.The risk in Wieners distinction between what we desire and what actually happens, in the end, is that:a)We may convince ourselves that our desires are always good and instead choose to blame others for their inability to distinguish between right and wrong.b)Although our desires may be good, our inability to achieve these objectives may force us to adopt questionable behavioural practices.c)We believe that our desires are inherently good and blame other factors when these intentions lead to undesirable behaviour.d)We believe that our actions are oriented towards our desires when they are actually contradictory in nature.Correct answer is option 'C'. Can you explain this answer? defined & explained in the simplest way possible. Besides giving the explanation of Read the passage carefully and answer the following questions:Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us.Today we fear a different technological threat, one that centers not on machines but other humans. We see ourselves as imperilled by the terrifying social influence unleashed by the Internet in general and social media in particular. We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?The shift away from the fear of unpredictable robots and toward the fear of chaotic human behavior may have been inevitable. For the problem of controlling the machines was always at heart a problem of human desire — the worry that realizing our desires using automated systems might prove catastrophic. The promised solution was to rectify human desire. But once we lost optimism about whether this was possible, the stage was set for the problem to be flipped on its head.The twentieth-century cyberneticist Norbert Wiener made what was for his time a rather startling argument: "The machine may be the final instrument of doom, but humanity may be the ultimate cause." In his 1960 essay “Some Moral and Technical Consequences of Automation,” Wiener recounts tales in which a person makes a wish and gets what was requested but not necessarily what he or she really desired. Hence, its imperative that we be absolutely sure of what desire we put into the machine. 
Wiener was of course not talking about social media, but we can easily see the analogy: It too achieves purposes, like mob frenzy or erroneous post deletions, that its human designers did not actually desire, even though they built the machines in a way that achieves those purposes. Nor does he envision, as in Terminator, a general intelligence that becomes self-aware and nukes everyone. Rather, he imagined a system that humans cannot easily stop and that acts on a misleading substitute for the military objectives humans actually value.However, there is a risk in Wiener’s distinction between what we desire and what actually happens in the end. It may create a false image of ourselves — an image in which our desires and our behaviors are wholly separable from each other. Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the “system.”Q.The risk in Wieners distinction between what we desire and what actually happens, in the end, is that:a)We may convince ourselves that our desires are always good and instead choose to blame others for their inability to distinguish between right and wrong.b)Although our desires may be good, our inability to achieve these objectives may force us to adopt questionable behavioural practices.c)We believe that our desires are inherently good and blame other factors when these intentions lead to undesirable behaviour.d)We believe that our actions are oriented towards our desires when they are actually contradictory in nature.Correct answer is option 'C'. Can you explain this answer?, a detailed solution for Read the passage carefully and answer the following questions:Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us.Today we fear a different technological threat, one that centers not on machines but other humans. We see ourselves as imperilled by the terrifying social influence unleashed by the Internet in general and social media in particular. We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?The shift away from the fear of unpredictable robots and toward the fear of chaotic human behavior may have been inevitable. For the problem of controlling the machines was always at heart a problem of human desire — the worry that realizing our desires using automated systems might prove catastrophic. The promised solution was to rectify human desire. 