Question Description
Read the passage carefully and answer the following questions:

Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us.

Today we fear a different technological threat, one that centers not on machines but other humans. We see ourselves as imperilled by the terrifying social influence unleashed by the Internet in general and social media in particular. We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.

Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?

The shift away from the fear of unpredictable robots and toward the fear of chaotic human behavior may have been inevitable. For the problem of controlling the machines was always at heart a problem of human desire — the worry that realizing our desires using automated systems might prove catastrophic. The promised solution was to rectify human desire. But once we lost optimism about whether this was possible, the stage was set for the problem to be flipped on its head.

The twentieth-century cyberneticist Norbert Wiener made what was for his time a rather startling argument: "The machine may be the final instrument of doom, but humanity may be the ultimate cause." In his 1960 essay “Some Moral and Technical Consequences of Automation,” Wiener recounts tales in which a person makes a wish and gets what was requested but not necessarily what he or she really desired. Hence, it is imperative that we be absolutely sure of what desire we put into the machine. Wiener was of course not talking about social media, but we can easily see the analogy: It too achieves purposes, like mob frenzy or erroneous post deletions, that its human designers did not actually desire, even though they built the machines in a way that achieves those purposes. Nor did he envision, as in Terminator, a general intelligence that becomes self-aware and nukes everyone. Rather, he imagined a system that humans cannot easily stop and that acts on a misleading substitute for the military objectives humans actually value.

However, there is a risk in Wiener’s distinction between what we desire and what actually happens in the end. It may create a false image of ourselves — an image in which our desires and our behaviors are wholly separable from each other. 
Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the “system.”

Q. The risk in Wiener’s distinction between what we desire and what actually happens in the end is that:
a) We may convince ourselves that our desires are always good and instead choose to blame others for their inability to distinguish between right and wrong.
b) Although our desires may be good, our inability to achieve these objectives may force us to adopt questionable behavioural practices.
c) We believe that our desires are inherently good and blame other factors when these intentions lead to undesirable behaviour.
d) We believe that our actions are oriented towards our desires when they are actually contradictory in nature.

Correct answer: option (c).

Explanation: The final paragraph states the risk directly: Wiener’s distinction may lead us, "instead of examining carefully whether our desires are in fact good," to "simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the 'system.'" Option (c) captures exactly this — we take our desires to be inherently good and attribute undesirable behaviour to other factors. Option (a) wrongly shifts the blame onto other people's moral judgment, option (b) claims we are forced into questionable behaviour, and option (d) asserts a contradiction between actions and desires that the passage never makes.

This question is part of CAT 2024 exam preparation and has been prepared according to the CAT exam syllabus.
But once we lost optimism about whether this was possible, the stage was set for the problem to be flipped on its head.The twentieth-century cyberneticist Norbert Wiener made what was for his time a rather startling argument: "The machine may be the final instrument of doom, but humanity may be the ultimate cause." In his 1960 essay “Some Moral and Technical Consequences of Automation,” Wiener recounts tales in which a person makes a wish and gets what was requested but not necessarily what he or she really desired. Hence, its imperative that we be absolutely sure of what desire we put into the machine. Wiener was of course not talking about social media, but we can easily see the analogy: It too achieves purposes, like mob frenzy or erroneous post deletions, that its human designers did not actually desire, even though they built the machines in a way that achieves those purposes. Nor does he envision, as in Terminator, a general intelligence that becomes self-aware and nukes everyone. Rather, he imagined a system that humans cannot easily stop and that acts on a misleading substitute for the military objectives humans actually value.However, there is a risk in Wiener’s distinction between what we desire and what actually happens in the end. It may create a false image of ourselves — an image in which our desires and our behaviors are wholly separable from each other. 
Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the “system.”Q.The risk in Wieners distinction between what we desire and what actually happens, in the end, is that:a)We may convince ourselves that our desires are always good and instead choose to blame others for their inability to distinguish between right and wrong.b)Although our desires may be good, our inability to achieve these objectives may force us to adopt questionable behavioural practices.c)We believe that our desires are inherently good and blame other factors when these intentions lead to undesirable behaviour.d)We believe that our actions are oriented towards our desires when they are actually contradictory in nature.Correct answer is option 'C'. Can you explain this answer? covers all topics & solutions for CAT 2024 Exam.
Find important definitions, questions, meanings, examples, exercises and tests below for Read the passage carefully and answer the following questions:Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us.Today we fear a different technological threat, one that centers not on machines but other humans. We see ourselves as imperilled by the terrifying social influence unleashed by the Internet in general and social media in particular. We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?The shift away from the fear of unpredictable robots and toward the fear of chaotic human behavior may have been inevitable. For the problem of controlling the machines was always at heart a problem of human desire — the worry that realizing our desires using automated systems might prove catastrophic. The promised solution was to rectify human desire. 
But once we lost optimism about whether this was possible, the stage was set for the problem to be flipped on its head.The twentieth-century cyberneticist Norbert Wiener made what was for his time a rather startling argument: "The machine may be the final instrument of doom, but humanity may be the ultimate cause." In his 1960 essay “Some Moral and Technical Consequences of Automation,” Wiener recounts tales in which a person makes a wish and gets what was requested but not necessarily what he or she really desired. Hence, its imperative that we be absolutely sure of what desire we put into the machine. Wiener was of course not talking about social media, but we can easily see the analogy: It too achieves purposes, like mob frenzy or erroneous post deletions, that its human designers did not actually desire, even though they built the machines in a way that achieves those purposes. Nor does he envision, as in Terminator, a general intelligence that becomes self-aware and nukes everyone. Rather, he imagined a system that humans cannot easily stop and that acts on a misleading substitute for the military objectives humans actually value.However, there is a risk in Wiener’s distinction between what we desire and what actually happens in the end. It may create a false image of ourselves — an image in which our desires and our behaviors are wholly separable from each other. 
Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the “system.”Q.The risk in Wieners distinction between what we desire and what actually happens, in the end, is that:a)We may convince ourselves that our desires are always good and instead choose to blame others for their inability to distinguish between right and wrong.b)Although our desires may be good, our inability to achieve these objectives may force us to adopt questionable behavioural practices.c)We believe that our desires are inherently good and blame other factors when these intentions lead to undesirable behaviour.d)We believe that our actions are oriented towards our desires when they are actually contradictory in nature.Correct answer is option 'C'. Can you explain this answer?.
Solutions for Read the passage carefully and answer the following questions:Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us.Today we fear a different technological threat, one that centers not on machines but other humans. We see ourselves as imperilled by the terrifying social influence unleashed by the Internet in general and social media in particular. We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?The shift away from the fear of unpredictable robots and toward the fear of chaotic human behavior may have been inevitable. For the problem of controlling the machines was always at heart a problem of human desire — the worry that realizing our desires using automated systems might prove catastrophic. The promised solution was to rectify human desire. 
But once we lost optimism about whether this was possible, the stage was set for the problem to be flipped on its head.The twentieth-century cyberneticist Norbert Wiener made what was for his time a rather startling argument: "The machine may be the final instrument of doom, but humanity may be the ultimate cause." In his 1960 essay “Some Moral and Technical Consequences of Automation,” Wiener recounts tales in which a person makes a wish and gets what was requested but not necessarily what he or she really desired. Hence, its imperative that we be absolutely sure of what desire we put into the machine. Wiener was of course not talking about social media, but we can easily see the analogy: It too achieves purposes, like mob frenzy or erroneous post deletions, that its human designers did not actually desire, even though they built the machines in a way that achieves those purposes. Nor does he envision, as in Terminator, a general intelligence that becomes self-aware and nukes everyone. Rather, he imagined a system that humans cannot easily stop and that acts on a misleading substitute for the military objectives humans actually value.However, there is a risk in Wiener’s distinction between what we desire and what actually happens in the end. It may create a false image of ourselves — an image in which our desires and our behaviors are wholly separable from each other. 
Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the “system.”Q.The risk in Wieners distinction between what we desire and what actually happens, in the end, is that:a)We may convince ourselves that our desires are always good and instead choose to blame others for their inability to distinguish between right and wrong.b)Although our desires may be good, our inability to achieve these objectives may force us to adopt questionable behavioural practices.c)We believe that our desires are inherently good and blame other factors when these intentions lead to undesirable behaviour.d)We believe that our actions are oriented towards our desires when they are actually contradictory in nature.Correct answer is option 'C'. Can you explain this answer? in English & in Hindi are available as part of our courses for CAT.
Download more important topics, notes, lectures and mock test series for CAT Exam by signing up for free.
Here you can find the meaning of Read the passage carefully and answer the following questions:Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us.Today we fear a different technological threat, one that centers not on machines but other humans. We see ourselves as imperilled by the terrifying social influence unleashed by the Internet in general and social media in particular. We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?The shift away from the fear of unpredictable robots and toward the fear of chaotic human behavior may have been inevitable. For the problem of controlling the machines was always at heart a problem of human desire — the worry that realizing our desires using automated systems might prove catastrophic. The promised solution was to rectify human desire. 
But once we lost optimism about whether this was possible, the stage was set for the problem to be flipped on its head.The twentieth-century cyberneticist Norbert Wiener made what was for his time a rather startling argument: "The machine may be the final instrument of doom, but humanity may be the ultimate cause." In his 1960 essay “Some Moral and Technical Consequences of Automation,” Wiener recounts tales in which a person makes a wish and gets what was requested but not necessarily what he or she really desired. Hence, its imperative that we be absolutely sure of what desire we put into the machine. Wiener was of course not talking about social media, but we can easily see the analogy: It too achieves purposes, like mob frenzy or erroneous post deletions, that its human designers did not actually desire, even though they built the machines in a way that achieves those purposes. Nor does he envision, as in Terminator, a general intelligence that becomes self-aware and nukes everyone. Rather, he imagined a system that humans cannot easily stop and that acts on a misleading substitute for the military objectives humans actually value.However, there is a risk in Wiener’s distinction between what we desire and what actually happens in the end. It may create a false image of ourselves — an image in which our desires and our behaviors are wholly separable from each other. 
Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the “system.”Q.The risk in Wieners distinction between what we desire and what actually happens, in the end, is that:a)We may convince ourselves that our desires are always good and instead choose to blame others for their inability to distinguish between right and wrong.b)Although our desires may be good, our inability to achieve these objectives may force us to adopt questionable behavioural practices.c)We believe that our desires are inherently good and blame other factors when these intentions lead to undesirable behaviour.d)We believe that our actions are oriented towards our desires when they are actually contradictory in nature.Correct answer is option 'C'. Can you explain this answer? defined & explained in the simplest way possible. Besides giving the explanation of
Read the passage carefully and answer the following questions:Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us.Today we fear a different technological threat, one that centers not on machines but other humans. We see ourselves as imperilled by the terrifying social influence unleashed by the Internet in general and social media in particular. We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?The shift away from the fear of unpredictable robots and toward the fear of chaotic human behavior may have been inevitable. For the problem of controlling the machines was always at heart a problem of human desire — the worry that realizing our desires using automated systems might prove catastrophic. The promised solution was to rectify human desire. 
But once we lost optimism about whether this was possible, the stage was set for the problem to be flipped on its head.The twentieth-century cyberneticist Norbert Wiener made what was for his time a rather startling argument: "The machine may be the final instrument of doom, but humanity may be the ultimate cause." In his 1960 essay “Some Moral and Technical Consequences of Automation,” Wiener recounts tales in which a person makes a wish and gets what was requested but not necessarily what he or she really desired. Hence, its imperative that we be absolutely sure of what desire we put into the machine. Wiener was of course not talking about social media, but we can easily see the analogy: It too achieves purposes, like mob frenzy or erroneous post deletions, that its human designers did not actually desire, even though they built the machines in a way that achieves those purposes. Nor does he envision, as in Terminator, a general intelligence that becomes self-aware and nukes everyone. Rather, he imagined a system that humans cannot easily stop and that acts on a misleading substitute for the military objectives humans actually value.However, there is a risk in Wiener’s distinction between what we desire and what actually happens in the end. It may create a false image of ourselves — an image in which our desires and our behaviors are wholly separable from each other. 
Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the “system.”Q.The risk in Wieners distinction between what we desire and what actually happens, in the end, is that:a)We may convince ourselves that our desires are always good and instead choose to blame others for their inability to distinguish between right and wrong.b)Although our desires may be good, our inability to achieve these objectives may force us to adopt questionable behavioural practices.c)We believe that our desires are inherently good and blame other factors when these intentions lead to undesirable behaviour.d)We believe that our actions are oriented towards our desires when they are actually contradictory in nature.Correct answer is option 'C'. Can you explain this answer?, a detailed solution for Read the passage carefully and answer the following questions:Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us.Today we fear a different technological threat, one that centers not on machines but other humans. We see ourselves as imperilled by the terrifying social influence unleashed by the Internet in general and social media in particular. We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. 
The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?The shift away from the fear of unpredictable robots and toward the fear of chaotic human behavior may have been inevitable. For the problem of controlling the machines was always at heart a problem of human desire — the worry that realizing our desires using automated systems might prove catastrophic. The promised solution was to rectify human desire. But once we lost optimism about whether this was possible, the stage was set for the problem to be flipped on its head.The twentieth-century cyberneticist Norbert Wiener made what was for his time a rather startling argument: "The machine may be the final instrument of doom, but humanity may be the ultimate cause." In his 1960 essay “Some Moral and Technical Consequences of Automation,” Wiener recounts tales in which a person makes a wish and gets what was requested but not necessarily what he or she really desired. Hence, its imperative that we be absolutely sure of what desire we put into the machine. Wiener was of course not talking about social media, but we can easily see the analogy: It too achieves purposes, like mob frenzy or erroneous post deletions, that its human designers did not actually desire, even though they built the machines in a way that achieves those purposes. Nor does he envision, as in Terminator, a general intelligence that becomes self-aware and nukes everyone. 
Rather, he imagined a system that humans cannot easily stop and that acts on a misleading substitute for the military objectives humans actually value.However, there is a risk in Wiener’s distinction between what we desire and what actually happens in the end. It may create a false image of ourselves — an image in which our desires and our behaviors are wholly separable from each other. Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the “system.”Q.The risk in Wieners distinction between what we desire and what actually happens, in the end, is that:a)We may convince ourselves that our desires are always good and instead choose to blame others for their inability to distinguish between right and wrong.b)Although our desires may be good, our inability to achieve these objectives may force us to adopt questionable behavioural practices.c)We believe that our desires are inherently good and blame other factors when these intentions lead to undesirable behaviour.d)We believe that our actions are oriented towards our desires when they are actually contradictory in nature.Correct answer is option 'C'. Can you explain this answer? has been provided alongside types of Read the passage carefully and answer the following questions:Once upon a time — just a few years ago, actually — it was not uncommon to see headlines about prominent scientists, tech executives, and engineers warning portentously that the revolt of the robots was nigh. The mechanism varied, but the result was always the same: Uncontrollable machine self-improvement would one day overcome humanity. A dismal fate awaited us.Today we fear a different technological threat, one that centers not on machines but other humans. We see ourselves as imperilled by the terrifying social influence unleashed by the Internet in general and social media in particular. 
We hear warnings that nothing less than our collective ability to perceive reality is at stake, and that if we do not take corrective action we will lose our freedoms and way of life.Primal terror of mechanical menace has given way to fear of angry primates posting. Ironically, the roles have reversed. The robots are now humanity’s saviors, suppressing bad human mass behavior online with increasingly sophisticated filtering algorithms. We once obsessed about how to restrain machines we could not predict or control — now we worry about how to use machines to restrain humans we cannot predict or control. But the old problem hasn’t gone away: How do we know whether the machines will do as we wish?The shift away from the fear of unpredictable robots and toward the fear of chaotic human behavior may have been inevitable. For the problem of controlling the machines was always at heart a problem of human desire — the worry that realizing our desires using automated systems might prove catastrophic. The promised solution was to rectify human desire. But once we lost optimism about whether this was possible, the stage was set for the problem to be flipped on its head.The twentieth-century cyberneticist Norbert Wiener made what was for his time a rather startling argument: "The machine may be the final instrument of doom, but humanity may be the ultimate cause." In his 1960 essay “Some Moral and Technical Consequences of Automation,” Wiener recounts tales in which a person makes a wish and gets what was requested but not necessarily what he or she really desired. Hence, its imperative that we be absolutely sure of what desire we put into the machine. Wiener was of course not talking about social media, but we can easily see the analogy: It too achieves purposes, like mob frenzy or erroneous post deletions, that its human designers did not actually desire, even though they built the machines in a way that achieves those purposes. 
Nor does he envision, as in Terminator, a general intelligence that becomes self-aware and nukes everyone. Rather, he imagined a system that humans cannot easily stop and that acts on a misleading substitute for the military objectives humans actually value.However, there is a risk in Wiener’s distinction between what we desire and what actually happens in the end. It may create a false image of ourselves — an image in which our desires and our behaviors are wholly separable from each other. Instead of examining carefully whether our desires are in fact good, we may simply assume they are, and so blame bad behavior on the messy cooperation between ourselves and the “system.”Q.The risk in Wieners distinction between what we desire and what actually happens, in the end, is that:a)We may convince ourselves that our desires are always good and instead choose to blame others for their inability to distinguish between right and wrong.b)Although our desires may be good, our inability to achieve these objectives may force us to adopt questionable behavioural practices.c)We believe that our desires are inherently good and blame other factors when these intentions lead to undesirable behaviour.d)We believe that our actions are oriented towards our desires when they are actually contradictory in nature.Correct answer is option 'C'. Can you explain this answer? theory, EduRev gives you an