Read the passage carefully and answer the questions that follow:
The word “bias” commonly appears in conversations about mistaken judgments and unfortunate decisions. We use it when there is discrimination, for instance against women or in favor of Ivy League graduates. But the meaning of the word is broader: A bias is any predictable error that inclines your judgment in a particular direction. For instance, we speak of bias when forecasts of sales are consistently optimistic or investment decisions overly cautious.
Society has devoted a lot of attention to the problem of bias — and rightly so. But when it comes to mistaken judgments and unfortunate decisions, there is another type of error that attracts far less attention: noise. To see the difference between bias and noise, consider your bathroom scale. If on average the readings it gives are too high (or too low), the scale is biased. If it shows different readings when you step on it several times in quick succession, the scale is noisy. While bias is the average of errors, noise is their variability.
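To make the scale analogy concrete, here is a minimal sketch in Python (with made-up readings, not figures from the passage) showing that bias is the mean of the errors while noise is their spread:

```python
import statistics

# Hypothetical scale readings for a person whose true weight is 70.0 kg.
readings = [72.1, 71.8, 72.3, 71.9, 72.0]
errors = [r - 70.0 for r in readings]

bias = statistics.mean(errors)    # average of the errors: this scale reads about 2 kg high
noise = statistics.stdev(errors)  # variability of the errors across repeated weigh-ins

print(f"bias  = {bias:+.2f} kg")
print(f"noise = {noise:.2f} kg")
```

A scale could show zero bias (errors averaging out to nothing) and still be very noisy, which is exactly the distinction the passage draws.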
Although it is often ignored, noise is a large source of malfunction in society. In a 1981 study, for example, 208 federal judges were asked to determine the appropriate sentences for the same 16 cases. The cases were described by the characteristics of the offense (robbery or fraud, violent or not) and of the defendant (young or old, repeat or first-time offender, accomplice or principal). The average difference between the sentences that two randomly chosen judges gave for the same crime was more than 3.5 years. Considering that the mean sentence was seven years, that was a disconcerting amount of noise. Noise in real courtrooms is surely only worse, as actual cases are more complex and difficult to judge than stylized vignettes. It is hard to escape the conclusion that sentencing is in part a lottery, because the punishment can vary by many years depending on which judge is assigned to the case and on the judge’s state of mind on that day. The judicial system is unacceptably noisy.
Noise causes error, as does bias, but the two kinds of error are separate and independent. A company’s hiring decisions could be unbiased overall if some of its recruiters favor men and others favor women. However, its hiring decisions would be noisy, and the company would make many bad choices. Where does noise come from? There is much evidence that irrelevant circumstances can affect judgments. In the case of criminal sentencing, for instance, a judge’s mood, fatigue and even the weather can all have modest but detectable effects on judicial decisions. Another source of noise is that people can have different general tendencies. Judges often vary in the severity of the sentences they mete out: There are “hanging” judges and lenient ones.
A third source of noise is less intuitive, although it is usually the largest: People can have not only different general tendencies (say, whether they are harsh or lenient) but also different patterns of assessment (say, which types of cases they believe merit being harsh or lenient about). Underwriters differ in their views of what is risky, and doctors in their views of which ailments require treatment. We celebrate the uniqueness of individuals, but we tend to forget that, when we expect consistency, uniqueness becomes a liability.
Q. Which of the following statements is the author most likely to agree with?
Q. Which of the following can serve as an example of 'noise' as per the passage?
Q. According to the passage, noise in a judicial system could lead to which of the following consequences?
Q. According to the passage, noise and bias differ in which of the following ways?
Read the passage carefully and answer the questions that follow:
Information has never been more accessible or less reliable. So we are advised to check our sources carefully. There is so much talk of “fake news” that the term has entirely lost meaning. At school, we are taught to avoid Wikipedia, or at the very least never admit to using it in our citations. And most sources on the world wide web have been built without the standardized attributions that scaffold other forms of knowledge dissemination; they are therefore seen as degraded, even as they illuminate.
But it was only relatively recently that academic disciplines designed rigid systems for categorizing and organizing source material at all. Historian Anthony Grafton traces the genealogy of the footnote in an excellent book, which reveals many origin stories. It turns out that footnotes are related to early systems of marginalia, glosses, and annotation that existed in theology, early histories, and Medieval law. The footnote in something like its modern form seems to have been devised in the seventeenth century, and has proliferated since, with increasing standardization and rigor. And yet, Grafton writes, “appearances of uniformity are deceptive. To the inexpert, footnotes look like deep root systems, solid and fixed; to the connoisseur, however, they reveal themselves as anthills, swarming with constructive and combative activity.”
The purpose of citation, broadly speaking, is to give others credit, but it does much more than that. Famously, citations can be the sources of great enmity — a quick dismissal of a rival argument with a “cf.” They can serve a social purpose, as sly thank-yous to friends and mentors. They can perform a kind of box-checking of requisite major works. (As Grafton points out, the omission of these works can itself be a statement.) Attribution, significantly, allows others to check your work, or at least gives the illusion that they could, following a web of sources back to the origins. But perhaps above all else, citations serve a dual purpose that seems at once complementary and conflicting; they acknowledge a debt to a larger body of work while also conferring on oneself a certain kind of erudition and expertise.
Like many systems that appear meticulous, the writing of citations is a subjective art. Never more so than in fiction, where citation is an entirely other kind of animal, not required or even expected, except in the “acknowledgments” page, which is often a who’s who of the publishing world. But in the last two decades, bibliographies and sources cited pages have increasingly cropped up in the backs of novels. “It’s terribly off-putting,” James Wood said of this fad in 2006. “It would be very odd if Thomas Hardy had put at the end of all his books, ‘I’m thankful to the Dorset County Chronicle for dialect books from the 18th century.’ We expect authors to do that work, and I don’t see why we should praise them for that work.” Wood has a point, or had one — at their worst, citations in fiction are annoying, driven by an author’s anxiety to show off what he has read, to check the right boxes.
Q. Which of the following is a reason why citations are used?
Q. What can be inferred about the author's stance on including citations in works of fiction from the passage?
Q. "Citations serve a dual purpose that seems at once complementary and conflicting." Which of the following best captures the reason why the author makes this statement?
Q. Which of the following statements about footnotes can be inferred from the second paragraph?
I. According to Grafton, inexperts view footnotes as an immutable system with a singular purpose.
II. Footnotes, in their modern form, have attained a higher degree of standardization and rigour.
III. According to Grafton, experts view footnotes as a system that brews both beneficial and confrontational activities.
IV. Footnotes were an integral feature of Medieval literature, albeit in a form different from modern forms.
Read the passage carefully and answer the questions that follow:
Humiliation is more than an individual and subjective feeling. It is an instrument of political power, wielded with intent. In the late 1930s, Soviet show trials used every means to degrade anyone whom Stalin considered a potentially dangerous opponent. National Socialism copied this practice whenever it put ‘enemies of the people’ on trial. On the streets of Vienna in 1938, officials forced Jews to kneel on the pavement and scrub off anti-Nazi graffiti to the laughter of non-Jewish men, women and children. During the Cultural Revolution in China, young activists went out of their way to relentlessly humiliate senior functionaries - a common practice that, to this day, hasn’t been officially reprimanded or rectified.
Liberal democracies, especially after the Second World War, have taken issue with these practices. We like to believe that we have largely eradicated such politics from our societies. Compared with totalitarian regimes of the 20th century, this belief might seem justified. Yet we’re still a far cry from being ‘decent societies’ whose members and institutions, in the philosopher Avishai Margalit’s terms, ‘do not humiliate people’, but respect their dignity. Although construction of the road to decency began as early as around 1800, it was - and remains - paved with obstacles and exceptions.
Mass opposition to the politics of humiliation began from the early 19th century in Europe, as lower-class people increasingly objected to disrespectful treatment. Servants, journeymen and factory workers alike used the language of honour and concepts of personal and social self-worth - previously monopolised by the nobility and upper-middle classes - to demand that they not be verbally and physically insulted by employers and overseers.
This social change was enabled and supported by a new type of honour that followed the invention of ‘citizens’ (rather than subjects) in democratising societies. Citizens who carried political rights and duties were also seen as possessing civic honour. Traditionally, social honour had been stratified according to status and rank, but now civic honour pertained to each and every citizen, and this helped to raise their self-esteem and self-consciousness. Consequently, humiliation, and other demonstrations of the alleged inferiority of others, was no longer considered a legitimate means by which to exert power over one’s fellow citizens.
Historically then, humiliation could be felt - and objected to - only once the notion of equal citizenship and human dignity entered political discourse and practice. As long as society subscribed to the notion that some individuals are fundamentally superior to others, people had a hard time feeling humiliated. They might feel treated unfairly, and rebel. But they wouldn’t perceive such treatment as humiliating, per se. Humiliation can be experienced only when the victims consider themselves on a par with the perpetrator - not in terms of actual power, but in terms of rights and dignity. This explains the surge of libel suits in Europe during the 19th century: they reflected the democratised sense of honour in societies that had granted and institutionalised equal rights after the French Revolution (even in countries that didn’t have a revolution).
The evolution of the legal system in Western nations serves as both a gauge of, and an active participant in, these developments. From the Middle Ages to the early 19th century, public shaming was used widely as a supplementary punishment for men and women sentenced for unlawful acts.
Q. Which of the following is true based on the passage?
Q. Why does the author feel that humiliation could be felt only after the notion of equal citizenship and human dignity entered political discourse?
Q. Which of the following topics would be a likely continuation of the given discussion?
Q. Why does the author cite the example of the Soviet, National Socialism and the Cultural Revolution in China?
Read the passage carefully and answer the questions that follow:
Humans are strange. For a global species, we’re not particularly genetically diverse, thanks in part to how our ancient roaming explorations caused “founder effects” and “bottleneck events” that restricted our ancestral gene pool. We also have a truly outsize impact on the planetary environment without much in the way of natural attrition to trim our influence.
But the strangest thing of all is how we generate, exploit, and propagate information that is not encoded in our heritable genetic material, yet travels with us through time and space. Not only is much of that information represented in purely symbolic forms—alphabets, languages, binary codes—it is also represented in each brick, alloy, machine, and structure we build from the materials around us. Even the symbolic stuff is instantiated in some material form or the other, whether as ink on pages or electrical charges in nanoscale pieces of silicon. Altogether, this “dataome” has become an integral part of our existence. In fact, it may have always been an integral, and essential, part of our existence since our species of hominins became more and more distinct some 200,000 years ago.
For example, let’s consider our planetary impact. Today we can look at our species’ energy use and see that of the roughly six to seven terawatts of average global electricity production, about 3 percent to 4 percent is gobbled up by our digital electronics, in computing, storing and moving information. That might not sound too bad—except the growth trend of our digitized informational world is such that it requires approximately 40 percent more power every year. Even allowing for improvements in computational efficiency and power generation, this points to a world in some 20 years where all of the energy we currently generate in electricity will be consumed by digital electronics alone.
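A quick back-of-the-envelope check on this projection (an editorial sketch, assuming the 3-4 percent share simply compounds at 40 percent a year, with no allowance for efficiency gains):

$$0.035 \times 1.4^{n} \approx 1 \quad\Rightarrow\quad n = \frac{\ln(1/0.035)}{\ln 1.4} \approx 10 \text{ years}.$$

On the raw figures alone, digital electronics would swallow today's entire electricity output in roughly a decade; the author's 20-year horizon already credits improvements in computational efficiency and power generation with stretching that runway.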
And that’s just one facet of the energy demands of the human dataome. We still print onto paper, and the energy cost of a single page is the equivalent of burning five grams of high-quality coal. Digital devices, from microprocessors to hard drives, are also extraordinarily demanding in terms of their production, owing to the deep repurposing of matter that is required. We literally fight against the second law of thermodynamics to forge these exquisitely ordered, restricted, low-entropy structures out of raw materials that are decidedly high-entropy in their messy natural states. It is hard to see where this informational tsunami slows or ends.
Our dataome looks like a distinct, although entirely symbiotic phenomenon. Homo sapiens arguably only exists as a truly unique species because of our coevolution with a wealth of externalized information; starting from languages held only in neuronal structures through many generations, to our tools and abstractions on pottery and cave walls, all the way to today’s online world.
But symbiosis implies that all parties have their own interests to consider as well. Seeing ourselves this way opens the door to asking whether we’re calling all the shots. After all, in a gene-centered view of biology, all living things are simply temporary vehicles for the propagation and survival of information. In that sense the dataome is no different, and exactly how information survives is less important than the fact that it can do so. Once that information and its algorithmic underpinnings are in place in the world, it will keep going forever if it can.
Q. The author calls humans 'strange' for all of the following reasons, EXCEPT
Q. According to the author, which of the following reasons makes humans a truly unique species?
Q. Which of the following best captures the central idea discussed in the last paragraph?