DIRECTIONS for the questions: Read the passage and answer the questions based on it.
Wherever I turn, the popular media, scientists and even fellow philosophers are telling me that I’m a machine or a beast. My ethics can be illuminated by the behavior of termites. My brain is a sloppy computer with a flicker of consciousness and the illusion of free will. I’m anything but human. While it would take more time and space than I have here to refute these views, I’d like to suggest why I stubbornly continue to believe that I’m a human being — something more than other animals, and essentially more than any computer.
Let’s begin with ethics. Many organisms carry genes that promote behavior that benefits other organisms. The classic example is ants: every individual insect is ready to sacrifice itself for the colony. As Edward O. Wilson explained in a recent essay, some biologists account for self-sacrificing behavior by the theory of kin selection, while Wilson and others favor group selection. Selection also operates between individuals: “within groups selfish individuals beat altruistic individuals, but groups of altruists beat groups of selfish individuals. Or, risking oversimplification, individual selection promoted sin, while group selection promoted virtue.” Wilson is cautious here, but some “evolutionary ethicists” don’t hesitate to claim that all we need in order to understand human virtue is the right explanation — whatever it may be — of how altruistic behavior evolved.
I have no beef with entomology or evolution, but I refuse to admit that they teach me much about ethics. Consider the fact that human action ranges to the extremes. People can perform extraordinary acts of altruism, including kindness toward other species — or they can utterly fail to be altruistic, even toward their own children. So whatever tendencies we may have inherited leave ample room for variation; our choices will determine which end of the spectrum we approach. This is where ethical discourse comes in — not in explaining how we’re “built,” but in deliberating on our own future acts. Should I cheat on this test? Should I give this stranger a ride? Knowing how my selfish and altruistic feelings evolved doesn’t help me decide at all. Most, though not all, moral codes advise me to cultivate altruism. But since the human race has evolved to be capable of a wide range of both selfish and altruistic behavior, there is no reason to say that altruism is superior to selfishness in any biological sense.
In fact, the very idea of an “ought” is foreign to evolutionary theory. It makes no sense for a biologist to say that some particular animal should be more cooperative, much less to claim that an entire species ought to aim for some degree of altruism. If we decide that we should neither “dissolve society” through extreme selfishness, as Wilson puts it, nor become “angelic robots” like ants, we are making an ethical judgment, not a biological one. Likewise, from a biological perspective it has no significance to claim that I should be more generous than I usually am, or that a tyrant ought to be deposed and tried. In short, a purely evolutionary ethics makes ethical discourse meaningless.
Some might draw the self-contradictory conclusion that we ought to drop the word “ought.” I prefer to conclude that ants are anything but human. They may feel pain and pleasure, which are the first glimmerings of purpose, but they’re nowhere near human (much less angelic) goodness. Whether we’re talking about ants, wolves, or naked mole rats, cooperative animal behavior is not human virtue. Any understanding of human good and evil has to deal with phenomena that biology ignores or tries to explain away — such as decency, self-respect, integrity, honor, loyalty or justice. These matters are debatable and uncertain — maybe permanently so. But that’s a far cry from being meaningless.
Q. The author of the passage:
Q. The author of the passage is most likely to agree with the statement:
Q. What does the author mean when he says that “a purely evolutionary ethics makes ethical discourse meaningless”?
Q. Identify the option that does not represent a tone or attitude maintained by the author of the passage.
DIRECTIONS for the questions: Read the passage and answer the questions based on it.
Leading innovation is not about creating a vision and inspiring others to execute it. But what do we mean by innovation? An innovation is anything that is both new and useful. Many of you have seen a Pixar movie, but very few of you would recognize Ed Catmull, the founder and CEO of Pixar. It took Ed and his colleagues nearly 20 years to create the first full-length C.G. movie. In the 20 years since, they've produced 14 movies. When many of us think about innovation, though, we think about an Einstein having an 'Aha!' moment. But innovation is not about solo genius, it's about collective genius. To make a Pixar movie takes about 250 people four to five years.
What we know is, at the heart of innovation is a paradox. You have to unleash the talents and passions of many people and you have to harness them into a work that is actually useful. Innovative organizations are communities that have three capabilities: creative abrasion, creative agility and creative resolution.
Creative abrasion is about being able to create a marketplace of ideas through debate and discourse. Individuals in innovative organizations learn how to inquire, they learn how to actively listen, they also learn how to advocate for their point of view.
Creative agility is about being able to test and refine that portfolio of ideas through quick pursuit, reflection and adjustment. It's about discovery-driven learning where you act, as opposed to plan, your way to the future. It's about running a series of experiments, and not a series of pilots. Experiments are usually about learning. When you get a negative outcome, you're still really learning. Pilots are often about being right. When they don't work, someone or something is to blame.
The final capability is creative resolution. This is about doing decision making in a way that you can actually combine even opposing ideas to reconfigure them in new combinations to produce a solution that is new and useful. When you look at innovative organizations, they never go along to get along. They have developed a rather patient and inclusive decision making process that allows for both/and solutions to arise and not simply either/or solutions.
The infrastructure group of Google is the group that has to keep the website up and running 24/7. When Google was about to introduce Gmail and YouTube, they knew that their data storage system wasn't adequate. Bill Coughran and his leadership team had to figure out what to do about this situation. Instead of creating a group to tackle this task, they decided to allow groups to emerge spontaneously around different alternatives. Two groups coalesced. Big Table proposed that they build on the current system. Build It From Scratch proposed that it was time for a whole new system.
Early on, the teams were encouraged to build prototypes so that they could "bump them up against reality and discover for themselves the strengths and weaknesses of their particular approach." One of the engineers went to Bill and said, "We're all too busy for this inefficient system of running parallel experiments." But as the process unfolded, he began to understand the wisdom of the approach. He admitted, "If you had forced us to all be on one team, we might have focused on proving who was right, and winning, and not on learning and discovering what was the best answer for Google."
We studied a general counsel in a pharmaceutical company who had to figure out how to get the outside lawyers, 19 competitors, to collaborate and innovate. We also studied Vineet Nayar at HCL Technologies. At HCL Technologies, the leaders had learned to see their role as setting direction and making sure that no one deviated from it. Vineet inverted the pyramid so that he could unleash the power of the many by loosening the stranglehold of the few.
Q. Why does the author consider the process of innovation paradoxical?
Q. What is the main difference between a pilot and an experiment?
Q. According to the article, what role does vision play in innovation leadership?
Q. What best exemplifies the learning-from-your-mistakes approach?
DIRECTIONS for the questions: Read the passage and answer the questions based on it.
Melancholy is a word that has fallen out of favor for describing the condition we now call depression. The fact that our language has changed, without the earlier word disappearing completely, indicates that we are still able to make use of both. Like most synonyms, melancholy and depression are not in fact synonymous, but slips of the tongue in a language we’re still learning. We keep trying to specify our experience of mental suffering, but all our new words constellate instead of consolidate meaning. In the essay collection Under the Sign of Saturn, Susan Sontag writes about her intellectual heroes, who all suffer solitude, ill temper, existential distress and creative block. They all breathe black air. According to her diagnostic model, they are all “melancholics.” Sontag doesn’t use the word depression in the company of her role models, but elsewhere she draws what seems like an easy distinction: “Depression is melancholy minus its charms.” But what are the charms of melancholy?
There is a long history in Western thought associating melancholy and genius. We have van Gogh with his severed ear. We have Montaigne confessing, “It was a melancholy humor … which first put into my head this raving concern with writing.” We have Nina Simone and Kurt Cobain, Thelonious Monk and David Foster Wallace. We have the stubborn conviction that all of these artists produced the work they did not in spite of, but somehow because of, their suffering. The charms of melancholy seem to be the charms of van Gogh’s quietly kaleidoscopic color palette: in one self-portrait, every color used on his face is echoed elsewhere in the surroundings. His white bandage complements the canvas in the corner, his yellow skin the wall, his blue hat the blue window. The charms of his work become the charms of his persona and his predicament.
But there’s another kind of portrait possible: the melancholic has not always and everywhere been cast as the romantic hero. In fact, Montaigne’s discussion of melancholy was meant as a kind of Neoplatonic corrective to the old medieval typology of the four humors which cast the “melancholic,” choking on an excess of black bile, as an unfortunate miser and sluggard, despised for his unsociability and general incompetence. That sounds more like it. Indeed, the medieval portrait of melancholy seems to have something in common with our understanding of depression today—or at least of the depressed person we see in pharmaceutical advertisements, whose disease seems to be lack of interest in the family barbecue. We do have our share of romantic geniuses—the suicide of David Foster Wallace is a dark lodestar over recent generations of writers. The pharmacological discourse of depression has not entirely replaced the romantic discourse of melancholy. But on the whole, contemporary American culture seems committed to a final solution.
Both stigmatization and sanctification come with real ethical dangers. On the one hand, there is the danger that hidden in the wish for the elimination of depressive symptoms is a wish for the elimination of other essential attributes of the depressed person—her posture of persistent critique, her intolerance for small talk. On the other hand there is the danger of taking pleasure in the pain of the melancholic, and of adding the expectation of insight to the already oppressive expectations the melancholic likely has for herself. But these ethical dangers are not simply imposed on the unfortunate person from the outside. It is not only the culture at large that oscillates between understanding psychological suffering as a sign of genius and a mark of shame. The language used in both discourses bears a striking resemblance to the language the depressed person uses in her own head.
Q. It can be inferred from the passage that the creative output of artists such as Nina Simone, Kurt Cobain, Thelonious Monk and David Foster Wallace is attributed to:
Q. The author of the passage:
Q. The tone of the author of the passage can be identified as:
Q. According to the author of the passage and the information given in the passage:
I. We have arrived at a single consolidated dictionary of terms to define mental suffering.
II. Melancholy and depression are not the same.
III. According to a certain stream of thought, melancholy is the source of artistic creation and endeavor and not the outcome of artistic processes.
DIRECTIONS for the questions: Read the passage and answer the questions based on it.
DISMAL may not be the most desirable of modifiers, but economists love it when people call their discipline a science. They consider themselves the most rigorous of social scientists. Yet whereas their peers in the natural sciences can edit genes and spot new planets, economists cannot reliably predict, let alone prevent, recessions or other economic events. Indeed, some claim that economics is based not so much on empirical observation and rational analysis as on ideology.
In October Russell Roberts, a research fellow at Stanford University's Hoover Institution, tweeted that if told an economist's view on one issue, he could confidently predict his or her position on any number of other questions. Prominent bloggers on economics have since furiously defended the profession, citing cases when economists changed their minds in response to new facts, rather than hewing stubbornly to dogma. Adam Ozimek, an economist at Moody's Analytics, pointed to Narayana Kocherlakota, president of the Federal Reserve Bank of Minneapolis from 2009 to 2015, who flipped from hawkishness to dovishness when reality failed to affirm his warnings of a looming surge in inflation. Tyler Cowen, an economist at George Mason, published a list of issues on which his opinion has shifted (he is no longer sure that income from capital is best left untaxed). Paul Krugman, an economist and New York Times columnist, chimed in. He changed his view on the minimum wage after research found that increases up to a certain point reduced employment only marginally (this newspaper had a similar change of heart).
Economists, to be fair, are constrained in ways that many scientists are not. They cannot brew up endless recessions in test tubes to work out what causes what, for instance. Yet the same restriction applies to many hard sciences, too: geologists did not need to recreate the Earth in the lab to get a handle on plate tectonics. The essence of science is agreeing on a shared approach for generating widely accepted knowledge. Science, wrote Paul Romer, an economist, in a paper published last year, leads to broad consensus. Politics does not.
Nor, it seems, does economics. In a paper on macroeconomics published in 2006, Gregory Mankiw of Harvard University declared: 'A new consensus has emerged about the best way to understand economic fluctuations.' But after the financial crisis prompted a wrenching recession, disagreement about the causes and cures raged. 'Schlock economics' was how Robert Lucas, a Nobel-prize-winning economist, described Barack Obama's plan for a big stimulus to revive the American economy. Mr Krugman, another Nobel-winner, reckoned Mr Lucas and his sort were responsible for a 'dark age of macroeconomics'.
As Mr Roberts suggested, economists tend to fall into rival camps defined by distinct beliefs. Anthony Randazzo of the Reason Foundation, a libertarian think-tank, and Jonathan Haidt of New York University recently asked a group of academic economists both moral questions (is it fairer to divide resources equally, or according to effort?) and questions about economics. They found a high correlation between the economists' views on ethics and on economics. The correlation was not limited to matters of debate – how much governments should intervene to reduce inequality, say – but also encompassed more empirical questions, such as how fiscal austerity affects economies on the ropes. Another study found that, in supposedly empirical research, right-leaning economists discerned more economically damaging effects from increases in taxes than left-leaning ones.
That is worrying. Yet is it unusual, compared with other fields? Gunnar Myrdal, yet another Nobel-winning economist, once argued that scientists of all sorts rely on preconceptions. "Questions must be asked before answers can be given," he quipped. A survey conducted in 2003 among practitioners of six social sciences found that economics was no more political than the other fields, just more finely balanced ideologically: left-leaning economists outnumbered right-leaning ones by three to one, compared with a ratio of 30:1 in anthropology.
Q. According to the information given in the passage:
I. Scientists and economists are similar.
II. Scientists and economists are not similar.
III. Scientists are more accurate than economists.
IV. Scientists are less disputative than economists.
Q. Economics is closer to:
DIRECTIONS for the question: Read the passage and answer the question based on it.
DISMAL may not be the most desirable of modifiers, but economists love it when people call their discipline a science. They consider themselves the most rigorous of social scientists. Yet whereas their peers in the natural sciences can edit genes and spot new planets, economists cannot reliably predict, let alone prevent, recessions or other economic events. Indeed, some claim that economics is based not so much on empirical observation and rational analysis as on ideology.
In October Russell Roberts, a research fellow at Stanford University's Hoover Institution, tweeted that if told an economist's view on one issue, he could confidently predict his or her position on any number of other questions. Prominent bloggers on economics have since furiously defended the profession, citing cases when economists changed their minds in response to new facts, rather than hewing stubbornly to dogma. Adam Ozimek, an economist at Moody's Analytics, pointed to Narayana Kocherlakota, president of the Federal Reserve Bank of Minneapolis from 2009 to 2015, who flipped from hawkishness to dovishness when reality failed to affirm his warnings of a looming surge in inflation. Tyler Cowen, an economist at George Mason, published a list of issues on which his opinion has shifted (he is no longer sure that income from capital is best left untaxed). Paul Krugman, an economist and New York Times columnist, chimed in. He changed his view on the minimum wage after research found that increases up to a certain point reduced employment only marginally (this newspaper had a similar change of heart).
Economists, to be fair, are constrained in ways that many scientists are not. They cannot brew up endless recessions in test tubes to work out what causes what, for instance. Yet the same restriction applies to many hard sciences, too: geologists did not need to recreate the Earth in the lab to get a handle on plate tectonics. The essence of science is agreeing on a shared approach for generating widely accepted knowledge. Science, wrote Paul Romer, an economist, in a paper published last year, leads to broad consensus. Politics does not.
Nor, it seems, does economics. In a paper on macroeconomics published in 2006, Gregory Mankiw of Harvard University declared: 'A new consensus has emerged about the best way to understand economic fluctuations.' But after the financial crisis prompted a wrenching recession, disagreement about the causes and cures raged. 'Schlock economics' was how Robert Lucas, a Nobel-prize-winning economist, described Barack Obama's plan for a big stimulus to revive the American economy. Mr Krugman, another Nobel-winner, reckoned Mr Lucas and his sort were responsible for a 'dark age of macroeconomics'.
As Mr Roberts suggested, economists tend to fall into rival camps defined by distinct beliefs. Anthony Randazzo of the Reason Foundation, a libertarian think-tank, and Jonathan Haidt of New York University recently asked a group of academic economists both moral questions (is it fairer to divide resources equally, or according to effort?) and questions about economics. They found a high correlation between the economists' views on ethics and on economics. The correlation was not limited to matters of debate – how much governments should intervene to reduce inequality, say – but also encompassed more empirical questions, such as how fiscal austerity affects economies on the ropes. Another study found that, in supposedly empirical research, right-leaning economists discerned more economically damaging effects from increases in taxes than left-leaning ones.
That is worrying. Yet is it unusual, compared with other fields? Gunnar Myrdal, yet another Nobel-winning economist, once argued that scientists of all sorts rely on preconceptions. "Questions must be asked before answers can be given," he quipped. A survey conducted in 2003 among practitioners of six social sciences found that economics was no more political than the other fields, just more finely balanced ideologically: left-leaning economists outnumbered right-leaning ones by three to one, compared with a ratio of 30:1 in anthropology.
Q. It can be inferred from the passage that:
DIRECTIONS for the question: Read the passage and answer the question based on it.
DISMAL may not be the most desirable of modifiers, but economists love it when people call their discipline a science. They consider themselves the most rigorous of social scientists. Yet whereas their peers in the natural sciences can edit genes and spot new planets, economists cannot reliably predict, let alone prevent, recessions or other economic events. Indeed, some claim that economics is based not so much on empirical observation and rational analysis as on ideology.
In October Russell Roberts, a research fellow at Stanford University's Hoover Institution, tweeted that if told an economist's view on one issue, he could confidently predict his or her position on any number of other questions. Prominent bloggers on economics have since furiously defended the profession, citing cases when economists changed their minds in response to new facts, rather than hewing stubbornly to dogma. Adam Ozimek, an economist at Moody's Analytics, pointed to Narayana Kocherlakota, president of the Federal Reserve Bank of Minneapolis from 2009 to 2015, who flipped from hawkishness to dovishness when reality failed to affirm his warnings of a looming surge in inflation. Tyler Cowen, an economist at George Mason, published a list of issues on which his opinion has shifted (he is no longer sure that income from capital is best left untaxed). Paul Krugman, an economist and New York Times columnist, chimed in. He changed his view on the minimum wage after research found that increases up to a certain point reduced employment only marginally (this newspaper had a similar change of heart).
Economists, to be fair, are constrained in ways that many scientists are not. They cannot brew up endless recessions in test tubes to work out what causes what, for instance. Yet the same restriction applies to many hard sciences, too: geologists did not need to recreate the Earth in the lab to get a handle on plate tectonics. The essence of science is agreeing on a shared approach for generating widely accepted knowledge. Science, wrote Paul Romer, an economist, in a paper published last year, leads to broad consensus. Politics does not.
Nor, it seems, does economics. In a paper on macroeconomics published in 2006, Gregory Mankiw of Harvard University declared: 'A new consensus has emerged about the best way to understand economic fluctuations.' But after the financial crisis prompted a wrenching recession, disagreement about the causes and cures raged. 'Schlock economics' was how Robert Lucas, a Nobel-prize-winning economist, described Barack Obama's plan for a big stimulus to revive the American economy. Mr Krugman, another Nobel-winner, reckoned Mr Lucas and his sort were responsible for a 'dark age of macroeconomics'.
As Mr Roberts suggested, economists tend to fall into rival camps defined by distinct beliefs. Anthony Randazzo of the Reason Foundation, a libertarian think-tank, and Jonathan Haidt of New York University recently asked a group of academic economists both moral questions (is it fairer to divide resources equally, or according to effort?) and questions about economics. They found a high correlation between the economists' views on ethics and on economics. The correlation was not limited to matters of debate – how much governments should intervene to reduce inequality, say – but also encompassed more empirical questions, such as how fiscal austerity affects economies on the ropes. Another study found that, in supposedly empirical research, right-leaning economists discerned more economically damaging effects from increases in taxes than left-leaning ones.
That is worrying. Yet is it unusual, compared with other fields? Gunnar Myrdal, yet another Nobel-winning economist, once argued that scientists of all sorts rely on preconceptions. "Questions must be asked before answers can be given," he quipped. A survey conducted in 2003 among practitioners of six social sciences found that economics was no more political than the other fields, just more finely balanced ideologically: left-leaning economists outnumbered right-leaning ones by three to one, compared with a ratio of 30:1 in anthropology.
Q. The tone and attitude of the author of the passage can be said to be:
DIRECTIONS for the question: Read the passage and answer the question based on it.
DISMAL may not be the most desirable of modifiers, but economists love it when people call their discipline a science. They consider themselves the most rigorous of social scientists. Yet whereas their peers in the natural sciences can edit genes and spot new planets, economists cannot reliably predict, let alone prevent, recessions or other economic events. Indeed, some claim that economics is based not so much on empirical observation and rational analysis as on ideology.
In October Russell Roberts, a research fellow at Stanford University's Hoover Institution, tweeted that if told an economist's view on one issue, he could confidently predict his or her position on any number of other questions. Prominent bloggers on economics have since furiously defended the profession, citing cases when economists changed their minds in response to new facts, rather than hewing stubbornly to dogma. Adam Ozimek, an economist at Moody's Analytics, pointed to Narayana Kocherlakota, president of the Federal Reserve Bank of Minneapolis from 2009 to 2015, who flipped from hawkishness to dovishness when reality failed to affirm his warnings of a looming surge in inflation. Tyler Cowen, an economist at George Mason, published a list of issues on which his opinion has shifted (he is no longer sure that income from capital is best left untaxed). Paul Krugman, an economist and New York Times columnist, chimed in. He changed his view on the minimum wage after research found that increases up to a certain point reduced employment only marginally (this newspaper had a similar change of heart).
Economists, to be fair, are constrained in ways that many scientists are not. They cannot brew up endless recessions in test tubes to work out what causes what, for instance. Yet the same restriction applies to many hard sciences, too: geologists did not need to recreate the Earth in the lab to get a handle on plate tectonics. The essence of science is agreeing on a shared approach for generating widely accepted knowledge. Science, wrote Paul Romer, an economist, in a paper published last year, leads to broad consensus. Politics does not.
Nor, it seems, does economics. In a paper on macroeconomics published in 2006, Gregory Mankiw of Harvard University declared: 'A new consensus has emerged about the best way to understand economic fluctuations.' But after the financial crisis prompted a wrenching recession, disagreement about the causes and cures raged. 'Schlock economics' was how Robert Lucas, a Nobel-prize-winning economist, described Barack Obama's plan for a big stimulus to revive the American economy. Mr Krugman, another Nobel-winner, reckoned Mr Lucas and his sort were responsible for a 'dark age of macroeconomics'.
As Mr Roberts suggested, economists tend to fall into rival camps defined by distinct beliefs. Anthony Randazzo of the Reason Foundation, a libertarian think-tank, and Jonathan Haidt of New York University recently asked a group of academic economists both moral questions (is it fairer to divide resources equally, or according to effort?) and questions about economics. They found a high correlation between the economists' views on ethics and on economics. The correlation was not limited to matters of debate – how much governments should intervene to reduce inequality, say – but also encompassed more empirical questions, such as how fiscal austerity affects economies on the ropes. Another study found that, in supposedly empirical research, right-leaning economists discerned more economically damaging effects from increases in taxes than left-leaning ones.
That is worrying. Yet is it unusual, compared with other fields? Gunnar Myrdal, yet another Nobel-winning economist, once argued that scientists of all sorts rely on preconceptions. "Questions must be asked before answers can be given," he quipped. A survey conducted in 2003 among practitioners of six social sciences found that economics was no more political than the other fields, just more finely balanced ideologically: left-leaning economists outnumbered right-leaning ones by three to one, compared with a ratio of 30:1 in anthropology.
Q. A suitable title for the passage is:
DIRECTIONS for the question: Identify the most appropriate summary for the paragraph.
Pretentiousness is always someone else's crime. It's never a felony in the first person. You might cop to the odd personality flaw; the occasional pirouette of self-deprecation is nothing if not good manners. Most likely one of those imperfections nobody minds owning up to, something that looks charming in the right circumstances. Being absent-minded. A bad dancer. Partial to a large gin after work. But being pretentious? That's premier-league obnoxious, the team-mate of arrogance, condescension, careerism and pomposity. Pretension brunches with fraudulence and snobbery, and shops for baubles with the pseudo and the vacuous. Whatever it is you do, I'll bet you'd never think it pretentious. That's because you do it, and pretension never self-identifies. Pretentiousness happens over there. In the way he writes. In her music taste. In the way they dress. And who hasn't, at one time or another, described a person, place or thing as pretentious?
DIRECTIONS for the question: Five sentences related to a topic are given below. Four of them can be put together to form a meaningful and coherent short paragraph. Identify the odd one out. Choose its number as your answer and key it in.
1. Wait, though. Rub your eyes, refocus your gaze, and really, is there any real reason why this ought to be weird?
2. Earlier this year, the 17-year-old son of Will Smith and Jada Pinkett Smith, brother of Willow, appeared in a Louis Vuitton womenswear campaign.
3. If you wanted to choose a celebrity avatar for everything supposedly weird about The Youth, you could do worse than Jaden Smith: a gnomic tweeter, sometime crystal devotee, self-described "Future of Music, Photography, and Filmmaking," who has little attachment to the gender binary.
4. Jaden Smith, quasar of contemporary teen behaviors, wears a fringed white top and an embellished, knee-length black skirt.
5. The impulse to re-examine assumptions has had practical consequences – gender-neutral college dorms and high-school bathrooms – and cultural ripples.
DIRECTIONS for the question: Four sentences related to a topic are given below. Three of them can be put together to form a meaningful and coherent short paragraph. Identify the odd one out. Choose its number as your answer and key it in.
1. Tensions had been brewing within the Gulf Cooperation Council for the past six years, ever since Qatar started actively supporting the Muslim Brotherhood, a political Islamist movement that the Saudis and their close allies see as a threat to stability in West Asia.
2. The countries said they would halt all land, air and sea traffic with Qatar, eject its diplomats and order Qatari citizens to leave all the Gulf states within 14 days.
3. Saudi Arabia blames Qatar for “harbouring a multitude of terrorist and sectarian groups that aim to create instability in the region”. But such allegations can be raised against most Gulf countries.
4. The dramatic decision by Saudi Arabia, the United Arab Emirates, Bahrain, Egypt and Yemen to suspend diplomatic ties with Qatar could have far-reaching economic and geopolitical consequences.
DIRECTIONS for the question: The five sentences (labelled 1, 2, 3, 4 and 5) given in this question, when properly sequenced, form a coherent paragraph. Decide on the proper order for the sentences and key in this sequence of five numbers as your answer.
1. Technology has given people too many choices, and then instantly relieved them of the need to make them.
2. It now turns out that, even in a potentially unlimited digital marketplace, social networks, rankings, recommendation algorithms and the like focus people's attention on just a few items in the same way.
3. Whatever the arena, the biggest crowds will increasingly gravitate towards just a small number of the most popular hits.
4. Until recently that was seen as a natural consequence of the physical limits on production and distribution.
5. The story of mass entertainment in the internet age is a paradox.
DIRECTIONS for the question: Identify the most appropriate summary for the paragraph.
In a 1994 case, the Supreme Court clarified the issue of transformative use. Has the material been used to help create something new, or merely copied verbatim into another work? When taking portions of copyrighted work, ask yourself the following questions: Has the material you have taken from the original work been transformed by adding new expression or meaning? Was value added to the original by creating new information, new aesthetics, new insights, and understandings?
Q. Based on the guidelines above, which of the following would be a summary written in example form? Write the key for the most appropriate option.
DIRECTIONS for the question: The five sentences (labelled 1, 2, 3, 4 and 5) given in this question, when properly sequenced, form a coherent paragraph. Decide on the proper order for the sentences and key in this sequence of five numbers as your answer.
1. Thus, unless the plastic is specially designed to decompose in the soil, such materials can last a very long time.
2. This depends upon the plastic (polymer) and the environment to which it is exposed.
3. Commercially available plastics (polyolefin like polyethylene, polypropylene, etc.) have been further made resistant to decomposition by means of additional stabilizers like antioxidants.
4. Plastics do decompose, though not fully, over a very long period of time (on average, 100 to 500 years).
5. This means that soil microorganisms that can easily attack and decompose things like wood and other formerly living materials cannot break the various kinds of strong bonds that are common to most plastics.
DIRECTIONS for the question: The four sentences (labelled 1, 2, 3 and 4) given in this question, when properly sequenced, form a coherent paragraph. Decide on the proper order for the sentences and key in this sequence of four numbers as your answer.
1. They turn and aerate the soil and make passageways for water drainage, playing a vital role in maintaining soil fecundity and health; they truly are, as biologist E. O. Wilson has pointed out, the little things that run the world.
2. All their materials, even their most deadly chemical weapons, are biodegradable, and when they return to the soil, they supply nutrients, restoring in the process some of those that were taken to support the colony.
3. Just as there is almost no corner of the globe untouched by human presence, there is almost no land habitat, from harsh desert to inner city, untouched by some species of ant.
4. But although they may run the world, they do not overrun it.
DIRECTIONS for the question: Five sentences related to a topic are given below. Four of them can be put together to form a meaningful and coherent short paragraph. Identify the odd one out. Choose its number as your answer and key it in.
1. This, rather than self-discipline or self-control per se, is what children would benefit from developing.
2. What counts is the capacity to choose whether and when to persevere, to control oneself, to follow the rules rather than the simple tendency to do these things in every situation.
3. Remarkably, the predictive power of self-control is comparable to that of either general intelligence or family socioeconomic status.
4. But such a formulation is very different from the uncritical celebration of self-discipline that we find in the field of education and throughout our culture.
5. It’s not just that self-control isn’t always good; it’s that a lack of self-control isn’t always bad because it may “provide the basis for spontaneity, flexibility, expressions of interpersonal warmth, openness to experience, and creative recognitions.”
DIRECTIONS for the question: Read the information given below and answer the question that follows.
All Iconia computers are available with at least one pre-loaded program from each of three categories: Word Processors – F, G, H; Databases – O, P, R; Browsers – T, U, W. When installing these programs, the company ensures that:
Q. How many possible combinations of programs can be loaded onto the computer if P is the only database loaded on the computer?
DIRECTIONS for the question: Read the information given below and answer the question that follows.
All Iconia computers are available with at least one pre-loaded program from each of three categories: Word Processors – F, G, H; Databases – O, P, R; Browsers – T, U, W. When installing these programs, the company ensures that:
Q. If two browsers are loaded on the computer, which of the following cannot be true?
DIRECTIONS for the question: Go through the following graph/information and answer the question that follows.
As a part of the Best City contest, a news channel invited ten eminent personalities - Q, R, S, T, U, V, W, X, Y and Z - and asked each of them to vote for one of the four shortlisted cities - Bangalore, Delhi, Hyderabad and Mumbai - in each of the two categories - most beautiful city and most happening city. The sum of the number of votes obtained by a city in these two categories is considered to be the total number of votes for the city. The city with the maximum total number of votes is finally adjudged the Best City.
After the voting, it was found that:
(i) No two cities got the same number of votes in the most beautiful city category, and the same was the case in the most happening city category. However, every city got at least one vote in each of the two categories.
(ii) No two cities got the same total number of votes, and Hyderabad emerged as the winner of the contest.
(iii) In the case of S and T, in each of the two categories, S voted for the same city as T. However, the same cannot be said to be true for any other pair of persons.
(iv) In the most beautiful city category, no other person voted for the city for which R voted, and the same was the case in the most happening city category.
(v) Except V, who voted for Hyderabad in both the categories, and Y, who voted for Bangalore in both the categories, no other person voted for the same city in both the categories.
(vi) Q did not vote for Hyderabad in the most beautiful city category.
(vii) U and W voted for the same city in the most happening city category.
(viii) In the most beautiful city category, only W and X voted for Mumbai, while S voted for Bangalore.
Q. Which city did Z vote for as the most beautiful city?
DIRECTIONS for the question: Go through the following graph/information and answer the question that follows.
As a part of the Best City contest, a news channel invited ten eminent personalities - Q, R, S, T, U, V, W, X, Y and Z - and asked each of them to vote for one of the four shortlisted cities - Bangalore, Delhi, Hyderabad and Mumbai - in each of the two categories - most beautiful city and most happening city. The sum of the number of votes obtained by a city in these two categories is considered to be the total number of votes for the city. The city with the maximum total number of votes is finally adjudged the Best City.
After the voting, it was found that:
(i) No two cities got the same number of votes in the most beautiful city category, and the same was the case in the most happening city category. However, every city got at least one vote in each of the two categories.
(ii) No two cities got the same total number of votes, and Hyderabad emerged as the winner of the contest.
(iii) In the case of S and T, in each of the two categories, S voted for the same city as T. However, the same cannot be said to be true for any other pair of persons.
(iv) In the most beautiful city category, no other person voted for the city for which R voted, and the same was the case in the most happening city category.
(v) Except V, who voted for Hyderabad in both the categories, and Y, who voted for Bangalore in both the categories, no other person voted for the same city in both the categories.
(vi) Q did not vote for Hyderabad in the most beautiful city category.
(vii) U and W voted for the same city in the most happening city category.
(viii) In the most beautiful city category, only W and X voted for Mumbai, while S voted for Bangalore.
Q. Which of the following pairs of persons voted for Bangalore as the most beautiful city?
DIRECTIONS for the question: Go through the following graph/information and answer the question that follows.
As a part of the Best City contest, a news channel invited ten eminent personalities - Q, R, S, T, U, V, W, X, Y and Z - and asked each of them to vote for one of the four shortlisted cities - Bangalore, Delhi, Hyderabad and Mumbai - in each of the two categories - most beautiful city and most happening city. The sum of the number of votes obtained by a city in these two categories is considered to be the total number of votes for the city. The city with the maximum total number of votes is finally adjudged the Best City.
After the voting, it was found that:
(i) No two cities got the same number of votes in the most beautiful city category, and the same was the case in the most happening city category. However, every city got at least one vote in each of the two categories.
(ii) No two cities got the same total number of votes, and Hyderabad emerged as the winner of the contest.
(iii) In the case of S and T, in each of the two categories, S voted for the same city as T. However, the same cannot be said to be true for any other pair of persons.
(iv) In the most beautiful city category, no other person voted for the city for which R voted, and the same was the case in the most happening city category.
(v) Except V, who voted for Hyderabad in both the categories, and Y, who voted for Bangalore in both the categories, no other person voted for the same city in both the categories.
(vi) Q did not vote for Hyderabad in the most beautiful city category.
(vii) U and W voted for the same city in the most happening city category.
(viii) In the most beautiful city category, only W and X voted for Mumbai, while S voted for Bangalore.
Q. Which of the following pairs of persons voted for the same city in the most happening city category?