Directions: The passage below is accompanied by a set of questions. Choose the best answer to each question.
Where does the mind end and the world begin? Is the mind locked inside its skull, sealed in with skin, or does it expand outward, merging with things and places and other minds that it thinks with? What if there are objects outside—a pen and paper, a phone—that serve the same function as parts of the brain, enabling it to calculate or remember? You might say that those are obviously not part of the mind, because they aren't in the head, but that would be to beg the question. So are they or aren't they?
Consider a woman named Inga, who wants to go to the Museum of Modern Art in New York City. She consults her memory, recalls that the museum is on Fifty-third Street, and off she goes. Now consider Otto, an Alzheimer's patient. Otto carries a notebook with him everywhere, in which he writes down information that he thinks he'll need. His memory is quite bad now, so he uses the notebook constantly, looking up facts or jotting down new ones. One day, he, too, decides to go to the museum, and, knowing that his notebook contains the address, he looks it up.
Before Inga consulted her memory or Otto his notebook, neither one of them had the address "Fifty-third Street" consciously in mind. So what's the difference?
Andy Clark, a philosopher and cognitive scientist at the University of Edinburgh, believes that there is no important difference between Inga and Otto, memory and notebook. Clark rejects the idea that a person is complete in himself, shut in against the outside, in no need of help.
How is it that human thought is so deeply different from that of other animals, even though our brains can be quite similar? The difference is due, he believes, to our heightened ability to incorporate props and tools into our thinking, to use them to think thoughts we could never have otherwise.
One problem with his Otto example, Clark thinks, is that it can suggest that a mind becomes extended only when the ordinary brain isn't working as it should and needs a supplement—something like a hearing aid for cognition. This in turn suggests that a person whose mind is deeply linked to devices must be a medical patient or else a rare, strange, hybrid creature out of science fiction—a cyborg. But in fact, he thinks we are all cyborgs, in the most natural way.
The idea of an extended mind has itself extended far beyond philosophy. Clark's idea has inspired research across the various disciplines of cognitive science (neuroscience, psychology, linguistics, A.I., robotics) and in distant fields beyond. It is clear to him that the way you understand yourself and your relation to the world is not just a matter of arguments: your life's experiences construct what you expect and want to be true.
Q. Which of the following statements best presents Clark's view?
Q. Which of the following statements, if true, would not substantiate Clark's claim that we are all cyborgs, in the most natural way?
Q. Which of the following statements strengthens the author's opinion about Clark's study?
Q. What, according to Clark, is true about Inga and Otto?
Directions: Read the passage and answer the questions based on it.
Ninety-seven years old: this is the age of the world's oldest university graduate, who recently received a Master's in Clinical Science, 76 years after attaining his first university degree. However, extreme as this example may seem, one question remains highly debatable: is it ever too late to study?
Studying is one of the main tools we use to gain knowledge of a variety of subjects and notions, and of the world around us in general. It is the basic channel through which we perceive a reality we would otherwise be unable to understand. Ignoring the opportunity to study would basically be equal to ignoring the whole world that surrounds us. It is therefore perfectly reasonable to further one's education after graduating from high school.
Higher education in the United States is viewed as a wise choice, as it is an investment in one's future and an asset that becomes one's stronghold when climbing the social ladder. Many high school graduates do not have the opportunity to continue their education right away. Therefore, when a person has already achieved a stable career and knows exactly which field he or she wants to deepen his or her knowledge in, he or she has the right to continue.
Since the American educational system is arguably a flexible one, you do not even have to become a full-time student anymore to learn more about the subjects that interest you. You can take a few courses at a university, pay the fees, and attend the classes for your own purposes.
Nevertheless, it is believed that after some point in life it becomes too late for activities such as being a student. Choosing to be a student in many cases means being willing and able to take on the whole package; otherwise, you risk feeling like an outcast and dropping out, even if the classes are interesting and the professors are fantastic. If you are considering applying to a university after a certain age, when the aspects mentioned above matter less to you, look at institutions that are more flexible and do not require living on campus or fully engaging in the academic and non-academic sides of university life.
Another factor that might get in the way of effective studying after a certain age is your own capability. If you decide to finally become a student, it is implied that you have the required desire to learn, listen, and absorb knowledge. Unfortunately, sometimes the desire itself is not enough. It is well known that with age our memory, attention, and ability to learn may decline considerably, and studying may become a much more difficult challenge than it would have been when we were younger. At the same time, if you have the dedication, motivation, and persistence to become a student at a later stage in life, I suppose these traits will aid you in achieving your aim as well.
Studying is a necessity rather than a privilege, and it should never be too late to study if a person wants to. There is no doubt that setting an expiration date on one's opportunity to learn and follow one's dreams would be wrong. The obstacles analyzed above may well stand in the way of studying, but they should not become an insurmountable barrier on the path to one's self-actualisation.
Q. According to the author of the passage, higher education is considered a "wise choice" in the United States because
Q. Each of the following statements can be derived about studying from the passage EXCEPT:
Q. Which of the following statements can be inferred from the information provided in the passage?
Q. As stated in the passage, the author believes that after a certain age
Directions: Read the following passage carefully and answer the given questions.
The human story is not looking much like a smooth record of upward progress just now. We are more fragile than we had been led to assume. And this means that we are also less different from our ancestors than we normally like to think – and that the more secure and prosperous members of the human race are less different from their fellow-human beings than they find comfortable. Our ancestors, right up to the modern age, knew they were fragile. A brief period of dazzling technological achievement combined with the absence of any major global war produced the belief that fragility was on the retreat and that making our global environment lastingly secure or controllable was within reach. But the same technical achievements that had generated this belief turned out to be among the major destabilizing influences in the material environment. And the absence of major global conflict sat alongside the proliferation of bitter and vicious local struggles, often civil wars that trailed on for decades.
For the foreseeable future, we shall have to get used to this fragility; and we are going to need considerable imaginative resources to cope with it. In the past, people have found resources like this in art and religion. Today it is crucial to learn to see the sciences as a resource and not a threat or a rival to what these older elements offer. Belittling the imaginative inspiration of authentic science is as fatuous as the view that sees the arts as just a pleasant extra in human life, or religion as an outdated kind of scientific explanation. Just because inflated claims are made for science, and unrealistic hopes are raised, it is dangerously easy to forget why and how it matters, and to be lured into the bizarre world in which the minority report in science is given inflated importance just because we have been disappointed about the utterly unqualified certainty that we thought we had been promised.
Science helps us live with our fragility by giving us a way of connecting with each other, recognising that it is the same world that we all live in. But what science alone does not do is build the motivation for a deeper level of connection.
This is where art comes in. Like the sciences, it makes us shelve our self-oriented habits for a bit. If science helps us discover that there are things to talk about that are not determined just by the self-interest of the people talking, art opens us up to how the stranger feels, uncovering connections where we had not expected them. What religion adds to this is a further level of motivation. Being more deeply connected will not take away the fragility of our condition, but it will help us see that we can actually learn from and with each other.
Q. "For the foreseeable future, we shall have to get used to this fragility; and we are going to need considerable imaginative resources to cope with it." What does the author try to imply when he says this?
Q. From the statement "we are also less different from our ancestors than we normally like to think", which of the following can be inferred?
Q. Which of the following most accurately describes the role of science, as presented in the passage, in tackling the fragility that we experience?
Q. It can be inferred that the author of the passage is most likely to agree with each of the following statements EXCEPT:
Directions: Read the passage and answer the following questions.
Polio – like several other diseases including COVID-19 – is an infection that spreads by stealth. For every case of paralytic or fatal polio, there are 100-200 cases without any symptoms.
Germs have a variety of strategies for reproducing and transmitting to new hosts – strategies shaped by the action of natural selection such that only the fittest survive. Some germs, such as smallpox, spread through contact, but they also have another, more powerful way of persisting: they're durable in the external environment. Smallpox virus particles can remain infectious for years if they're buried in a scab. That's one way the virus can keep infecting and spreading: it waits for a new host to happen by. Spreading through water or by insect vectors are strategies, too.
But spread by stealth is another strategy and, perhaps, the most terrifying of all. We have been told, for years, to fear pandemics: SARS and MERS (both caused by coronaviruses), Zika, Ebola, the highly pathogenic H5N1 bird flu. But perhaps we've been fearing the wrong thing. It's not just new diseases we have to fear. It's those that spread by stealth.
Variola virus, which caused smallpox, one of the deadliest viruses known, had one signal vulnerability: you could see it. Smallpox left its marks on everyone. Some cases were milder than others, but the pox had a tell. It let you know – with a germ's equivalent of a roar – where it had been, and that made it easier to eradicate than polio. You knew who was stricken, you learned whom they'd been in contact with, and you vaccinated those people. This technique – ring vaccination – drove smallpox off the Earth. Yet, despite years of relentless work, the World Health Organization has still been unable to eradicate polio.
Pathogens that spread by stealth have stalked us through human history. The Black Death of 1346-53 was the greatest pandemic in human history: it burned through the entire known world and killed an estimated 25 million people in Europe alone. But the Black Death behaved very differently from most plague outbreaks today. Plague is a rodent disease, carried, in much of the world, by rats and rat fleas. It's lethal, but it's sluggish. The Black Death moved through England at the rate of 2.5 miles a day. No rat-borne disease could possibly have spread that fast.
But researchers retrieving bacterial DNA from Black Death victims proved that plague did indeed cause the Black Death, leaving scientists with something of a mystery: how did it move so quickly? The answer lay in another, subtler aspect of Black Death transmission: it was spread by human fleas. Pulex irritans was so common an associate of our medieval ancestors that it was perhaps hardly noticed. The human flea hides in unwashed clothes and bed linens, and it jumps with ease from host to host. Like lung-borne plague, human flea-borne plague is transmitted by stealthy means.
The medical community developed antibiotics to treat the plague. But pathogens that spread by stealth through healthy humans might not need to moderate their virulence: not quickly, or perhaps not at all. Polio has been with us since the dawn of recorded history, its virulence unmodified over the course of time.
Q. Which one of the following statements best summarises the author's position made evident in the passage?
Q. Which one of the following best captures how the author draws a comparison between the polio virus and the Variola virus?
Q. Which of the following, if true, would lend most credence to the author's view that the Black Death was controlled?
Q. Which of the following is the author most likely to agree with, in respect of human flea-borne plague?