"The Psychologist seemed about to speak to me but changed his mind. Then the Time Traveler put forth his finger towards the lever. “No,” he said suddenly. “Lend me your hand.” He took the Psychologist’s hand in his own and told him to put out his forefinger. So that it was the Psychologist himself who sent forth the model Time Machine on its interminable voyage, we all saw the lever turn. I am absolutely sure there was no trickery. There was a breath of wind, and the lamp flame jumped. One of the candles on the mantel was blown out, and the little machine suddenly swung round, became indistinct, and was seen as a ghost for a second perhaps, as an eddy of faintly glittering brass and ivory; and it was gone — vanished!
The Psychologist recovered from his stupor and suddenly looked under the table.
We stared at each other. “Do you seriously believe that machine has travelled into time?” said the Medical Man.
“You mean to say that machine has travelled into the future?” said Filby.
“Into the future or the past,” said the Time Traveller.
After an interval, the Psychologist had an inspiration. “It must have gone into the past if it has gone anywhere,” he said.
“Because I presume that it has not moved in space, and if it travelled into the future it would still be here all this time, since it must have travelled through this time.”
“But,” said I, “if it travelled into the past it would have been visible when we came first into this room; and last Thursday when we were here; and the Thursday before that; and so forth!”
“That’s a simple point of psychology. It’s plain enough and helps the paradox delightfully. We cannot see it, nor can we appreciate this machine, any more than we can the spoke of a wheel spinning, or a bullet flying through the air. If it is travelling through time fifty times faster than we are, if it gets through a minute while we get through a second, the impression it creates will, of course, only be one-fiftieth of what it would make if it were not travelling in time.” He passed his hand through the space in which the machine had been.
“Would you like to see the Time Machine itself?” asked the Time Traveller. And therewith, taking the lamp in his hand, he led the way down the long, draughty corridor to his laboratory. “Look here,” said the Medical Man, “are you perfectly serious? Or is this a trick, like that ghost you showed us last Christmas?”
“Upon that machine,” said the Time Traveller, holding the lamp aloft, “I intend to explore time. Is that plain? I was never more serious in my life.”
None of us quite knew how to take it.
I caught Filby’s eye over the shoulder of the Medical Man, and he winked at me solemnly.
Q. Which of the following statements can be inferred from the passage?
"The Psychologist seemed about to speak to me but changed his mind. Then the Time Traveler put forth his finger towards the lever. “No,” he said suddenly. “Lend me your hand.” He took the Psychologist’s hand in his own and told him to put out his forefinger. So that it was the Psychologist himself who sent forth the model Time Machine on its interminable voyage. We all saw the lever turn. I am absolutely certain there was no trickery. There was a breath of wind, and the lamp flame jumped. One of the candles on the mantel was blown out, and the little machine suddenly swung round, became indistinct, and was seen as a ghost for a second perhaps, as an eddy of faintly glittering brass and ivory; and it was gone — vanished!
The Psychologist recovered from his stupor and suddenly looked under the table.
We stared at each other. “Do you seriously believe that machine has travelled into time?” said the Medical Man.
“You mean to say that machine has travelled into the future?” said Filby.
“Into the future or the past” said Time Traveler.
After an interval, the Psychologist had an inspiration. “It must have gone into the past if it has gone anywhere,” he said.
“Because I presume that it has not moved in space and if it travelled into the future it would still be here all this time since it must have travelled through this time.”
“But,” said I, “If it travelled into the past it would have been visible when we came first into this room; and last Thursday when we were here; and the Thursday before that; and so forth!”
“That’s a simple point of psychology. It’s plain enough and helps the paradox delightfully. We cannot see it, nor can we appreciate this machine, any more than we can speak of a wheel spinning, or a bullet flying through the air. If it is travelling through time fifty times faster than we are, if it gets through a minute while we get through a second, the impression it creates will, of course, only be one-fiftieth of what it would make if it were not travelling in time.” He passed his hand through the space in which the machine had been.
“Would you like to see the Time Machine itself?” asked the Time Traveler. And therewith, taking the lamp in his hand, he led the way down the long, draughty corridor to his laboratory. “Look here,” said the Medical Man, “are you perfectly serious? Or is this a trick, like that ghost you showed us last Christmas?”
“Upon that machine,” said the Time Traveller, holding the lamp aloft, “I intend to explore time. Is that plain? I was never more serious in my life.
” None of us quite knew how to take it.
I caught Filby’s eye over the shoulder of the Medical Man, and he winked at me solemnly."
Q. What do you understand by the word ‘stupor’ as used in the passage?
"The Psychologist seemed about to speak to me but changed his mind. Then the Time Traveler put forth his finger towards the lever. “No,” he said suddenly. “Lend me your hand.” He took the Psychologist’s hand in his own and told him to put out his forefinger. So that it was the Psychologist himself who sent forth the model Time Machine on its interminable voyage. We all saw the lever turn. I am absolutely certain there was no trickery. There was a breath of wind, and the lamp flame jumped. One of the candles on the mantel was blown out, and the little machine suddenly swung round, became indistinct, and was seen as a ghost for a second perhaps, as an eddy of faintly glittering brass and ivory; and it was gone — vanished!
The Psychologist recovered from his stupor and suddenly looked under the table.
We stared at each other. “Do you seriously believe that machine has travelled into time?” said the Medical Man.
“You mean to say that machine has travelled into the future?” said Filby.
“Into the future or the past” said Time Traveler.
After an interval, the Psychologist had an inspiration. “It must have gone into the past if it has gone anywhere,” he said.
“Because I presume that it has not moved in space and if it travelled into the future it would still be here all this time since it must have travelled through this time.”
“But,” said I, “If it travelled into the past it would have been visible when we came first into this room; and last Thursday when we were here; and the Thursday before that; and so forth!”
“That’s a simple point of psychology. It’s plain enough and helps the paradox delightfully. We cannot see it, nor can we appreciate this machine, any more than we can speak of a wheel spinning, or a bullet flying through the air. If it is travelling through time fifty times faster than we are, if it gets through a minute while we get through a second, the impression it creates will, of course, only be one-fiftieth of what it would make if it were not travelling in time.” He passed his hand through the space in which the machine had been.
“Would you like to see the Time Machine itself?” asked the Time Traveler. And therewith, taking the lamp in his hand, he led the way down the long, draughty corridor to his laboratory. “Look here,” said the Medical Man, “are you perfectly serious? Or is this a trick, like that ghost you showed us last Christmas?”
“Upon that machine,” said the Time Traveller, holding the lamp aloft, “I intend to explore time. Is that plain? I was never more serious in my life.
” None of us quite knew how to take it.
I caught Filby’s eye over the shoulder of the Medical Man, and he winked at me solemnly."
Q. According to the passage, which of the following statements is true?
"The Psychologist seemed about to speak to me but changed his mind. Then the Time Traveler put forth his finger towards the lever. “No,” he said suddenly. “Lend me your hand.” He took the Psychologist’s hand in his own and told him to put out his forefinger. So that it was the Psychologist himself who sent forth the model Time Machine on its interminable voyage. We all saw the lever turn. I am absolutely certain there was no trickery. There was a breath of wind, and the lamp flame jumped. One of the candles on the mantel was blown out, and the little machine suddenly swung round, became indistinct, and was seen as a ghost for a second perhaps, as an eddy of faintly glittering brass and ivory; and it was gone — vanished!
The Psychologist recovered from his stupor and suddenly looked under the table.
We stared at each other. “Do you seriously believe that machine has travelled into time?” said the Medical Man.
“You mean to say that machine has travelled into the future?” said Filby.
“Into the future or the past” said Time Traveler.
After an interval, the Psychologist had an inspiration. “It must have gone into the past if it has gone anywhere,” he said.
“Because I presume that it has not moved in space and if it travelled into the future it would still be here all this time since it must have travelled through this time.”
“But,” said I, “If it travelled into the past it would have been visible when we came first into this room; and last Thursday when we were here; and the Thursday before that; and so forth!”
“That’s a simple point of psychology. It’s plain enough and helps the paradox delightfully. We cannot see it, nor can we appreciate this machine, any more than we can speak of a wheel spinning, or a bullet flying through the air. If it is travelling through time fifty times faster than we are, if it gets through a minute while we get through a second, the impression it creates will, of course, only be one-fiftieth of what it would make if it were not travelling in time.” He passed his hand through the space in which the machine had been.
“Would you like to see the Time Machine itself?” asked the Time Traveler. And therewith, taking the lamp in his hand, he led the way down the long, draughty corridor to his laboratory. “Look here,” said the Medical Man, “are you perfectly serious? Or is this a trick, like that ghost you showed us last Christmas?”
“Upon that machine,” said the Time Traveller, holding the lamp aloft, “I intend to explore time. Is that plain? I was never more serious in my life.
” None of us quite knew how to take it.
I caught Filby’s eye over the shoulder of the Medical Man, and he winked at me solemnly."
Q. What do we understand about the character of the narrator from the passage?
"The Psychologist seemed about to speak to me but changed his mind. Then the Time Traveler put forth his finger towards the lever. “No,” he said suddenly. “Lend me your hand.” He took the Psychologist’s hand in his own and told him to put out his forefinger. So that it was the Psychologist himself who sent forth the model Time Machine on its interminable voyage. We all saw the lever turn. I am absolutely certain there was no trickery. There was a breath of wind, and the lamp flame jumped. One of the candles on the mantel was blown out, and the little machine suddenly swung round, became indistinct, and was seen as a ghost for a second perhaps, as an eddy of faintly glittering brass and ivory; and it was gone — vanished!
The Psychologist recovered from his stupor and suddenly looked under the table.
We stared at each other. “Do you seriously believe that machine has travelled into time?” said the Medical Man.
“You mean to say that machine has travelled into the future?” said Filby.
“Into the future or the past” said Time Traveler.
After an interval, the Psychologist had an inspiration. “It must have gone into the past if it has gone anywhere,” he said.
“Because I presume that it has not moved in space and if it travelled into the future it would still be here all this time since it must have travelled through this time.”
“But,” said I, “If it travelled into the past it would have been visible when we came first into this room; and last Thursday when we were here; and the Thursday before that; and so forth!”
“That’s a simple point of psychology. It’s plain enough and helps the paradox delightfully. We cannot see it, nor can we appreciate this machine, any more than we can speak of a wheel spinning, or a bullet flying through the air. If it is travelling through time fifty times faster than we are, if it gets through a minute while we get through a second, the impression it creates will, of course, only be one-fiftieth of what it would make if it were not travelling in time.” He passed his hand through the space in which the machine had been.
“Would you like to see the Time Machine itself?” asked the Time Traveler. And therewith, taking the lamp in his hand, he led the way down the long, draughty corridor to his laboratory. “Look here,” said the Medical Man, “are you perfectly serious? Or is this a trick, like that ghost you showed us last Christmas?”
“Upon that machine,” said the Time Traveller, holding the lamp aloft, “I intend to explore time. Is that plain? I was never more serious in my life.
” None of us quite knew how to take it.
I caught Filby’s eye over the shoulder of the Medical Man, and he winked at me solemnly."
Q. According to the passage, which of the following statements cannot be considered true?
"The Psychologist seemed about to speak to me but changed his mind. Then the Time Traveler put forth his finger towards the lever. “No,” he said suddenly. “Lend me your hand.” He took the Psychologist’s hand in his own and told him to put out his forefinger. So that it was the Psychologist himself who sent forth the model Time Machine on its interminable voyage. We all saw the lever turn. I am absolutely certain there was no trickery. There was a breath of wind, and the lamp flame jumped. One of the candles on the mantel was blown out, and the little machine suddenly swung round, became indistinct, and was seen as a ghost for a second perhaps, as an eddy of faintly glittering brass and ivory; and it was gone — vanished!
The Psychologist recovered from his stupor and suddenly looked under the table.
We stared at each other. “Do you seriously believe that machine has travelled into time?” said the Medical Man.
“You mean to say that machine has travelled into the future?” said Filby.
“Into the future or the past” said Time Traveler.
After an interval, the Psychologist had an inspiration. “It must have gone into the past if it has gone anywhere,” he said.
“Because I presume that it has not moved in space and if it travelled into the future it would still be here all this time since it must have travelled through this time.”
“But,” said I, “If it travelled into the past it would have been visible when we came first into this room; and last Thursday when we were here; and the Thursday before that; and so forth!”
“That’s a simple point of psychology. It’s plain enough and helps the paradox delightfully. We cannot see it, nor can we appreciate this machine, any more than we can speak of a wheel spinning, or a bullet flying through the air. If it is travelling through time fifty times faster than we are, if it gets through a minute while we get through a second, the impression it creates will, of course, only be one-fiftieth of what it would make if it were not travelling in time.” He passed his hand through the space in which the machine had been.
“Would you like to see the Time Machine itself?” asked the Time Traveler. And therewith, taking the lamp in his hand, he led the way down the long, draughty corridor to his laboratory. “Look here,” said the Medical Man, “are you perfectly serious? Or is this a trick, like that ghost you showed us last Christmas?”
“Upon that machine,” said the Time Traveller, holding the lamp aloft, “I intend to explore time. Is that plain? I was never more serious in my life.
” None of us quite knew how to take it.
I caught Filby’s eye over the shoulder of the Medical Man, and he winked at me solemnly."
Q. What is the most suitable title for the above passage?
[1]Studies of brain evolution are compelling because of their implications for understanding human evolution. [2]Consequently, researchers are motivated by a desire to find the causes of intelligence. [3]What is intelligence? [4]It is inevitably described with respect to human attributes; we consider ourselves intelligent, and we therefore compare other species to ourselves. [5]This view is legitimized by the fact that humans do have very sophisticated brains, exhibit extraordinarily complex behavior, and cope well in novel situations, generalizing from one problem to another.
[6]Unfortunately, criteria applicable to humans are not necessarily appropriate for evaluating traits of other organisms. [7]There is no basis for the assumption that all intelligence is human-like intelligence, nor even for the preconception that all primate intelligence is human-like. [8]To say that intellectual prowess is comparative across species and to use humans as the basis for comparison is a continuation of pre-Darwinian ideas of a scala naturae dealing with intelligence. [9]If ranking species in a single phylogenetic line according to criteria based on the extant member is questionable, then certainly, since ecological conditions and selection pressures change over time, ranking contemporary species separated by millions of years of evolution based on the traits exhibited by one is unjustifiable. [10]To assume a continuum of intelligence across today's species is incompatible with an evolutionary perspective, and this preconception must not be allowed to guide studies of brain evolution. [11]The information-processing systems of different animals have been designed to respond to different stimuli, diverse "cognitive substrates," and therefore expectations of an interspecific regularity between these IPS and various other body measures are ill-conceived.
[12]What # lacking # a good definition # intelligence that will allow us # say something # how an animal copes # its own ecology and not how closely # approximates human behavior. [13]There are undeniable trends in the history of life -- towards larger brains in mammals and larger neocortices in primates -- but to generalize correlations of these trends into a concept of intelligence should not be attempted until an accurate definition is developed. [14]Until that time, the most that comparative brain size studies can do is demonstrate correlations and thereby pose questions for scientists who focus on the evolution of species with one of these correlated characteristics.
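Sentence [14] notes that, for now, comparative brain size studies can at most demonstrate correlations. As a purely illustrative aside (none of this is from the passage; the species labels and measurements below are hypothetical placeholders), the sketch shows roughly what "demonstrating a correlation" between two such measures looks like computationally:

```python
# Illustrative only: a hypothetical comparative data set and a Pearson
# correlation between log brain mass and log body mass across species.
import math

# Hypothetical (species, brain mass in g, body mass in kg) records.
data = [
    ("species_a", 4.8, 3.5),
    ("species_b", 95.0, 52.0),
    ("species_c", 410.0, 36.0),
    ("species_d", 1350.0, 65.0),
    ("species_e", 7.6, 5.1),
]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Log-transform both measures, as comparative brain/body studies commonly do.
log_brain = [math.log10(brain) for _, brain, _ in data]
log_body = [math.log10(body) for _, _, body in data]

r = pearson(log_brain, log_body)
print(f"correlation between log brain mass and log body mass: r = {r:.2f}")
# A strong r shows only an association across these species; as the passage
# argues, it does not by itself define or rank "intelligence".
```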
Q. The initial definition of ‘intelligence’ is given with respect to humans. Why is this considered acceptable to some?
Q. The primary function of the passage is to
Q. The author’s suggestion about brain studies towards the end of the passage is
Q. What is meant by ‘scala naturae’ in the passage?
Q. Which set of words below contains the correct antonyms for all of the following words: sophisticated, continuation, contemporary, diverse?
Q. Which of the following contains the correct sequence of missing words in sentence [12]? (Missing words are indicated by ‘#’.)
[1]Part of the confidence, with which artificial intelligence researchers view the prospects of their field stems from the materialist assumptions they make. [2]One is that "mind" is simply a name for the information-processing activity of the brain. Another is that the brain is a physical entity that acts according to the laws of biochemistry and is not influenced by any irreducible "soul" or other unitary, purely mental entity that is incapable of analysis as a causal sequence of elementary biochemical events. [3]This broadly accepted view, together with the rapidly mounting mass of information concerning nervous system physiology, microanatomy, and signaling behavior and with the current technology-based push to construct analogous computing systems involving thousands of elements acting in parallel, has encouraged a shift in emphasis among AI researchers that has come to be identified as "the new connectionism."
[4]The emphases that characterizes this school of thought are as follows:
[5]Firstly, the brain operates not as a serial computer of conventional type but in enormously parallel fashion. [6]The parallel functioning of hundreds of thousands or millions of neurons in the brain's subtle information-extraction processes attains speed. [7]Coherent percepts are formed in times that exceed the elementary reaction times of single neurons by little more than a factor of ten. [8]Especially for basic perceptual processes like sight, this observation rules out iterative forms of information processing that would have to scan incoming data serially or pass it through many intermediate processing stages. [9]Since extensive serial symbolic search operations of this type do not seem to characterize the functioning of the senses, the assumption (typical for much of the AI-inspired cognitive science speculation of the 1960-80 period) that serial search underlies various higher cognitive functions becomes suspect.
[10]Secondly, within the brain, knowledge is stored not in any form resembling a conventional computer program but structurally, as distributed patterns of excitatory and inhibitory synaptic strengths whose relative sizes determine the flow of neural responses that constitutes perception and thought.
[11]AI researchers developing these views have been drawn to involvement in neuroscience by the hope of being able to contribute theoretical insights that could give meaning to the rapidly growing, but still bewildering, mass of empirical data being gathered by experimental neuroscientists (many of whom regard theoretical speculation with more than a little disdain). [12]These AI researchers hope to combine clues drawn from experiment with the computer scientists' practiced ability to analyze complex external functions into patterns of elementary actions. [13]By assuming some general form for the computational activities characteristic of these actions, they hope to guess something illuminating about the way in which the perceptual and cognitive workings of the brain arise.
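As a study aid only (this is not part of the passage and not the researchers' actual models), the sketch below illustrates the two emphases in sentences [5] and [10]: many simple units whose responses depend only on a shared input, so they could in principle act simultaneously, with "knowledge" held solely in the signs and relative sizes of their connection weights. All weights and inputs are made-up placeholder values.

```python
# Illustrative sketch of a "connectionist" layer: knowledge lives in the
# weights, not in stored program instructions. All numbers are invented.

def unit_response(inputs, weights, threshold=0.0):
    """One unit: weighted sum of its inputs. Positive weights act as excitatory
    connections, negative weights as inhibitory ones; the unit fires (returns 1)
    only if the summed input clears the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Each row is one unit's distributed pattern of synaptic strengths.
weights_per_unit = [
    [0.9, -0.4, 0.2],   # excited by feature 0, inhibited by feature 1
    [-0.3, 0.8, 0.5],
    [0.1, 0.1, -0.9],
]

input_pattern = [1.0, 0.0, 1.0]  # a hypothetical stimulus encoding

# Every unit depends only on the shared input and its own weights, so the
# responses below could all be computed at the same time rather than serially.
responses = [unit_response(input_pattern, w) for w in weights_per_unit]
print("pattern of responses:", responses)
# Changing the relative sizes of the weights changes which units respond,
# i.e. what the layer "knows"; there is no conventional program to inspect.
```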
Q. According to the AI researchers, which of the following is (are) true about the mind?
I. The mind can be analysed as a causal sequence of elementary biochemical events.
II. The functioning of the senses cannot be performed through iterative forms of information processing.
III. Knowledge is stored in the brain in the form of a traditional computer system.
Q. Which of the following best explains the organization of the paragraph?
Q. Which of the following is the best title of the passage?
Q. Neuroscientists would most likely agree to which of the following?
Q. Which of the following is the meaning of the word "illuminating" as used in sentence [13] of the passage?
Q. All the sentences in the above passage are grammatically correct in the context of the passage, except -
Since the late 1970’s, faced with severe loss of market share in dozens of industries, manufacturers in the US have been trying to improve productivity—and therefore enhance their international competitiveness—through cost-cutting programs. (Cost-cutting here is defined as raising labor output while holding the amount of labor constant.) However, from 1978 through 1982, productivity—the value of goods manufactured divided by the amount of labor—did not improve; and while the results were better in the business upturn of the three years following, they ran 25 percent lower than productivity improvements during earlier, post-1945 upturns. ##At the same time, it became clear that the harder manufacturers worked to implement cost-cutting, the more they lost their competitive edge.
When I recently visited 25 companies, it became clear to me that the cost-cutting approach to increasing productivity is fundamentally flawed. Manufacturing regularly observes a “40, 40, 20” rule. Roughly 40 percent of any manufacturing-based competitive advantage derives from long-term changes in manufacturing structure (decisions about the number, size, location, and capacity of facilities) and in approaches to materials. Another 40 percent comes from major changes in equipment and process technology. The final 20 percent rests on implementing conventional cost-cutting. This does not mean cost-cutting should not be tried. Approaches like simplifying jobs and retraining employees to work smarter, not harder, do produce results. But these tools quickly reach the limits of what they can contribute.
The cost-cutting approach hinders innovation and discourages creative people. An industry can easily become a prisoner of its own investments in cost-cutting techniques, reducing its ability to develop new products. Managers under pressure to maximize cost-cutting will resist innovation because they know that more fundamental changes in processes or systems will wreak havoc with the results on which they are measured. Production managers have always seen their job as one of minimizing costs and maximizing output. This dimension of performance has created a penny-pinching, mechanistic culture in most factories that has kept away creative managers. Successful companies have overcome this problem by developing and implementing a strategy that focuses on the manufacturing structure and on equipment and process technology. In one company, a manufacturing strategy that allowed different areas of the factory to specialize in different markets replaced the conventional cost-cutting approach; within three years the company regained its competitive advantage. Together with such strategies, successful companies are also encouraging managers to focus on a wider set of objectives besides cutting costs. There is hope for manufacturing, but it clearly rests on a different way of managing.
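As an illustrative aside with made-up numbers (not drawn from the passage), the snippet below works through the passage's definition of productivity, the value of goods manufactured divided by the amount of labor, and its rough "40, 40, 20" attribution of manufacturing-based competitive advantage.

```python
# Illustrative arithmetic only; the dollar and labor figures are hypothetical.

goods_value = 12_000_000   # hypothetical value of goods manufactured, in dollars
labor_hours = 400_000      # hypothetical amount of labor, in hours

productivity = goods_value / labor_hours
print(f"productivity: ${productivity:.2f} of output per labor hour")

# The passage's rough split of any manufacturing-based competitive advantage.
advantage_sources = {
    "manufacturing structure and approaches to materials": 0.40,
    "equipment and process technology": 0.40,
    "conventional cost-cutting": 0.20,
}
for source, share in advantage_sources.items():
    print(f"{share:.0%} from {source}")
# Implication: a firm that works only on cost-cutting is competing for, at most,
# the final 20 percent of the available advantage.
```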
Q. As inferred from the first paragraph, the manufacturers expected that the measures they implemented would
Q. The primary function of the first paragraph is to
Since the late 1970’s, faced with severe loss of market share in dozens of industries, manufacturers in the US have been trying to improve productivity—and therefore enhance their international competitiveness—through cost-cutting programs. (Cost-cutting here is defined as raising labor output while holding the amount of labor constant.) However, from 1978 through 1982, productivity—the value of goods manufactured divided by the amount of labor—did not improve; and while the results were better in the business upturn of the three years following, they ran 25 percent lower than productivity improvements during earlier, post-1945 upturns. At the same time, it became clear that the harder manufacturers worked to implement cost-cutting, the more they lost their competitive edge.
When I recently visited 25 companies, it became clear to me that the cost-cutting approach to increasing productivity is fundamentally flawed. Manufacturing regularly observes a “40, 40, 20” rule. Roughly 40 percent of any manufacturing-based competitive advantage derives from long-term changes in manufacturing structure (decisions about the number, size, location, and capacity of facilities) and in approaches to materials. Another 40 percent comes from major changes in equipment and process technology. The final 20 percent rests on implementing conventional cost-cutting. This does not mean cost-cutting should not be tried. Approaches like simplifying jobs and retraining employees to work smarter, not harder, do produce results. But the tools quickly reach the limits of what they can contribute.
The cost-cutting approach hinders innovation and discourages creative people. An industry can easily become a prisoner of its own investments in cost-cutting techniques, reducing its ability to develop new products. Managers under pressure to maximize cost-cutting will resist innovation because they know that more fundamental changes in processes or systems will wreak havoc with the results on which they are measured. Production managers have always seen their job as one of minimizing costs and maximizing output. This dimension of performance has created a penny-pinching, mechanistic culture in most factories that has kept away creative managers. Successful companies have overcome this problem by developing and implementing a strategy that focuses on the manufacturing structure and on equipment and process technology. In one company a manufacturing strategy that allowed different areas of the factory to specialize in different markets replaced the conventional cost-cutting approach; within three years the company regained its competitive advantage. Together with such strategies, successful companies are also encouraging managers to focus on a wider set of objectives besides cutting costs. There is hope for manufacturing, but it clearly rests on a different way of managing.
Q. The author’s attitude toward the culture in most factories is best described as
Since the late 1970’s, faced with severe loss of market share in dozens of industries, manufacturers in the US have been trying to improve productivity—and therefore enhance their international competitiveness—through cost-cutting programs. (Cost-cutting here is defined as raising labor output while holding the amount of labor constant.) However, from 1978 through 1982, productivity—the value of goods manufactured divided by the amount of labor—did not improve; and while the results were better in the business upturn of the three years following, they ran 25 percent lower than productivity improvements during earlier, post-1945 upturns. At the same time, it became clear that the harder manufacturers worked to implement cost-cutting, the more they lost their competitive edge.
When I recently visited 25 companies, it became clear to me that the cost-cutting approach to increasing productivity is fundamentally flawed. Manufacturing regularly observes a “40, 40, 20” rule. Roughly 40 percent of any manufacturing-based competitive advantage derives from long-term changes in manufacturing structure (decisions about the number, size, location, and capacity of facilities) and in approaches to materials. Another 40 percent comes from major changes in equipment and process technology. The final 20 percent rests on implementing conventional cost-cutting. This does not mean cost-cutting should not be tried. Approaches like simplifying jobs and retraining employees to work smarter, not harder, do produce results. But the tools quickly reach the limits of what they can contribute.
The cost-cutting approach hinders innovation and discourages creative people. An industry can easily become a prisoner of its own investments in cost-cutting techniques, reducing its ability to develop new products. Managers under pressure to maximize cost-cutting will resist innovation because they know that more fundamental changes in processes or systems will wreak havoc with the results on which they are measured. Production managers have always seen their job as one of minimizing costs and maximizing output. This dimension of performance has created a penny-pinching, mechanistic culture in most factories that has kept away creative managers. Successful companies have overcome this problem by developing and implementing a strategy that focuses on the manufacturing structure and on equipment and process technology. In one company a manufacturing strategy that allowed different areas of the factory to specialize in different markets replaced the conventional cost-cutting approach; within three years the company regained its competitive advantage. Together with such strategies, successful companies are also encouraging managers to focus on a wider set of objectives besides cutting costs. There is hope for manufacturing, but it clearly rests on a different way of managing.
Q. In the passage, the author includes all of the following EXCEPT
Since the late 1970’s, faced with severe loss of market share in dozens of industries, manufacturers in the US have been trying to improve productivity—and therefore enhance their international competitiveness—through cost-cutting programs. (Cost-cutting here is defined as raising labor output while holding the amount of labor constant.) However, from 1978 through 1982, productivity—the value of goods manufactured divided by the amount of labor—did not improve; and while the results were better in the business upturn of the three years following, they ran 25 percent lower than productivity improvements during earlier, post-1945 upturns. At the same time, it became clear that the harder manufacturers worked to implement cost-cutting, the more they lost their competitive edge.
When I recently visited 25 companies, it became clear to me that the cost-cutting approach to increasing productivity is fundamentally flawed. Manufacturing regularly observes a “40, 40, 20” rule. Roughly 40 percent of any manufacturing-based competitive advantage derives from long-term changes in manufacturing structure (decisions about the number, size, location, and capacity of facilities) and in approaches to materials. Another 40 percent comes from major changes in equipment and process technology. The final 20 percent rests on implementing conventional cost-cutting. This does not mean cost-cutting should not be tried. Approaches like simplifying jobs and retraining employees to work smarter, not harder, do produce results. But the tools quickly reach the limits of what they can contribute.
The cost-cutting approach hinders innovation and discourages creative people. An industry can easily become a prisoner of its own investments in cost-cutting techniques, reducing its ability to develop new products. Managers under pressure to maximize cost-cutting will resist innovation because they know that more fundamental changes in processes or systems will wreak havoc with the results on which they are measured. Production managers have always seen their job as one of minimizing costs and maximizing output. This dimension of performance has created a penny-pinching, mechanistic culture in most factories that has kept away creative managers. Successful companies have overcome this problem by developing and implementing a strategy that focuses on the manufacturing structure and on equipment and process technology. In one company a manufacturing strategy that allowed different areas of the factory to specialize in different markets replaced the conventional cost-cutting approach; within three years the company regained its competitive advantage. Together with such strategies, successful companies are also encouraging managers to focus on a wider set of objectives besides cutting costs. There is hope for manufacturing, but it clearly rests on a different way of managing.
Q. The author suggests that implementing conventional cost-cutting as a way of increasing manufacturing competitiveness is a strategy that
Since the late 1970’s, faced with severe loss of market share in dozens of industries, manufacturers in the US have been trying to improve productivity—and therefore enhance their international competitiveness—through cost-cutting programs. (Cost-cutting here is defined as raising labor output while holding the amount of labor constant.) However, from 1978 through 1982, productivity—the value of goods manufactured divided by the amount of labor—did not improve; and while the results were better in the business upturn of the three years following, they ran 25 percent lower than productivity improvements during earlier, post-1945 upturns. **At the same time, it became clear that the harder manufacturers worked to implement cost-cutting, the more they lost their competitive edge.**
When I recently visited 25 companies, it became clear to me that the cost-cutting approach to increasing productivity is fundamentally flawed. Manufacturing regularly observes a “40, 40, 20” rule. Roughly 40 percent of any manufacturing-based competitive advantage derives from long-term changes in manufacturing structure (decisions about the number, size, location, and capacity of facilities) and in approaches to materials. Another 40 percent comes from major changes in equipment and process technology. The final 20 percent rests on implementing conventional cost-cutting. This does not mean cost-cutting should not be tried. Approaches like simplifying jobs and retraining employees to work smarter, not harder, do produce results. But the tools quickly reach the limits of what they can contribute.
The cost-cutting approach hinders innovation and discourages creative people. An industry can easily become a prisoner of its own investments in cost-cutting techniques, reducing its ability to develop new products. Managers under pressure to maximize cost-cutting will resist innovation because they know that more fundamental changes in processes or systems will wreak havoc with the results on which they are measured. Production managers have always seen their job as one of minimizing costs and maximizing output. This dimension of performance has created a penny-pinching, mechanistic culture in most factories that has kept away creative managers. Successful companies have overcome this problem by developing and implementing a strategy that focuses on the manufacturing structure and on equipment and process technology. In one company a manufacturing strategy that allowed different areas of the factory to specialize in different markets replaced the conventional cost-cutting approach; within three years the company regained its competitive advantage. Together with such strategies, successful companies are also encouraging managers to focus on a wider set of objectives besides cutting costs. There is hope for manufacturing, but it clearly rests on a different way of managing.
Q. Which figure of speech is used in the sentence enclosed within **?
Agriculture has no single, simple origin. A wide variety of plants and animals have been independently domesticated at different times and in numerous places. The first agriculture appears to have developed at the closing of the last Pleistocene glacial period, or Ice Age (about 11,700 years ago). At that time temperatures warmed, glaciers melted, sea levels rose, and ecosystems throughout the world reorganized. The changes were more dramatic in temperate regions than in the tropics. Although global climate change played a role in the development of agriculture, it does not account for the complex and diverse cultural responses that ensued, the specific timing of the appearance of agricultural communities in different regions, or the specific regional impact of climate change on local environments. By studying populations that did not develop intensive agriculture or certain cultigens, such as wheat and rice, archaeologists narrow the search for causes. For instance, Australian Aborigines and many of the Native American peoples of western North America developed complex methods to manage diverse sets of plants and animals, often including (but not limited to) cultivation. These practices may be representative of activities common in some parts of the world before 15,000 years ago. Plant and animal management was and is a familiar concept within hunting and gathering cultures, but it took on new dimensions as natural selection and mutation produced phenotypes that were increasingly reliant upon people. Because some resource management practices, such as intensively tending non-domesticated nut-bearing trees, bridge the boundary between foraging and farming, archaeologists investigating agricultural origins generally frame their work in terms of a continuum of subsistence practices.
Notably, agriculture does not appear to have developed in particularly impoverished settings; domestication does not seem to have been a response to food scarcity or deprivation. In fact, quite the opposite appears to be the case. It was once thought that human population pressure was a significant factor in the process, but by the late 20th century research indicated that populations rose significantly only after people had established food production. Instead, it is thought that—at least initially—the new animals and plants that were developed through domestication may have helped to maintain ways of life that emphasized hunting and gathering by providing insurance in lean seasons. When considered in terms of food management, dogs may have been initially domesticated as hunting companions, while meat and milk could be obtained more reliably from herds of sheep, goats, reindeer, or cattle than from their wild counterparts or other game animals. Domestication made resource planning [X] more predictable exercise in regions that combined extreme seasonal variation and rich natural resource abundance.
Q. Which of the following is not true regarding the origins of agriculture?
Agriculture has no single, simple origin. A wide variety of plants and animals have been independently domesticated at different times and in numerous places. The first agriculture appears to have developed at the closing of the last Pleistocene glacial period, or Ice Age (about 11,700 years ago). At that time temperatures warmed, glaciers melted, sea levels rose, and ecosystems throughout the world reorganized. The changes were more dramatic in temperate regions than in the tropics. Although global climate change played a role in the development of agriculture, it does not account for the complex and diverse cultural responses that ensued, the specific timing of the appearance of agricultural communities in different regions, or the specific regional impact of climate change on local environments. By studying populations that did not develop intensive agriculture or certain cultigens, such as wheat and rice, archaeologists narrow the search for causes. For instance, Australian Aborigines and many of the Native American peoples of western North America developed complex methods to manage diverse sets of plants and animals, often including (but not limited to) cultivation. These practices may be representative of activities common in some parts of the world before 15,000 years ago. Plant and animal management was and is a familiar concept within hunting and gathering cultures, but it took on new dimensions as natural selection and mutation produced phenotypes that were increasingly reliant upon people. Because some resource management practices, such as intensively tending non-domesticated nut-bearing trees, bridge the boundary between foraging and farming, archaeologists investigating agricultural origins generally frame their work in terms of a continuum of subsistence practices.
Notably, agriculture does not appear to have developed in particularly impoverished settings; domestication does not seem to have been a response to food scarcity or deprivation. In fact, quite the opposite appears to be the case. It was once thought that human population pressure was a significant factor in the process, but by the late 20th century research indicated that populations rose significantly only after people had established food production. Instead, it is thought that—at least initially—the new animals and plants that were developed through domestication may have helped to maintain ways of life that emphasized hunting and gathering by providing insurance in lean seasons. When considered in terms of food management, dogs may have been initially domesticated as hunting companions, while meat and milk could be obtained more reliably from herds of sheep, goats, reindeer, or cattle than from their wild counterparts or other game animals. Domestication made resource planning [X] more predictable exercise in regions that combined extreme seasonal variation and rich natural resource abundance.
Q. Why did archaeologists study populations that had not developed intensive agriculture?
Agriculture has no single, simple origin. A wide variety of plants and animals have been independently domesticated at different times and in numerous places. The first agriculture appears to have developed at the closing of the last Pleistocene glacial period, or Ice Age (about 11,700 years ago). At that time temperatures warmed, glaciers melted, sea levels rose, and ecosystems throughout the world reorganized. The changes were more dramatic in temperate regions than in the tropics. Although global climate change played a role in the development of agriculture, it does not account for the complex and diverse cultural responses that ensued, the specific timing of the appearance of agricultural communities in different regions, or the specific regional impact of climate change on local environments. By studying populations that did not develop intensive agriculture or certain cultigens, such as wheat and rice, archaeologists narrow the search for causes. For instance, Australian Aborigines and many of the Native American peoples of western North America developed complex methods to manage diverse sets of plants and animals, often including (but not limited to) cultivation. These practices may be representative of activities common in some parts of the world before 15,000 years ago. Plant and animal management was and is a familiar concept within hunting and gathering cultures, but it took on new dimensions as natural selection and mutation produced phenotypes that were increasingly reliant upon people. Because some resource management practices, such as intensively tending non-domesticated nut-bearing trees, bridge the boundary between foraging and farming, archaeologists investigating agricultural origins generally frame their work in terms of a continuum of subsistence practices.
Notably, agriculture does not appear to have developed in particularly impoverished settings; domestication does not seem to have been a response to food scarcity or deprivation. In fact, quite the opposite appears to be the case. It was once thought that human population pressure was a significant factor in the process, but by the late 20th century research indicated that populations rose significantly only after people had established food production. Instead, it is thought that—at least initially—the new animals and plants that were developed through domestication may have helped to maintain ways of life that emphasized hunting and gathering by providing insurance in lean seasons. When considered in terms of food management, dogs may have been initially domesticated as hunting companions, while meat and milk could be obtained more reliably from herds of sheep, goats, reindeer, or cattle than from their wild counterparts or other game animals. Domestication made resource planning [X] more predictable exercise in regions that combined extreme seasonal variation and rich natural resource abundance.
Q. Which of the following could be inferred from the passage above?
Agriculture has no single, simple origin. A wide variety of plants and animals have been independently domesticated at different times and in numerous places. The first agriculture appears to have developed at the closing of the last Pleistocene glacial period, or Ice Age (about 11,700 years ago). At that time temperatures warmed, glaciers melted, sea levels rose, and ecosystems throughout the world reorganized. The changes were more dramatic in temperate regions than in the tropics. Although global climate change played a role in the development of agriculture, it does not account for the complex and diverse cultural responses that ensued, the specific timing of the appearance of agricultural communities in different regions, or the specific regional impact of climate change on local environments. By studying populations that did not develop intensive agriculture or certain cultigens, such as wheat and rice, archaeologists narrow the search for causes. For instance, Australian Aborigines and many of the Native American peoples of western North America developed complex methods to manage diverse sets of plants and animals, often including (but not limited to) cultivation. These practices may be representative of activities common in some parts of the world before 15,000 years ago. Plant and animal management was and is a familiar concept within hunting and gathering cultures, but it took on new dimensions as natural selection and mutation produced phenotypes that were increasingly reliant upon people. Because some resource management practices, such as intensively tending non-domesticated nut-bearing trees, bridge the boundary between foraging and farming, archaeologists investigating agricultural origins generally frame their work in terms of a continuum of subsistence practices.
Notably, agriculture does not appear to have developed in particularly impoverished settings; domestication does not seem to have been a response to food scarcity or deprivation. In fact, quite the opposite appears to be the case. It was once thought that human population pressure was a significant factor in the process, but by the late 20th century research indicated that populations rose significantly only after people had established food production. Instead, it is thought that—at least initially—the new animals and plants that were developed through domestication may have helped to maintain ways of life that emphasized hunting and gathering by providing insurance in lean seasons. When considered in terms of food management, dogs may have been initially domesticated as hunting companions, while meat and milk could be obtained more reliably from herds of sheep, goats, reindeer, or cattle than from their wild counterparts or other game animals. Domestication made resource planning [X] more predictable exercise in regions that combined extreme seasonal variation and rich natural resource abundance.
Q. Which of the following is the author most likely to agree with?
Agriculture has no single, simple origin. A wide variety of plants and animals have been independently domesticated at different times and in numerous places. The first agriculture appears to have developed at the closing of the last Pleistocene glacial period, or Ice Age (about 11,700 years ago). At that time temperatures warmed, glaciers melted, sea levels rose, and ecosystems throughout the world reorganized. The changes were more dramatic in temperate regions than in the tropics. Although global climate change played a role in the development of agriculture, it does not account for the complex and diverse cultural responses that ensued, the specific timing of the appearance of agricultural communities in different regions, or the specific regional impact of climate change on local environments. By studying populations that did not develop intensive agriculture or certain cultigens, such as wheat and rice, archaeologists narrow the search for causes. For instance, Australian Aborigines and many of the Native American peoples of western North America developed complex methods to manage diverse sets of plants and animals, often including (but not limited to) cultivation. These practices may be representative of activities common in some parts of the world before 15,000 years ago. Plant and animal management was and is a familiar concept within hunting and gathering cultures, but it took on new dimensions as natural selection and mutation produced phenotypes that were increasingly reliant upon people. Because some resource management practices, such as intensively tending non-domesticated nut-bearing trees, bridge the boundary between foraging and farming, archaeologists investigating agricultural origins generally frame their work in terms of a continuum of subsistence practices.
Notably, agriculture does not appear to have developed in particularly impoverished settings; domestication does not seem to have been a response to food scarcity or deprivation. In fact, quite the opposite appears to be the case. It was once thought that human population pressure was a significant factor in the process, but by the late 20th century research indicated that populations rose significantly only after people had established food production. Instead, it is thought that—at least initially—the new animals and plants that were developed through domestication may have helped to maintain ways of life that emphasized hunting and gathering by providing insurance in lean seasons. When considered in terms of food management, dogs may have been initially domesticated as hunting companions, while meat and milk could be obtained more reliably from herds of sheep, goats, reindeer, or cattle than from their wild counterparts or other game animals. Domestication made resource planning [X] more predictable exercise in regions that combined extreme seasonal variation and rich natural resource abundance.
Q. Which of the following words is not similar to the word ‘impoverished’?
Agriculture has no single, simple origin. A wide variety of plants and animals have been independently domesticated at different times and in numerous places. The first agriculture appears to have developed at the closing of the last Pleistocene glacial period, or Ice Age (about 11,700 years ago). At that time temperatures warmed, glaciers melted, sea levels rose, and ecosystems throughout the world reorganized. The changes were more dramatic in temperate regions than in the tropics. Although global climate change played a role in the development of agriculture, it does not account for the complex and diverse cultural responses that ensued, the specific timing of the appearance of agricultural communities in different regions, or the specific regional impact of climate change on local environments. By studying populations that did not develop intensive agriculture or certain cultigens, such as wheat and rice, archaeologists narrow the search for causes. For instance, Australian Aborigines and many of the Native American peoples of western North America developed complex methods to manage diverse sets of plants and animals, often including (but not limited to) cultivation. These practices may be representative of activities common in some parts of the world before 15,000 years ago. Plant and animal management was and is a familiar concept within hunting and gathering cultures, but it took on new dimensions as natural selection and mutation produced phenotypes that were increasingly reliant upon people. Because some resource management practices, such as intensively tending non-domesticated nut-bearing trees, bridge the boundary between foraging and farming, archaeologists investigating agricultural origins generally frame their work in terms of a continuum of subsistence practices.
Notably, agriculture does not appear to have developed in particularly impoverished settings; domestication does not seem to have been a response to food scarcity or deprivation. In fact, quite the opposite appears to be the case. It was once thought that human population pressure was a significant factor in the process, but by the late 20th century research indicated that populations rose significantly only after people had established food production. Instead, it is thought that—at least initially—the new animals and plants that were developed through domestication may have helped to maintain ways of life that emphasized hunting and gathering by providing insurance in lean seasons. When considered in terms of food management, dogs may have been initially domesticated as hunting companions, while meat and milk could be obtained more reliably from herds of sheep, goats, reindeer, or cattle than from their wild counterparts or other game animals. Domestication made resource planning [X] more predictable exercise in regions that combined extreme seasonal variation and rich natural resource abundance.
Q. Which of the following would most appropriately replace the [X] in the last sentence of the passage?