Recent studies of Native American languages point to an alarming problem in the United States today. Language loss, a global phenomenon, is accelerating among indigenous groups in the United States. A large majority of Native American vernaculars are spoken only by elders, and the remainder are fast approaching that status as growing numbers of children speak only English.
To many, the predominance of one “common” language seems like an achievement of globalization, and they therefore argue that it would be wiser to spend resources on improving the English-speaking skills of Native Americans than on resuscitating fading tongues. However, no language is just a collection of words, and languages are therefore not so simply substitutable for one another. Each language is a unique tool for analyzing and synthesizing the world, and to lose such a tool is to forget a way of constructing reality, blotting out a perspective evolved over many generations. Native American languages express the ideas on which Native American cultures are anchored: a native language does not just reflect a culture; in a functional sense, it is the culture. These languages are based on entirely different histories, scientific and natural-world understandings, spiritual beliefs, and political and legal ideas. They capture concepts that do not exist in English. In essence, they are based on different realities.
Realizing the magnitude of this language loss, most indigenous tribes today are making some effort toward a language comeback. These efforts range from apprenticeship programs, which pair a fluent elder with a student, to technology such as YouTube videos of native speakers and Google Hangout video chats for live, long-distance conversations, a tool that may seem unusual given the traditional reluctance in many Native American communities to being photographed or recorded in any form. The idea is to engage the younger members of the tribe who, in their effort to fit into the more popular culture, are quickly losing ties with their unique heritage.
Which of the following statements would the author most likely agree with?
The author is primarily concerned with
Though the oft-repeated claim that the Inuit have a hundred words for snow is an exaggeration, languages really are full of charming quirks that reveal the character of a culture. Dialects of Scottish Gaelic, for instance, traditionally spoken in the Highlands and, later on, in fishing villages, have a great many very specific words for seaweed, as well as names for each of the components of a rabbit snare and a word for an egg that emerges from a hen sans shell. Unfortunately for those who find these details fascinating, languages are going extinct at an incredible clip (one dies every 14 days), and linguists are rushing around with tape recorders and word lists, trying to record at least a fragment of each before it goes. The only way the old tongues will stick around is if populations themselves decide that there is something of value in them, whether for reasons of patriotism, cultural heritage, or just to lure in some language-curious tourists. But even when public opinion favors preserving linguistic diversity, linguists are finding the task increasingly difficult.
Mathematicians can help linguists in this mission. To provide a test environment for programs that encourage the learning of endangered local languages, Anne Kandler and her colleagues decided to build a mathematical model of the speakers of Scottish Gaelic. This was an apposite choice because the local population was already becoming increasingly conscious of the cultural value of its language, and statistics on Gaelic speakers were readily available. The model the mathematicians built uses not only statistics such as the number of people speaking the language, the number of polyglots, and the rate of change in these numbers, but also figures representing the economic value of the language and its perceived cultural value. These numbers were substituted into the model’s differential equations to find the number of new Gaelic speakers required annually to halt the dwindling of the Gaelic-speaking population. The estimate produced by Kandler’s research helped the national Gaelic Development Agency formulate an effective plan for preserving the language.
Many languages such as Quechua, Chinook, and Istrian Vlashki can be saved using such mathematical models, and results from the models’ equations can be useful in planning preservation strategies. Similarly, mathematical analysis of languages that have survived against long odds can provide insights applicable to saving other endangered languages.
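As an illustration of the modeling approach described above, here is a minimal sketch in Python. These are not Kandler’s actual equations: the three-compartment structure (Gaelic monolinguals, bilinguals, English monolinguals), the single “status” parameter standing in for the language’s combined economic and cultural value, and every numeric value below are simplifying assumptions. The scan looks for the smallest annual intake of new learners that keeps the Gaelic-speaking fraction from shrinking, the quantity the passage says the model was used to estimate.

import numpy as np
from scipy.integrate import odeint

def language_shift(y, t, s, c, r):
    # y = (g, b): fractions of Gaelic monolinguals and bilinguals;
    # English monolinguals are the remainder e = 1 - g - b.
    # s: perceived status of Gaelic (economic + cultural value), 0..1
    # c: overall rate at which speakers shift language use
    # r: annual intake of new Gaelic learners, as a population fraction
    g, b = y
    e = 1.0 - g - b
    dg = c * s * b - c * (1 - s) * g                 # gains from bilinguals vs. erosion to bilingualism
    db = c * (1 - s) * g - c * (1 - s) * b + r * e   # erosion into and out of bilingualism, plus recruits
    return [dg, db]

y0 = [0.01, 0.50]              # hypothetical start: 1% monolingual Gaelic, 50% bilingual
t = np.linspace(0, 100, 101)   # a century, in yearly steps

# With status s < 0.5 the Gaelic-speaking fraction g + b declines unless
# recruitment compensates; scan r for the smallest value that stabilizes it.
for r in np.linspace(0.0, 0.05, 51):
    g, b = odeint(language_shift, y0, t, args=(0.4, 0.1, r)).T
    if g[-1] + b[-1] >= y0[0] + y0[1]:
        print(f"stability needs roughly {r:.3f} of the population as new learners per year")
        break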
The passage is primarily concerned with which of the following?
Which of the following best describes the relation of the first paragraph to the passage as a whole?
The author’s conclusion that ‘languages such as Quechua, Chinook, and Istrian Vlashki can be saved using such mathematical models’ (beginning of the last paragraph) is most weakened if which of the following is found to be true?
The role of nurturing in determining one’s behavioral traits has been hotly contested. Historically, geneticists believed that behavioral traits are inherited. After all, many properties of the brain are genetically organized and don't depend on information coming in from the senses. Since active genes are essentially inherited, most traditional geneticists believe that the nurturing environment plays little role in shaping one’s behavioral traits.
However, a new line of research indicated that methyl groups can activate dormant genes, bringing about a slew of changes much later in a person’s life. The methyl group works like a placeholder in a cookbook, attaching to the DNA within each cell to select only those recipes (er, genes) necessary for that particular cell’s proteins, telling the DNA what kind of cells to form. The first such observation came from cancer research, in which methyl groups, activated by causes ranging from exposure to certain chemicals to changes in diet, set off a cascade of cellular changes resulting in cancer. Because methyl groups are attached to the genes, residing beside but separate from the double-helix DNA code, their study is dubbed epigenetics, “epi” being Greek for “outer” or “above.”
Behavioral geneticists, encouraged by this discovery, proved that traumatic experiences such as child neglect, drug abuse, or other severe stresses also set off epigenetic changes to the DNA inside the neurons of a person’s brain, permanently altering behavior. Similarly, through multivariate analysis, they proved that identical twins, in scenarios where one twin has gone through a life-altering event, can have vastly different reactions to a stressful situation.
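As a toy illustration of the passage’s cookbook analogy (the gene names are made up and the tagging scheme is a deliberate simplification, not real biology), the same genome can yield different behavior depending on which genes the methyl “bookmarks” select:

GENOME = {
    "gene_A": "growth",
    "gene_B": "stress response",
    "gene_C": "repair",
}

def expressed(genome, methyl_tags):
    # Per the analogy, methyl tags bookmark the recipes (genes) a cell
    # actually uses; genes without a tag stay dormant.
    return {gene: role for gene, role in genome.items() if gene in methyl_tags}

# Same DNA, different tag sets -> different cellular behavior.
neuron_typical = expressed(GENOME, methyl_tags={"gene_A", "gene_C"})
# A severe stress, per the passage, can alter the tag set, and with it the
# neuron's behavior, without changing the DNA sequence itself:
neuron_after_trauma = expressed(GENOME, methyl_tags={"gene_A", "gene_B"})
print(neuron_typical)       # {'gene_A': 'growth', 'gene_C': 'repair'}
print(neuron_after_trauma)  # {'gene_A': 'growth', 'gene_B': 'stress response'}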
The primary purpose of the passage is to
Why does the author state “After all, many properties of the brain are genetically organized, and don't depend on information coming in from the senses”:
Which of the following may be inferred from the passage?
In the context of this passage, what is the importance of the example illustrating how cancer is caused?
Proverbial wisdom states that “birds of a feather flock together.” Studies have shown that people of similar geographical and educational backgrounds and functional experience are extremely likely to found companies together. Even with spousal teams excluded from the dataset, a founding team is five times more likely to be an all-male or all-female team than chance alone would predict. Founding teams are also remarkably homogeneous with regard to skills and functional backgrounds.
Homogeneity has important benefits. For the founder struggling to meet the challenges of a growing startup, selecting cofounders from among the people with whom he or she probably has important things in common is often the quickest and easiest solution. Not only does it generally take less time to find such people, but it also generally takes less time to develop effective working relationships with them. When founders share a background, they share a common language that facilitates communication, ensuring that the team begins the work relationship with a mutual understanding and can therefore skip part of the learning curve that would absorb the energies of people with very different backgrounds. Increasing homogeneity may therefore be a particularly alluring, and in some ways a particularly sensible, approach for novice founders heading into unfamiliar territory. Certainly, studies have found that the greater the heterogeneity among executive team members, the greater the risk of interpersonal conflict and the lower the group-level integration. Yet however appealing the “comfortable” and “easy” decision to found with similar cofounders may be, founders who make it may be storing up long-term problems. Teams with a wide range of pertinent functional skills may be able to build more valuable and enduring startups. Homogeneous teams, by contrast, tend to have overlapping human capital, making it more likely that the team will have redundant strengths while missing critical skills.
From the passage, which of the following cannot be inferred as a benefit of homogeneous teams?
Which of the following can be inferred about start-ups founded by homogeneous teams?
The author’s main purpose in writing the passage is to:
In 1885, the Eiffel firm, which was named after the French engineer and architect Gustave Eiffel and which had extensive experience in structural engineering, undertook a series of investigations of tall metallic piers based upon its recent experience with several railway viaducts and bridges. The most spectacular of these was the famous Garabit Viaduct, which carries a railroad some 400 feet above the valley of the Truyère in southern France. The design of this structure inspired the design of a 395-foot pier which, although never incorporated into a bridge, is said to have been the direct basis for the Eiffel Tower. Preliminary studies for a 300-meter tower were made with the intention of showcasing it at the 1889 Exposition Universelle. With an assurance born of positive knowledge, Eiffel approached the Exposition commissioners with the project in June 1886. There can be no doubt that only the singular respect with which Eiffel was regarded, not only by his profession but by the entire nation, moved the Commission to approve a plan that, in the hands of a figure of less stature, would have been considered grossly impractical.
Between this time and the commencement of the Tower’s construction at the end of January 1887, there arose one of the most persistently annoying of the numerous difficulties, both structural and social, that confronted Eiffel as the project advanced. In the wake of the initial enthusiasm, inspired on the part of the fair’s Commission by the desire to create a monument to French technological achievement, and on the part of the majority of French people by the stirring of their imagination at the magnitude of the structure, there grew a rising movement of disfavor. At the center of this movement was, not surprisingly, the intelligentsia, but objections came from prominent French people from all walks of life.
The most interesting point to be noted in retrospect about this often violent opposition is that, although every aspect of the Tower was attacked, there was remarkably little criticism of its structural feasibility, either by the engineering profession or, as seems traditionally to be the case with bold and unprecedented undertakings, by large numbers of the technically uninformed population. True, there was an undercurrent of what might be characterized as unease among many property owners in the structure’s shadow, but the most obstinate element of resistance was that which deplored the Tower as a mechanistic intrusion upon the architectural and natural beauties of Paris. This resistance voiced its fury in a flood of special newspaper editions, petitions, and manifestos signed by such lights of the fine and literary arts as De Maupassant, Gounod, Dumas fils, and others.
Based on the discussion of public opinion regarding the Eiffel Tower’s construction, it can be inferred that
Which faction does the author refer to when he mentions “undercurrent” in the last paragraph?
De Maupassant, Gounod, and Dumas fils are mentioned in the passage in order to
Which of the following is the author’s primary purpose in this passage?
Over the past 20,000 years, the average volume of the human male brain has decreased from 1,500 cubic centimeters to 1,350 cc, losing a chunk the size of a tennis ball. The female brain has shrunk by about the same proportion. If our brain keeps dwindling at this rate over the next 20,000 years, it will start to approach the size of the brain found in Homo erectus, a relative that lived half a million years ago and had a brain volume of only 1,100 cc.
Some believe the erosion of our gray matter means that modern humans are indeed getting dumber. A common measure of intelligence, the encephalization quotient or EQ, defined as the ratio of brain volume to body mass, has been found to be decreasing in the recent past. Recent studies of human fossils suggest the brain shrank more quickly than the body in near-modern times. More importantly, analysis of the genome casts doubt on the notion that modern humans are simply daintier but otherwise identical versions of our ancestors, right down to how we think and feel. Another study concluded that our present EQ is the same as that of the Cro-Magnons, our ancestors who lived 30,000 years ago in Europe and were known more for brawn than for brilliance.
On the other hand, other anthropologists such as Hawks believe that as the brain shrank, its wiring became more efficient, transforming us into quicker, more agile thinkers. They explain the shrinking by arguing that over the very period that the brain shrank, our DNA accumulated numerous adaptive mutations related to brain development and neurotransmitter systems—an indication that even as the organ got smaller, its inner workings changed.
This explanation may be plausible, considering that the brain is such a glutton for fuel that it gobbles up to 20% of all the calories we consume. To optimize this cost, evolution may be moving toward a smaller, more efficient brain that yields the most intelligence for the least energy. A boom in the human population over the last 20,000 years greatly improved the odds of such a fortuitous development, since more individuals mean a bigger gene pool and a greater chance for an unusual, advantageous mutation to arise.
The man-made product closest to the brain, the microprocessor, has seen a similar evolution. A microprocessor consists of transistors, the equivalents of the neurons that participate in decision making, connected by wires that act as messengers between them. The first microprocessors had extremely simple architectures and were not optimized for a particular set of tasks but were more general-purpose. As a result, much of the power they consumed was dissipated in internal wiring rather than in decision making. With refinements, the architectures became more and more attuned to the tasks the microprocessor most commonly needed to do. Consequently, for the same number of transistors, the amount of wiring decreased by a factor of 3 while the microprocessor’s processing speed increased by a factor of 10. While active research has yet to conclude whether the same holds true for the brain, one can only hope that the results are along the lines of the microprocessor’s.
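For concreteness, here is a quick numeric sketch of the EQ definition in the second paragraph. The brain volumes come from the passage; both body-mass figures are hypothetical, chosen only to show that EQ falls exactly when the brain shrinks faster than the body, which is what the fossil studies cited above suggest.

# EQ = brain volume / body mass, as the passage defines it.
ancient = {"brain_cc": 1500, "body_kg": 72}   # body masses are assumptions
modern = {"brain_cc": 1350, "body_kg": 68}

def eq(person):
    return person["brain_cc"] / person["body_kg"]

print(f"ancient EQ ~ {eq(ancient):.1f} cc/kg")  # ~20.8
print(f"modern  EQ ~ {eq(modern):.1f} cc/kg")   # ~19.9
# The brain shrank by 10% while the (assumed) body shrank by only ~5.6%,
# so the ratio drops: the brain shrank more quickly than the body.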
The passage suggests that the modern microprocessor is more efficient because:
In paragraph 4, lines 1 and 2, the author talks about the brain being a glutton for fuel to:
According to the passage, the relationship between encephalization quotient and brain volume is:
Which of the following, if true, would weaken the assertion that humans are getting dumber with the erosion of brain volume?