Directions: Read the following passage and answer the questions that follow:
On 19 November 1990, Boris Yeltsin gave a speech in Kyiv to announce that, after more than 300 years of rule by the Russian tsars and the Soviet ‘totalitarian regime’ in Moscow, Ukraine was free at last. Russia, he said, did not want any special role in dictating Ukraine’s future, nor did it aim to be at the centre of any future empire. Five months earlier, in June 1990, inspired by independence movements in the Baltics and the Caucasus, Yeltsin had passed a declaration of Russian sovereignty that served as a model for those of several other Soviet republics, including Ukraine. While they stopped short of demanding full separation, such statements asserted that the USSR would have only as much power as its republics were willing to give.
Russian imperial ambitions can appear to be age-old and constant. Even relatively sophisticated media often present a Kremlin drive to dominate its neighbours that seems to have passed from the tsars to Stalin, and from Stalin to Putin. So it is worth remembering that, not long ago, Russia turned away from empire. In fact, in 1990-91, it was Russian secessionism – together with separatist movements in the republics – that brought down the USSR. To defeat the Soviet leader Mikhail Gorbachev’s attempt at preserving the union, Yeltsin fused the concerns of Russia’s liberal democrats and conservative nationalists into an awkward alliance. Like Donald Trump with Make America Great Again, or Boris Johnson with Brexit, Yeltsin insisted that Russians, the Soviet Union’s dominant group, were oppressed. He called for separation from burdensome others to bring about Russian renewal.
The roots of nationalist discontent lay in Russia’s peculiar status within the Soviet Union. After the Bolsheviks took control over much of the tsarist empire’s former territory, Lenin declared ‘war to the death on Great Russian chauvinism’ and proposed to uplift the ‘oppressed nations’ on its peripheries. To combat imperial inequality, Lenin called for unity, creating a federation of republics divided by nationality. The republics forfeited political sovereignty in exchange for territorial integrity, educational and cultural institutions in their own languages, and the elevation of the local ‘titular’ nationality into positions of power. Soviet policy, following Lenin, conceived of the republics as homelands for their respective nationalities (with autonomous regions and districts for smaller nationalities nested within them). The exception was the Russian Soviet Federative Socialist Republic, or RSFSR, which remained an administrative territory not associated with any ethnic or historic ‘Russia’.
Russia was the only Soviet republic that did not have its own Communist Party, capital, or Academy of Sciences. These omissions contributed to the uneasy overlap of ‘Russian’ and ‘Soviet’.
Q. Which one of the following is not a valid inference from the passage?
Q. Which one of the following, if true, would not undermine the effectiveness of Lenin's approach to combat imperial inequality?
Q. The author would support none of the following statements about the Soviet Union and its dissolution EXCEPT that:
Q. Had the passage continued, what would the author have logically discussed next?
1. The specific implications of Yeltsin's policies on post-Soviet Russia.
2. A comparison of Russian secessionism with other historical secessionist movements.
3. The role of the Communist Party in shaping Russian identity.
4. The impact of the uneasy overlap between 'Russian' and 'Soviet' on modern Russia's political landscape.
Directions: Read the following passage and answer the questions that follow:
Imagine a vast circular chamber, with walls covered in a towering painted map of planet Earth. Picture this hall ‘like a theater, except that the circles and galleries go right round through the space usually occupied by the stage’. Enormous rings of tiered seating circle its outer walls. Imagine that working in these seats are 64,000 ‘computers’ – humans doing calculations – each preparing a different weather forecast for their designated geography.
And in the middle of the hall, on a large pulpit at the top of a tall multistorey pillar, stands the ‘man in charge’, who coordinates the scattered weather calculations from his computers into a global forecast like a ‘conductor of an orchestra’. This ‘forecast factory’ was the dream of the 20th-century English mathematician and meteorologist Lewis Fry Richardson. Following hundreds of pages of equations, velocities and data in his prosaically titled book Weather Prediction by Numerical Process (1922), he asks the reader to indulge him: ‘After so much hard reasoning, may one play with a fantasy?’ For Richardson, one of the main limitations on weather forecasting was a lack of computational capacity. But through the fantasy he could ignore practical problems and bring an entire planet into focus.
His ‘factory’ saw once-scattered local observations merging into a coherent planetary system: calculable, predictable, overseen and singular. Richardson died in 1953, the year IBM released the first mass-produced electronic computer. Though his factory never materialized exactly as he imagined it, his dream of a calculable planet now seems prophetic. By the 1960s, numerical calculation of global weather conditions had become a standardized way of recording changes in the atmosphere. Clouds and numbers seemed to crowd the sky. Since the 1960s, the scope of what Richardson called weather prediction has expanded dramatically: climate models now stretch into the deep past and future, encompassing the entirety of the Earth system rather than just the atmosphere. What is startling about this is not that our technical abilities have exceeded Richardson’s wildest dreams but the unexpected repercussions of the modern ‘forecast factory’. The calculable, predictable, overseen and singular Earth has revealed not only aeons of global weather, but a new kind of planet – and, with it, a new mode of governance. The planet, I argue, has appeared as a new kind of political object. I’m not talking about the Sun-orbiting body of the Copernican revolution, or the body that the first astronauts looked back upon in the 1960s: Buckminster Fuller’s ‘Spaceship Earth’, or Carl Sagan’s ‘lonely speck’. Those are the planets of the past millennium. I’m talking about the ‘planet’ inside ‘planetary crisis’: a planet that emerges from the realization that anthropogenic impacts are not isolated to particular areas, but integrated parts of a complex web of intersecting processes that unfold over vastly disparate timescales and across different geographies. This is the planet of the Anthropocene, of our ‘planetary emergency’ as the UN secretary-general António Guterres called it in 2020. The so-called planetary turn marks a new way of thinking about our relationship to the environment. It also signals the emergence of a distinct governable object, which suggests that the prime political object of the 21st century is no longer the state, it’s the planet.
Q. In the context of the passage, all of the following statements are true EXCEPT:
Q. Which one of the following statements best reflects the main argument of the third paragraph of the passage?
Q. The author lists all of the following as reasons for the emergence of the new kind of planet EXCEPT:
Q. The central theme of the passage is about the choice between:
Directions: Read the following passage and answer the questions that follow:
There are several key difficulties surrounding the topic of percentages. Research has shown that one difficulty is more common than the others: the meaning of the terms ‘of’ and ‘out of’. Hansen (2011) states that both terms represent an operator which needs explaining, and teachers need to address them before the topic is introduced to prevent any confusion. ‘Of’ represents the multiplication operator: for example, 60% of 70 means 0.6 multiplied by 70. ‘Out of’ represents the division operator: for example, 30 out of 50 means 30 divided by 50. These terms need to be taught clearly beforehand, so that children are confident in what they represent.
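As an illustration (not part of the source passage), a minimal Python sketch of the two operators described above; the function names are our own:

```python
# 'of' represents multiplication: 60% of 70 means 0.6 * 70.
def percent_of(p, quantity):
    """p% of quantity, e.g. percent_of(60, 70) -> 42.0"""
    return (p / 100) * quantity

# 'out of' represents division: 30 out of 50 means 30 / 50.
def out_of(part, whole):
    """part out of whole, expressed as a percentage, e.g. out_of(30, 50) -> 60.0"""
    return (part / whole) * 100

print(percent_of(60, 70))  # 42.0
print(out_of(30, 50))      # 60.0
```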
Killen and Hindhaugh (2018) believe that once children understand that 1/10 is equal to 10%, they will be able to use their knowledge of fractions to determine other multiples of 10. For example: find 40% of 200. If children are aware that 10% is 20, then it will become obvious to them that 40% must be 80. This method opens up many other practical ways to find percentages of a quantity; once children know 10%, they may also start halving to find percentages such as 5% or 25%. However, Killen and Hindhaugh (2018) state that a difficulty can occur when children are asked for a percentage of a quantity: they may believe that the answer is always in percent. For example, when asked to find 60% of £480, children may be capable of calculating the answer of 288, but instead of writing down £288 they may write down 288%. Teachers need to explain this issue and make clear to children that the answer must be in the same units as the given quantity.
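A short worked sketch of the benchmark method just described (illustrative only; the variable names are ours):

```python
# Find 40% of 200 by building from the 10% benchmark.
quantity = 200
ten_percent = quantity / 10               # 10% of 200 = 20
forty_percent = 4 * ten_percent           # 40% = 4 * 10% = 80

# Halving known benchmarks yields further ones, such as 5% and 25%.
five_percent = ten_percent / 2            # 5% of 200 = 10
twenty_five_percent = (quantity / 2) / 2  # 25% = half of 50% = 50

# The unit pitfall: 60% of £480 is £288 (pounds), not 288%.
print(f"£{0.60 * 480:.0f}")               # £288
```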
Hansen also comments that the key to success in understanding percentages is the relationship children have with fractions and decimals. For example, they should be aware that 50% is equivalent to ½ and 0.5, and 25% is equivalent to ¼ and 0.25. Teaching these topics in isolation from each other should be strictly avoided, as this may destroy a child’s deep mathematical understanding. Killen and Hindhaugh agree, noting that children need to continually link decimals, fractions and percentages to their knowledge of the number system and the operations they are familiar with. Reys et al. (2010), however, believe that percentages are more closely linked with ratios and proportions in mathematics, and stress how important it is for teachers to teach these other topics to a high level, so as to later reduce the number of errors a child makes with percentages. These theorists nevertheless agree that understanding percentages requires no new skills or concepts beyond those used in identifying fractions, decimals, ratios and proportions. Reys et al. state that an effective way of starting these topics is to explore children’s basic knowledge of what percentage means to them.
Barmby et al. noted that a misconception occurs whenever a learner’s outlook on a task does not connect to the accepted meaning of the overall concept. Ryan and Williams state that it is more damaging for children to have misconceptions about mathematical concepts than difficulties calculating them. Killen and Hindhaugh discuss how rules and recipes are used most commonly by teachers who are not fully confident with percentages. The main point of the argument is that if children are taught such rules for percentages, misconceptions can occur, for instance when a child forgets a rule or misapplies it in their working.
This method is not the most reliable for children, but it can be a quick alternative for teachers who are not fully confident in the topic themselves. This links to one of the most common misconceptions in the primary classroom. Killen and Hindhaugh state that the teacher is responsible for their children’s success in that subject area. If the teaching is effective, the child will become more confident and develop more links around the topic of percentages, resulting in a high level of understanding. However, if the teaching is not up to standard, the child may lose confidence and end up confused by the simplest of questions.
Q. It can be inferred from the passage that the author is not likely to support the view that
Q. Which one of the following is not a valid inference from the passage?
Q. Which of the following statements best describes the relationship between percentages, fractions, decimals, ratios, and proportions according to the passage?
Q. Which one of the following statements best reflects the main argument of the fourth paragraph of the passage?
Q. On the basis of the information in the passage, all of the following are potential problems children might face when learning percentages EXCEPT that they:
Directions: Read the following passage and answer the questions that follow:
If a pregnant woman is in early pregnancy or is obese, she can undergo transvaginal sonography, in which a probe is placed in the vagina. The test is also sometimes carried out if the pregnant woman has abnormal vaginal bleeding or pelvic pain. This type of sonography works on a similar principle to the ultrasonography mentioned above. Mothers who want to see their baby’s heartbeat can opt for Doppler sonography, which works on essentially the same principle except that the ultrasound is further enhanced by the Doppler effect. Generally the fetus’s heartbeat can be detected after 7 weeks of gestation, and the blood flow of the fetus can therefore be detected as well.
Blood circulates through the body of the fetus, so Doppler sonography can detect changes in the direction of blood flow via the Doppler effect and show whether the circulation is normal. This is done by measuring the change in the frequency received by the transceiver. There are in fact a few more types of prenatal checkup, such as amniocentesis and chorionic villus sampling. Nonetheless, ultrasonography is the safest diagnostic option: it involves only a transducer placed outside the mother’s abdomen, while amniocentesis and chorionic villus sampling require mechanical penetration and sampling inside the mother’s uterus or abdomen, which increases the risk of miscarriage during the tests.
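The passage does not give the underlying formula, but the change in received frequency it mentions follows the standard pulsed-Doppler relation Δf = 2·f₀·v·cos(θ)/c. A minimal Python sketch under that assumption (the example numbers are ours):

```python
import math

def doppler_shift(f0_hz, v_ms, angle_deg, c_ms=1540.0):
    """Frequency shift seen by the transceiver for blood moving at v_ms.

    Standard pulsed-Doppler relation: df = 2 * f0 * v * cos(theta) / c,
    with c ~ 1540 m/s (speed of sound in soft tissue). The factor 2 arises
    because the wave is shifted twice: on transmission to the moving blood
    and again on reflection back to the probe.
    """
    return 2 * f0_hz * v_ms * math.cos(math.radians(angle_deg)) / c_ms

# Flow toward the probe raises the received frequency (positive shift);
# flow away lowers it (negative shift) - this is how direction is read off.
print(doppler_shift(2e6, 0.5, 60))   # ~ +649 Hz
print(doppler_shift(2e6, -0.5, 60))  # ~ -649 Hz
```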
That said, ultrasonography can only provide an early diagnosis of mothers and fetuses; it cannot treat anomalies or genetic diseases. In a 1993 trial conducted by the RADIUS study group, researchers found that routine sonography did not significantly reduce perinatal morbidity or mortality among fetuses or mothers. Moreover, the detection of anomalies did not actually alter the outcomes of newborn babies. It is therefore important to acknowledge that ultrasonography is just a test of whether the fetus is healthy, not a treatment for anomalies. X-rays are electromagnetic waves with wavelengths ranging from 0.01 to 10 nanometers (0.01–10 × 10⁻⁹ m), travelling at 3 × 10⁸ m s⁻¹ in a vacuum. X-rays are commonly used in medicine, for example in radiation therapy for cancer and in medical imaging.
X-rays are produced in an X-ray tube, in which electrons are accelerated by a high voltage and then collide with a metal target; the sudden deceleration of the electrons results in the emission of X-rays. X-rays have high ionizing power, so many people, especially pregnant women, worry about the harmful effects of an X-ray diagnosis. It is true that a very high dose of radiation from X-rays may result in radiation sickness. Prolonged and continuous exposure to X-rays also increases the risk of cancer, and in pregnant women there may be a risk of the fetus developing childhood cancer, or even of miscarriage. Nevertheless, the harmful effects of exposure to X-rays seem to be exaggerated: the serious effects mentioned above result only from a high dosage over a short period of time.
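The tube description above can be made quantitative with the Duane-Hunt limit (our addition, not stated in the passage): an electron accelerated through a voltage V carries energy eV, which at most converts into a single photon, so the shortest emitted wavelength is λ_min = hc/(eV). A quick check in Python:

```python
H = 6.626e-34   # Planck constant, J s
C = 3.0e8       # speed of light, m/s
E = 1.602e-19   # elementary charge, C

def min_wavelength_m(tube_voltage_v):
    """Duane-Hunt limit: shortest wavelength a tube at this voltage can emit."""
    return H * C / (E * tube_voltage_v)

# A 100 kV diagnostic tube emits nothing shorter than ~0.0124 nm,
# consistent with the 0.01-10 nm range quoted in the passage.
print(min_wavelength_m(100e3) * 1e9, "nm")  # ~0.0124
```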
There are different kinds of X-rays: one type is used in scanning or diagnosis, another in treating cancer, and the energy carried by each type is different. For normal X-ray scanning, the dosage is extremely small. The absorbed dose of X-rays is measured in rads, where 1 rad = 10 × 10⁻³ J kg⁻¹ = 10⁻² J kg⁻¹. For a chest X-ray of a pregnant woman, the estimated fetal dose is around 60 millirads; for an abdominal X-ray it is around 290 millirads. These are quite low values, as the dose from radiation from outer space is around 90-100 millirads. In fact, the risk of the fetus developing eye abnormalities or mental retardation increases only when the dosage exceeds 10 rads, so it is very rare for pregnant women to suffer harmful effects from X-ray radiation. According to the American Academy of Family Physicians, X-rays are generally safe even for pregnant women, and according to radiologists, no single diagnostic X-ray has a radiation dose significant enough to cause adverse effects in a developing embryo or fetus.
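To put the quoted doses on one scale, a small arithmetic sketch (the values are taken from the passage; the 95-millirad entry is our midpoint of the quoted 90-100 range):

```python
RAD_TO_J_PER_KG = 1e-2   # 1 rad = 10^-2 J/kg, as defined above

doses_millirads = {
    "chest X-ray (fetal dose)": 60,
    "abdominal X-ray (fetal dose)": 290,
    "radiation from outer space": 95,  # midpoint of the quoted 90-100
}
THRESHOLD_RADS = 10      # fetal risk rises only above this, per the passage

for source, mrad in doses_millirads.items():
    rads = mrad / 1000   # 1000 millirads = 1 rad
    print(f"{source}: {rads:.2f} rad = {rads * RAD_TO_J_PER_KG:.1e} J/kg "
          f"({rads / THRESHOLD_RADS:.2%} of the 10-rad threshold)")
```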
Q. Based on the passage, what is the main idea conveyed about prenatal checkups and X-ray use for pregnant women?
Q. The author of this passage is LEAST likely to support the view that
Q. Which of the following best describes the primary difference between ultrasonography and Doppler sonography in prenatal checkups?