The prevailing theory of our Moon's origin is that it was created by a giant impact between a large planet-like object and the proto-Earth very early in the evolution of our solar system. The energy of this impact was sufficiently high that the Moon formed from melted material, beginning as a deep liquid magma ocean. As the Moon cooled, this magma ocean solidified into different mineral components, the lightest of which floated upwards to form the oldest crust. Although samples of this presumed ancient crust were brought back to Earth by the Apollo 16 mission in 1972, it was not until recently that scientists could successfully date them. Recent analysis of one of the samples, a rock called ferroan anorthosite or FAN, which is believed to be the oldest of the Moon's crustal rocks, has given scientists new insights into the formation of the Moon, suggesting that the Moon may be much younger than currently believed.
The sample, which had been carefully stored at NASA’s Johnson Space Center, had to be extensively pre-cleaned to remove terrestrial contamination. Once the sample was contamination-free, the researchers were able to study it. The team analyzed isotopes of the elements lead and neodymium to place the age of the sample at 4.36 billion years. This figure is significantly younger than earlier estimates of the Moon's age, which range up to nearly the age of the solar system itself, 4.567 billion years. The new, younger age obtained for the oldest lunar crust is similar to ages obtained for the oldest terrestrial minerals -- zircons from Western Australia -- suggesting that the oldest crust on both Earth and the Moon formed at approximately the same time.
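For readers curious how isotope measurements translate into an age, the minimal sketch below illustrates the generic radiometric decay relation, in which the ratio of daughter to parent isotope fixes the elapsed time. It is a simplified, single-system illustration only; the lead and neodymium systematics actually used in the study (and the isochron corrections they require) are considerably more involved, and the ratio below is a hypothetical placeholder chosen to land near 4.36 billion years.

```python
import math

def radiometric_age(daughter_to_parent_ratio, half_life_years):
    """Generic single-system radiometric age: t = (1/lambda) * ln(1 + D/P),
    with lambda = ln(2) / half-life. Real lunar dating uses Pb-Pb and Sm-Nd
    isochrons, which also account for the initial daughter isotope."""
    decay_constant = math.log(2) / half_life_years
    return math.log(1 + daughter_to_parent_ratio) / decay_constant

# Hypothetical illustration using the 147Sm -> 143Nd system
# (half-life roughly 106 billion years); a radiogenic D/P ratio of ~0.029
# corresponds to an age of about 4.36 billion years.
print(f"{radiometric_age(0.0289, 106e9) / 1e9:.2f} billion years")  # ~4.36
```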
This study is the first in which a single sample of FAN yielded consistent ages from multiple isotope dating techniques. This result strongly suggests that these ages pinpoint the time at which this sample crystallized. The extraordinarily young age of this lunar sample either means that the Moon solidified significantly later than previous estimates -- and therefore the Moon itself is much younger than previously believed -- or that this sample does not represent a crystallization product of the original magma ocean. Either scenario requires major revision to existing models for the formation of the Moon.
Which of the following captures the primary purpose of the passage?
The passage states which of the following?
The passage supports which of the following statements?
The progress of astronomy has always been closely associated with the development and application of mechanical processes and skill. Before the seventeenth century, the size of the planets could not be measured, the satellites of the planets except the Earth’s moon were unknown, the phases of Mercury and Venus were merely conjectured, and accurate positions of the sun, moon, and planets among the stars, and of the stars among themselves, were impossible to determine, all because there were no telescopes. More than a half century elapsed after the invention of the telescope before Picard combined it with a graduated circle in such a way that the measurement of angles was greatly improved. Then arose the necessity for accurate time. Although Galileo had learned the principles governing the pendulum, astronomy had to wait for the mechanical genius of Huygens before a satisfactory clock was invented around 1657.
Nearly all the large reflecting telescopes ever built were constructed by astronomers who possessed great facility in practical mechanics. The rapid and significant advances in nearly all departments of astronomy in the past century would not have been possible except through the skill and patience of glassmakers, opticians, and engineers. The principles of spectrum analysis as formulated by Kirchhoff allowed for the discovery of the elements composing every heavenly body. The deftness of Wollaston showed that light could not be analyzed unless it first passed through a very narrow slit. Even in our modern day the power of the telescope and spectroscope has been vastly extended by the optical skill and mechanical dexterity of individuals such as Rowland, Hastings, and Brashear, all Americans.
An observatory site should have a fairly unobstructed horizon, as much freedom from cloud as possible, good foundations for the instruments, and a very steady atmosphere. To understand the necessity of a steady atmosphere, look at some distant outdoor object through a window on a hot summer day. The object appears blurry and wavering. Similarly, currents of warm air continually rise from the earth to upper regions of the atmosphere, and colder air comes down and rushes underneath. Although these atmospheric movements are often invisible to the eye, their effect is plainly visible in the telescope as a blurry distortion. In order for an ideally-designed telescope to perform perfectly, it must be located in a perfect atmosphere. Otherwise its full power cannot be employed. All hindrances of atmosphere are most advantageously avoided in arid or desert regions of the globe, at elevations of 3000 to 10,000 feet above sea level. Higher mountains have as yet been only partially investigated, and it is not known whether difficulties of occupying them permanently would more than counterbalance the gain which greater elevation would afford.
According to the passage, which of the following is NOT true about the telescope?
The primary purpose of this passage is to
Which of the following is the function of the first paragraph?
It can be inferred from the passage that
Researchers bet their bottom dollar on a combination of polar ice cores, tree-rings, geochemistry, and a medieval chronicle little-known in the West to solve one of vulcanology’s most enduring mysteries: which peak blew its top in the mid-13th century, causing a catastrophic eruption that ranks as one of the biggest in recorded history? As with any investigation, the team had to rule out other suspects as it followed a trail of clues -- and even read palms, or at least palm leaves -- ultimately finding the culprit of the massive 1257 AD eruption, which the researchers say is Samalas volcano on Lombok Island in Indonesia.
For decades, scientists have been searching for the volcano responsible for the largest spike in sulfate deposits in the last 7,000 years, which was revealed in ice cores from Greenland and Antarctica. The spike indicated a massive eruption around 1257 that may have sent up to eight times more sulfate into the stratosphere than the 1883 eruption of Krakatau, often held up as an archetype of volcanoes behaving badly. Researchers say the 1257 mystery spew is comparable in scope to a second-century AD eruption in the Taupo Volcanic Zone of New Zealand, known as the most intense historic volcanic event. A multitude of futile attempts over a few decades compelled the researchers to write the project off as “unsolved”. Some thirty years later, one of the researchers’ tips came from Babad Lombok, a 13th-century historical record in Old Javanese written on palm leaves; the chronicle references a massive eruption of Samalas that created an enormous caldera. The current research zeroed in on Samalas, part of the Mount Rinjani volcanic complex.
The team was able to accumulate a sizable amount of incriminating evidence, including pyroclastic deposits from the eruption more than 100 feet thick found more than 15 miles from the ruins of the volcano. The range and volume of the deposits suggest that the Samalas eruption exceeded that of the Tambora event in 1815. The team sampled carbonized tree trunks and branches in the Samalas deposit zone and used radiocarbon dating to confirm a mid-13th-century eruption. Reviewing wind patterns, researchers were even able to narrow the timeframe for the eruption. The westward distribution of volcanic ash and other ejecta from Samalas suggests that the dry season’s easterly trade winds were prevalent, putting the eruption window between May and October of 1257.
The author is primarily concerned with:
The author of the passage alludes to the discovery made in Greenland and Antarctica in order to
Which of the following statements about “the most intense historic volcanic event” is supported by information in the passage?
Origami is capable of turning a simple sheet of paper into a pretty paper crane, but the principles behind the paper-folding art can also be applied to making a microfluidic device for a blood test or to storing a satellite's solar panel in a rocket's cargo bay. A team of researchers is turning kirigami, a related art form that allows the paper to be cut, into a technique that can be applied equally to structures on those vastly divergent length scales. The researchers lay out the rules for folding and cutting a hexagonal lattice, a structure made from strips of material that cross over each other with spaces between, into a wide variety of useful three-dimensional shapes.
A hexagonal lattice may seem like an odd choice for a starting point, but the researchers think that the pattern has advantages over a seemingly simpler tessellation, such as one made from squares; for instance, it is easier to fill a space with a hexagonal lattice and move from 2-D to 3-D. Starting from a flat hexagonal grid on a sheet of paper, the researchers outlined the fundamental cuts and folds that allow the resulting shape to keep the same proportions of the initial lattice, even if some of the material is removed. This is a critical quality for making the transition from paper to materials that might be used in real-world applications.
Having a set of rules that draws on fundamental mathematical principles means that the kirigami approach can be applied equally across length scales, and with almost any material that can be selected on the basis of its relevance to the ultimate application, whether it is in nanotechnology, architecture, or aerospace. The rules also guarantee that "modules," basic shapes such as channels that can direct the flow of fluids, can be combined into more complex ones. Kirigami is particularly attractive for nanoscale applications, where the simplest, most space-efficient shapes are necessary, and self-folding materials would circumvent some of the fabrication challenges inherent in working with other materials at such small scales.
Which of the following most aptly describes the function of the second paragraph?
Which of the following statements would the author most likely agree with?
The author is primarily concerned with
In liquids, molecules move about freely yet tend to cling together. This tendency to cling together, which is not noticeable in gases, is characteristic of liquids and especially of solids. It is the cause of viscosity and is readily detected in a variety of ways. For instance, not only do liquid molecules cling together to form drops and streams, but they cling to the molecules of solids as well, as is shown by the wet surface of an object that has been dipped in water. The attraction of like molecules for one another is called “cohesion,” while the attraction of unlike molecules is called “adhesion,” although the force is the same whether the molecules are alike or unlike. It is the former that causes drops of water to form and that holds iron, copper, and other solids so rigidly together.
The adhesion of glue to other objects is well known. Paint also "sticks" well. Sometimes the "joint" where two boards are glued together is stronger than the board itself. The force of attraction between molecules has been studied carefully. The attraction acts only through very short distances. The attraction even in liquids is considerable and may be measured. The cohesion of liquids is also indicated by the tendency of films to assume the smallest possible surface. Soap bubble films show this readily. A soap bubble takes its spherical shape because this form holds the confined air within the smallest possible surface. A drop of liquid is spherical for the same reason. The surface of water acts as if covered by a film that coheres more strongly than the water beneath it. This is shown by the fact that a steel needle or a thin strip of metal may be floated upon the surface of water. It is supported by the surface film. If the film breaks, the needle sinks. This film also supports the little water bugs seen running over the surface of a quiet pond in summer. The surface film is stronger in some liquids than in others. This may be shown by taking water, colored so that it can be seen, placing a thin layer of it on a white surface, and dropping alcohol upon it. Wherever the alcohol drops, the water is seen to pull away from it, leaving a bare space over which the alcohol has been spread. This indicates that the alcohol has the weaker film.
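The claim that a sphere holds a given volume within the smallest possible surface can be checked with a quick numerical comparison; the minimal sketch below compares the surface area of a sphere and of a cube that enclose the same (arbitrary) volume.

```python
import math

def sphere_surface_area(volume):
    """Surface area of the sphere that encloses the given volume."""
    radius = (3 * volume / (4 * math.pi)) ** (1 / 3)
    return 4 * math.pi * radius ** 2

def cube_surface_area(volume):
    """Surface area of the cube that encloses the given volume."""
    side = volume ** (1 / 3)
    return 6 * side ** 2

v = 1.0  # one unit of volume (units are arbitrary here)
print(f"sphere: {sphere_surface_area(v):.3f}, cube: {cube_surface_area(v):.3f}")
# sphere: 4.836, cube: 6.000 -- the sphere needs about 19% less surface,
# which is why free drops and soap bubbles pull themselves into spheres.
```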
Which of the following is the function of the first paragraph?
Information is the essence of the universe and denotes the distinctions between things. It is a very basic principle of physics that distinctions never disappear; they might get scrambled or mixed up, but even after a seemingly irreversible change – say, a magazine gets dissolved into pulp at a recycling plant – the information on the pages of the magazine is re-organized, not eliminated, and in theory the decay can be reversed, the pulp reconstructed into words and photographs. The only exception to this principle in physics is if the magazine were thrown into a black hole, a singular object in this regard, since nothing can emerge out of it after all. Even after Stephen Hawking showed in 1975 that black holes can radiate away matter and energy, the radiation seemed devoid of any structure, indicating that all information is lost in a black hole – a conclusion that has been hotly contested by physicists all over the world, who argue that the entire structure of theoretical physics will disintegrate once you accept the notion that information can be lost, even if only in a black hole.
Even though Hawking was not easily convinced, physicists adopted a new theory called the holographic principle, which states that when an object falls inside a black hole, the stuff inside it may be lost, but the object's information may be imprinted on the surface of the black hole; with the right tools you may reconstruct the magazine from the black hole just as you would have reconstructed it from the pulp. This principle, which may sound like an accounting trick, has some serious implications if true. It implies that all information about 3-dimensional objects is stored in 2 dimensions and that there is a limit to how much information can be stored on a given surface area. While this theory plugs a key gap in Hawking's assertion, its corollaries spring some interesting implications that may have a tough time standing up to scrutiny.
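The limit on information per unit of surface area mentioned here is usually identified with the Bekenstein-Hawking bound of roughly one bit per four Planck areas (divided by ln 2); the sketch below is a back-of-the-envelope illustration of that bound for a one-metre sphere, offered as an aside rather than a claim drawn from the passage itself.

```python
import math

# Physical constants (SI units)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8         # speed of light, m/s

PLANCK_LENGTH = math.sqrt(HBAR * G / C**3)  # ~1.6e-35 m

def max_bits_on_surface(area_m2):
    """Holographic (Bekenstein-Hawking) bound: at most A / (4 * l_p^2 * ln 2)
    bits of information can be associated with a surface of area A."""
    return area_m2 / (4 * PLANCK_LENGTH**2 * math.log(2))

sphere_area = 4 * math.pi * 1.0**2          # sphere of radius 1 m
print(f"~{max_bits_on_surface(sphere_area):.1e} bits")  # on the order of 10^70
```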
According to the passage, prior to 1975 it was believed that black holes were unique because:
Why does the author imply that the holographic principle “may sound like an accounting trick”?
Which of the following best describes the author’s feelings regarding the holographic principle?
According to the passage, the hotly contested debate about black holes was: