Group Question
Read the passage carefully and answer the questions that follow.

In December 2010 I appeared on John Stossel’s television special on scepticism on Fox Business News, during which I debunked numerous pseudoscientific beliefs. Stossel added his own scepticism of possible financial pseudoscience in the form of active investment fund managers who claim that they can consistently beat the market. In a dramatic visual demonstration, Stossel threw 30 darts into a page of stocks and compared their performance since January 1, 2010, with the stock picks of the 10 largest managed funds. Results: Dartboard, a 31 percent increase; managed funds, a 9.5 percent increase.

Admitting that he got lucky because of his limited sample size, Stossel explained that had he thrown enough darts to fully represent the market he would have generated a 12 percent increase (the market average), a full 2.5 percentage points higher than the 10 largest managed funds' average increase. As Princeton University economist Burton G. Malkiel elaborated on the show, over the past decade ‘more than two thirds of actively managed funds were beaten by a simple low-cost indexed fund [for example, a mutual fund invested in a large number of stocks], and the active funds that win in one period aren’t the same ones who win in the next period.’
Stossel cited a study in the journal Economics and Portfolio Strategy that tracked 452 managed funds from 1990 to 2009, finding that only 13 beat the market average. Equating managed fund directors to ‘snake-oil salesmen’, Malkiel said that Wall Street is selling Main Street on the belief that experts can consistently time the market and make accurate predictions of when to buy and sell. They can’t. No one can. Not even professional economists and not even for large-scale market indicators. As economics Nobel laureate Paul Samuelson long ago noted in a 1966 Newsweek column: ‘Commentators quote economic studies alleging that market downturns predicted four out of the last five recessions. That is an understatement. Wall Street indexes predicted nine out of the last five recessions!’ 
Even in a specific technical area, where you might expect a greater level of expertise, economic forecasters fumble. On December 22, 2010, for example, the Wall Street Journal ran a piece on how the great hedge fund financier T. Boone Pickens (chair of BP Capital Management) had just abandoned his ‘Pickens Plan’ of investing in wind energy. Pickens invested $2 billion based on his prediction that the price of natural gas would stay high. It didn’t, plummeting as the drilling industry’s ability to unlock methane from shale beds improved, a turn of events even an expert such as Pickens failed to see.

Why are experts (along with us nonexperts) so bad at making predictions? The world is a messy, complex and contingent place with countless intervening variables and confounding factors, which our brains are not equipped to evaluate. We evolved the capacity to make snap decisions based on short-term predictions, not rational analysis about long-term investments, and so we deceive ourselves into thinking that experts can foresee the future. This self-deception among professional prognosticators was investigated by University of California, Berkeley, professor Philip E. Tetlock, as reported in his 2005 book Expert Political Judgment. After testing 284 experts in political science, economics, history and journalism on a staggering 82,361 predictions about the future, Tetlock concluded that they did little better than ‘a dart-throwing chimpanzee’.

There was one significant factor in greater prediction success, however, and that was cognitive style: ‘foxes’ who know a little about many things do better than ‘hedgehogs’ who know a lot about one area of expertise. Low scorers, Tetlock wrote, were ‘thinkers who “know one big thing”, aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who “do not get it”, and express considerable confidence that they are already pretty proficient forecasters.’ High scorers in the study were ‘thinkers who know many small things (tricks of their trade), are sceptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible “ad hocery” that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.’ Being deeply knowledgeable on one subject narrows focus and increases confidence but also blurs the value of dissenting views and transforms data collection into belief confirmation. One way to avoid being wrong is to be sceptical whenever you catch yourself making predictions based on reducing complex phenomena into one overarching scheme. This type of cognitive trap is why I don’t make predictions and why I never will.
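Note for readers: the sampling-luck point behind Stossel’s demonstration can be made concrete with a short simulation. The Python sketch below is illustrative only and is not from the passage; the normal return distribution, its 40 percent spread, the 31 percent threshold, and the 5,000-stock ‘broad market’ portfolio are invented assumptions, chosen solely to show how a 30-stock dartboard portfolio can beat the 12 percent market average purely by chance while a large enough portfolio converges on that average.

```python
import random

# Illustrative Monte Carlo sketch of Stossel's dartboard experiment.
# The return distribution below is a made-up assumption for demonstration,
# not data from the passage.
random.seed(42)

MARKET_MEAN = 0.12    # the 12% market average cited in the passage
STOCK_SPREAD = 0.40   # assumed dispersion of individual stock returns

def random_portfolio_return(n_stocks: int) -> float:
    """Average return of n_stocks picked at random ('darts')."""
    returns = [random.gauss(MARKET_MEAN, STOCK_SPREAD) for _ in range(n_stocks)]
    return sum(returns) / n_stocks

# With only 30 darts, portfolio returns vary widely around the 12% mean...
trials = [random_portfolio_return(30) for _ in range(10_000)]
lucky = sum(r >= 0.31 for r in trials) / len(trials)
print(f"Chance a 30-stock dartboard portfolio returns >= 31%: {lucky:.1%}")

# ...whereas a portfolio large enough to 'fully represent the market'
# converges on the market average, as Stossel noted.
broad = random_portfolio_return(5_000)
print(f"5,000-stock portfolio return: {broad:.1%}")
```

Under these made-up parameters, a 31 percent result appears in well under 1 percent of trials: rare for any single portfolio, but unremarkable across the thousands of funds and pundits making picks at any given time.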
 
 
Q. Match the person in the left column to a description of them in the right column.

I. John Stossel
II. T. Boone Pickens
III. Philip E. Tetlock
IV. Burton G. Malkiel

A. Princeton University economist
B. Chair of BP Capital Management
C. Presenter of a television special on Fox Business News
D. Professor at the University of California, Berkeley

  • a) I-A, II-B, III-D, IV-C
  • b) I-A, II-D, III-B, IV-C
  • c) I-C, II-A, III-B, IV-D
  • d) I-C, II-B, III-D, IV-A
Correct answer is option 'D'. Can you explain this answer?
Verified Answer
We can infer from paragraph 1 that John Stossel is the presenter of a television special on Fox Business News, so I-C. Eliminate options (a) and (b).
In paragraph 4, T. Boone Pickens is said to be the chair of BP Capital Management, so II-B. Eliminate option (c).
In paragraph 5, Philip E. Tetlock is stated to be a professor at the University of California, Berkeley, so III-D.
According to paragraph 2, Burton G. Malkiel is an economist from Princeton University, so IV-A.
Option (d) correctly matches the people to their descriptions. Hence, the correct answer is option (d).
Top Courses for CAT

Group QuestionRead the passage carefully and answer the questions that follow.In December 2010 I appeared on John Stossels television special on scepticism on Fox Business News, during which I debunked numerous pseudoscientific beliefs. Stossel added his own scepticism of possible financial pseudoscience in the form of active investment fund managers who claim that they can consistently beat the market. In a dramatic visual demonstration, Stossel threw 30 darts into a page of stocks and compared their performance since January 1,2010, with stock picks of the 10 largest managed funds. Results: Dartboard, a 31 percent increase; managed funds, a 9.5 percent increase.Admitting that he got lucky because of his limited sample size, Stossel explained that had he thrown enough darts to fully represent the market he would have generated a 12 percent increase the market average a full 2.5 percentage points higher than the 10 largest managed funds average increase. As Princeton University economist Burton G. Malkiel elaborated on the show, over the past decade more than two thirds of actively managed funds were beaten by a simple low-cost indexed fund [for example, a mutual fund invested in a large number of stocks], and the active funds that win in one period arent the same ones who win in the next period.Stossel cited a study in the journal Economics and Portfolio Strategy that tracked 452 managed funds from 1990 to 2009,finding that only 13 beat the market average. Equating managed fund directors to snake-oil salesmen, Malkiel said that Wall Street is selling Main Street on the belief that experts can consistently time the market and make accurate predictions of when to buy and sell. They cant. No one can. Not even professional economists and not even for large-scale market indicators. As economics Nobel laureate Paul Samuelson long ago noted in a 1966 Newsweek column: Commentators quote economic studies alleging that market downturns predicted four out of the last five recessions. That is an understatement. Wall Street indexes predicted nine out of the last five recessions!Even in a given tech area, where you might expect a greater level of specific expertise, economic forecasters fumble. On December 22, 2010, for example, the Wall Street Journal ran a piece on how the great hedge fund financier T. Boone Pickens (chair of BP Capital Management) just abandoned his Pickens Plan of investing in wind energy. Pickens invested $2 billion based on his prediction that the price of natural gas would stay high. It didnt, plummeting as the drilling industrys ability to unlock methane from shale beds improved, a turn of events even an expert such as Pickens failed to see.Why are experts (along with us nonexperts) so bad at making predictions? The world is a messy, complex and contingent place with countless intervening variables and confounding factors, which our brains are not equipped to evaluate. We evolved the capacity to make snap decisions based on short-term predictions, not rational analysis about long-term investments, and so we deceive ourselves into thinking that experts can foresee the future. This self-deception among professional prognosticators was investigated by University of California, Berkeley, professor Philip E. Tetlock, as reported in his 2005 book Expert Political Judgment. 
After testing 284 experts in political science, economics, history and journalism in a staggering 82,361 predictions about the future, Tetlock concluded that they did little better than a dart-throwing chimpanzee.There was one significant factor in greater prediction success, however, and that was cognitive style: foxes who know a little about many things do better than hedgehogs who know a lot about one area of expertise. Low scorers, Tetlock wrote, were thinkers who know one big thing, aggressively extend the explanatoryreach of that one big thing into new domains, display bristly impatience with those who do not get it, and express considerable confidence that they are already pretty proficient forecasters. High scorers in the study were thinkers who know many small things (tricks of their trade), are sceptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible ad hocery that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess. Being deeply knowledgeable on one subject narrows focus and increases confidence but also blurs the value of dissenting views and transforms data collection into belief confirmation. One way to avoid being wrong is to be sceptical whenever you catch yourself making predictions based on reducing complex phenomena into one overarching scheme. This type of cognitive trap is why I dont make predictions and why I never will.Q. Match the person in the left column to a description of them in the right column.a)I-A, ll-B, lll-D, IV-Cb)I-A, ll-D, lll-B, IV-Cc)l-C, ll-A, lll-B, IV-Dd)l-C, ll-B, lll-D, IV-ACorrect answer is option 'D'. Can you explain this answer?
Question Description
Group QuestionRead the passage carefully and answer the questions that follow.In December 2010 I appeared on John Stossels television special on scepticism on Fox Business News, during which I debunked numerous pseudoscientific beliefs. Stossel added his own scepticism of possible financial pseudoscience in the form of active investment fund managers who claim that they can consistently beat the market. In a dramatic visual demonstration, Stossel threw 30 darts into a page of stocks and compared their performance since January 1,2010, with stock picks of the 10 largest managed funds. Results: Dartboard, a 31 percent increase; managed funds, a 9.5 percent increase.Admitting that he got lucky because of his limited sample size, Stossel explained that had he thrown enough darts to fully represent the market he would have generated a 12 percent increase the market average a full 2.5 percentage points higher than the 10 largest managed funds average increase. As Princeton University economist Burton G. Malkiel elaborated on the show, over the past decade more than two thirds of actively managed funds were beaten by a simple low-cost indexed fund [for example, a mutual fund invested in a large number of stocks], and the active funds that win in one period arent the same ones who win in the next period.Stossel cited a study in the journal Economics and Portfolio Strategy that tracked 452 managed funds from 1990 to 2009,finding that only 13 beat the market average. Equating managed fund directors to snake-oil salesmen, Malkiel said that Wall Street is selling Main Street on the belief that experts can consistently time the market and make accurate predictions of when to buy and sell. They cant. No one can. Not even professional economists and not even for large-scale market indicators. As economics Nobel laureate Paul Samuelson long ago noted in a 1966 Newsweek column: Commentators quote economic studies alleging that market downturns predicted four out of the last five recessions. That is an understatement. Wall Street indexes predicted nine out of the last five recessions!Even in a given tech area, where you might expect a greater level of specific expertise, economic forecasters fumble. On December 22, 2010, for example, the Wall Street Journal ran a piece on how the great hedge fund financier T. Boone Pickens (chair of BP Capital Management) just abandoned his Pickens Plan of investing in wind energy. Pickens invested $2 billion based on his prediction that the price of natural gas would stay high. It didnt, plummeting as the drilling industrys ability to unlock methane from shale beds improved, a turn of events even an expert such as Pickens failed to see.Why are experts (along with us nonexperts) so bad at making predictions? The world is a messy, complex and contingent place with countless intervening variables and confounding factors, which our brains are not equipped to evaluate. We evolved the capacity to make snap decisions based on short-term predictions, not rational analysis about long-term investments, and so we deceive ourselves into thinking that experts can foresee the future. This self-deception among professional prognosticators was investigated by University of California, Berkeley, professor Philip E. Tetlock, as reported in his 2005 book Expert Political Judgment. 
After testing 284 experts in political science, economics, history and journalism in a staggering 82,361 predictions about the future, Tetlock concluded that they did little better than a dart-throwing chimpanzee.There was one significant factor in greater prediction success, however, and that was cognitive style: foxes who know a little about many things do better than hedgehogs who know a lot about one area of expertise. Low scorers, Tetlock wrote, were thinkers who know one big thing, aggressively extend the explanatoryreach of that one big thing into new domains, display bristly impatience with those who do not get it, and express considerable confidence that they are already pretty proficient forecasters. High scorers in the study were thinkers who know many small things (tricks of their trade), are sceptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible ad hocery that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess. Being deeply knowledgeable on one subject narrows focus and increases confidence but also blurs the value of dissenting views and transforms data collection into belief confirmation. One way to avoid being wrong is to be sceptical whenever you catch yourself making predictions based on reducing complex phenomena into one overarching scheme. This type of cognitive trap is why I dont make predictions and why I never will.Q. Match the person in the left column to a description of them in the right column.a)I-A, ll-B, lll-D, IV-Cb)I-A, ll-D, lll-B, IV-Cc)l-C, ll-A, lll-B, IV-Dd)l-C, ll-B, lll-D, IV-ACorrect answer is option 'D'. Can you explain this answer? for CAT 2024 is part of CAT preparation. The Question and answers have been prepared according to the CAT exam syllabus. Information about Group QuestionRead the passage carefully and answer the questions that follow.In December 2010 I appeared on John Stossels television special on scepticism on Fox Business News, during which I debunked numerous pseudoscientific beliefs. Stossel added his own scepticism of possible financial pseudoscience in the form of active investment fund managers who claim that they can consistently beat the market. In a dramatic visual demonstration, Stossel threw 30 darts into a page of stocks and compared their performance since January 1,2010, with stock picks of the 10 largest managed funds. Results: Dartboard, a 31 percent increase; managed funds, a 9.5 percent increase.Admitting that he got lucky because of his limited sample size, Stossel explained that had he thrown enough darts to fully represent the market he would have generated a 12 percent increase the market average a full 2.5 percentage points higher than the 10 largest managed funds average increase. As Princeton University economist Burton G. Malkiel elaborated on the show, over the past decade more than two thirds of actively managed funds were beaten by a simple low-cost indexed fund [for example, a mutual fund invested in a large number of stocks], and the active funds that win in one period arent the same ones who win in the next period.Stossel cited a study in the journal Economics and Portfolio Strategy that tracked 452 managed funds from 1990 to 2009,finding that only 13 beat the market average. 
Equating managed fund directors to snake-oil salesmen, Malkiel said that Wall Street is selling Main Street on the belief that experts can consistently time the market and make accurate predictions of when to buy and sell. They cant. No one can. Not even professional economists and not even for large-scale market indicators. As economics Nobel laureate Paul Samuelson long ago noted in a 1966 Newsweek column: Commentators quote economic studies alleging that market downturns predicted four out of the last five recessions. That is an understatement. Wall Street indexes predicted nine out of the last five recessions!Even in a given tech area, where you might expect a greater level of specific expertise, economic forecasters fumble. On December 22, 2010, for example, the Wall Street Journal ran a piece on how the great hedge fund financier T. Boone Pickens (chair of BP Capital Management) just abandoned his Pickens Plan of investing in wind energy. Pickens invested $2 billion based on his prediction that the price of natural gas would stay high. It didnt, plummeting as the drilling industrys ability to unlock methane from shale beds improved, a turn of events even an expert such as Pickens failed to see.Why are experts (along with us nonexperts) so bad at making predictions? The world is a messy, complex and contingent place with countless intervening variables and confounding factors, which our brains are not equipped to evaluate. We evolved the capacity to make snap decisions based on short-term predictions, not rational analysis about long-term investments, and so we deceive ourselves into thinking that experts can foresee the future. This self-deception among professional prognosticators was investigated by University of California, Berkeley, professor Philip E. Tetlock, as reported in his 2005 book Expert Political Judgment. After testing 284 experts in political science, economics, history and journalism in a staggering 82,361 predictions about the future, Tetlock concluded that they did little better than a dart-throwing chimpanzee.There was one significant factor in greater prediction success, however, and that was cognitive style: foxes who know a little about many things do better than hedgehogs who know a lot about one area of expertise. Low scorers, Tetlock wrote, were thinkers who know one big thing, aggressively extend the explanatoryreach of that one big thing into new domains, display bristly impatience with those who do not get it, and express considerable confidence that they are already pretty proficient forecasters. High scorers in the study were thinkers who know many small things (tricks of their trade), are sceptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible ad hocery that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess. Being deeply knowledgeable on one subject narrows focus and increases confidence but also blurs the value of dissenting views and transforms data collection into belief confirmation. One way to avoid being wrong is to be sceptical whenever you catch yourself making predictions based on reducing complex phenomena into one overarching scheme. This type of cognitive trap is why I dont make predictions and why I never will.Q. 
Match the person in the left column to a description of them in the right column.a)I-A, ll-B, lll-D, IV-Cb)I-A, ll-D, lll-B, IV-Cc)l-C, ll-A, lll-B, IV-Dd)l-C, ll-B, lll-D, IV-ACorrect answer is option 'D'. Can you explain this answer? covers all topics & solutions for CAT 2024 Exam. Find important definitions, questions, meanings, examples, exercises and tests below for Group QuestionRead the passage carefully and answer the questions that follow.In December 2010 I appeared on John Stossels television special on scepticism on Fox Business News, during which I debunked numerous pseudoscientific beliefs. Stossel added his own scepticism of possible financial pseudoscience in the form of active investment fund managers who claim that they can consistently beat the market. In a dramatic visual demonstration, Stossel threw 30 darts into a page of stocks and compared their performance since January 1,2010, with stock picks of the 10 largest managed funds. Results: Dartboard, a 31 percent increase; managed funds, a 9.5 percent increase.Admitting that he got lucky because of his limited sample size, Stossel explained that had he thrown enough darts to fully represent the market he would have generated a 12 percent increase the market average a full 2.5 percentage points higher than the 10 largest managed funds average increase. As Princeton University economist Burton G. Malkiel elaborated on the show, over the past decade more than two thirds of actively managed funds were beaten by a simple low-cost indexed fund [for example, a mutual fund invested in a large number of stocks], and the active funds that win in one period arent the same ones who win in the next period.Stossel cited a study in the journal Economics and Portfolio Strategy that tracked 452 managed funds from 1990 to 2009,finding that only 13 beat the market average. Equating managed fund directors to snake-oil salesmen, Malkiel said that Wall Street is selling Main Street on the belief that experts can consistently time the market and make accurate predictions of when to buy and sell. They cant. No one can. Not even professional economists and not even for large-scale market indicators. As economics Nobel laureate Paul Samuelson long ago noted in a 1966 Newsweek column: Commentators quote economic studies alleging that market downturns predicted four out of the last five recessions. That is an understatement. Wall Street indexes predicted nine out of the last five recessions!Even in a given tech area, where you might expect a greater level of specific expertise, economic forecasters fumble. On December 22, 2010, for example, the Wall Street Journal ran a piece on how the great hedge fund financier T. Boone Pickens (chair of BP Capital Management) just abandoned his Pickens Plan of investing in wind energy. Pickens invested $2 billion based on his prediction that the price of natural gas would stay high. It didnt, plummeting as the drilling industrys ability to unlock methane from shale beds improved, a turn of events even an expert such as Pickens failed to see.Why are experts (along with us nonexperts) so bad at making predictions? The world is a messy, complex and contingent place with countless intervening variables and confounding factors, which our brains are not equipped to evaluate. We evolved the capacity to make snap decisions based on short-term predictions, not rational analysis about long-term investments, and so we deceive ourselves into thinking that experts can foresee the future. 
This self-deception among professional prognosticators was investigated by University of California, Berkeley, professor Philip E. Tetlock, as reported in his 2005 book Expert Political Judgment. After testing 284 experts in political science, economics, history and journalism in a staggering 82,361 predictions about the future, Tetlock concluded that they did little better than a dart-throwing chimpanzee.There was one significant factor in greater prediction success, however, and that was cognitive style: foxes who know a little about many things do better than hedgehogs who know a lot about one area of expertise. Low scorers, Tetlock wrote, were thinkers who know one big thing, aggressively extend the explanatoryreach of that one big thing into new domains, display bristly impatience with those who do not get it, and express considerable confidence that they are already pretty proficient forecasters. High scorers in the study were thinkers who know many small things (tricks of their trade), are sceptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible ad hocery that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess. Being deeply knowledgeable on one subject narrows focus and increases confidence but also blurs the value of dissenting views and transforms data collection into belief confirmation. One way to avoid being wrong is to be sceptical whenever you catch yourself making predictions based on reducing complex phenomena into one overarching scheme. This type of cognitive trap is why I dont make predictions and why I never will.Q. Match the person in the left column to a description of them in the right column.a)I-A, ll-B, lll-D, IV-Cb)I-A, ll-D, lll-B, IV-Cc)l-C, ll-A, lll-B, IV-Dd)l-C, ll-B, lll-D, IV-ACorrect answer is option 'D'. Can you explain this answer?.
Solutions for Group QuestionRead the passage carefully and answer the questions that follow.In December 2010 I appeared on John Stossels television special on scepticism on Fox Business News, during which I debunked numerous pseudoscientific beliefs. Stossel added his own scepticism of possible financial pseudoscience in the form of active investment fund managers who claim that they can consistently beat the market. In a dramatic visual demonstration, Stossel threw 30 darts into a page of stocks and compared their performance since January 1,2010, with stock picks of the 10 largest managed funds. Results: Dartboard, a 31 percent increase; managed funds, a 9.5 percent increase.Admitting that he got lucky because of his limited sample size, Stossel explained that had he thrown enough darts to fully represent the market he would have generated a 12 percent increase the market average a full 2.5 percentage points higher than the 10 largest managed funds average increase. As Princeton University economist Burton G. Malkiel elaborated on the show, over the past decade more than two thirds of actively managed funds were beaten by a simple low-cost indexed fund [for example, a mutual fund invested in a large number of stocks], and the active funds that win in one period arent the same ones who win in the next period.Stossel cited a study in the journal Economics and Portfolio Strategy that tracked 452 managed funds from 1990 to 2009,finding that only 13 beat the market average. Equating managed fund directors to snake-oil salesmen, Malkiel said that Wall Street is selling Main Street on the belief that experts can consistently time the market and make accurate predictions of when to buy and sell. They cant. No one can. Not even professional economists and not even for large-scale market indicators. As economics Nobel laureate Paul Samuelson long ago noted in a 1966 Newsweek column: Commentators quote economic studies alleging that market downturns predicted four out of the last five recessions. That is an understatement. Wall Street indexes predicted nine out of the last five recessions!Even in a given tech area, where you might expect a greater level of specific expertise, economic forecasters fumble. On December 22, 2010, for example, the Wall Street Journal ran a piece on how the great hedge fund financier T. Boone Pickens (chair of BP Capital Management) just abandoned his Pickens Plan of investing in wind energy. Pickens invested $2 billion based on his prediction that the price of natural gas would stay high. It didnt, plummeting as the drilling industrys ability to unlock methane from shale beds improved, a turn of events even an expert such as Pickens failed to see.Why are experts (along with us nonexperts) so bad at making predictions? The world is a messy, complex and contingent place with countless intervening variables and confounding factors, which our brains are not equipped to evaluate. We evolved the capacity to make snap decisions based on short-term predictions, not rational analysis about long-term investments, and so we deceive ourselves into thinking that experts can foresee the future. This self-deception among professional prognosticators was investigated by University of California, Berkeley, professor Philip E. Tetlock, as reported in his 2005 book Expert Political Judgment. 
After testing 284 experts in political science, economics, history and journalism in a staggering 82,361 predictions about the future, Tetlock concluded that they did little better than a dart-throwing chimpanzee.There was one significant factor in greater prediction success, however, and that was cognitive style: foxes who know a little about many things do better than hedgehogs who know a lot about one area of expertise. Low scorers, Tetlock wrote, were thinkers who know one big thing, aggressively extend the explanatoryreach of that one big thing into new domains, display bristly impatience with those who do not get it, and express considerable confidence that they are already pretty proficient forecasters. High scorers in the study were thinkers who know many small things (tricks of their trade), are sceptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible ad hocery that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess. Being deeply knowledgeable on one subject narrows focus and increases confidence but also blurs the value of dissenting views and transforms data collection into belief confirmation. One way to avoid being wrong is to be sceptical whenever you catch yourself making predictions based on reducing complex phenomena into one overarching scheme. This type of cognitive trap is why I dont make predictions and why I never will.Q. Match the person in the left column to a description of them in the right column.a)I-A, ll-B, lll-D, IV-Cb)I-A, ll-D, lll-B, IV-Cc)l-C, ll-A, lll-B, IV-Dd)l-C, ll-B, lll-D, IV-ACorrect answer is option 'D'. Can you explain this answer? in English & in Hindi are available as part of our courses for CAT. Download more important topics, notes, lectures and mock test series for CAT Exam by signing up for free.
Here you can find the meaning of Group QuestionRead the passage carefully and answer the questions that follow.In December 2010 I appeared on John Stossels television special on scepticism on Fox Business News, during which I debunked numerous pseudoscientific beliefs. Stossel added his own scepticism of possible financial pseudoscience in the form of active investment fund managers who claim that they can consistently beat the market. In a dramatic visual demonstration, Stossel threw 30 darts into a page of stocks and compared their performance since January 1,2010, with stock picks of the 10 largest managed funds. Results: Dartboard, a 31 percent increase; managed funds, a 9.5 percent increase.Admitting that he got lucky because of his limited sample size, Stossel explained that had he thrown enough darts to fully represent the market he would have generated a 12 percent increase the market average a full 2.5 percentage points higher than the 10 largest managed funds average increase. As Princeton University economist Burton G. Malkiel elaborated on the show, over the past decade more than two thirds of actively managed funds were beaten by a simple low-cost indexed fund [for example, a mutual fund invested in a large number of stocks], and the active funds that win in one period arent the same ones who win in the next period.Stossel cited a study in the journal Economics and Portfolio Strategy that tracked 452 managed funds from 1990 to 2009,finding that only 13 beat the market average. Equating managed fund directors to snake-oil salesmen, Malkiel said that Wall Street is selling Main Street on the belief that experts can consistently time the market and make accurate predictions of when to buy and sell. They cant. No one can. Not even professional economists and not even for large-scale market indicators. As economics Nobel laureate Paul Samuelson long ago noted in a 1966 Newsweek column: Commentators quote economic studies alleging that market downturns predicted four out of the last five recessions. That is an understatement. Wall Street indexes predicted nine out of the last five recessions!Even in a given tech area, where you might expect a greater level of specific expertise, economic forecasters fumble. On December 22, 2010, for example, the Wall Street Journal ran a piece on how the great hedge fund financier T. Boone Pickens (chair of BP Capital Management) just abandoned his Pickens Plan of investing in wind energy. Pickens invested $2 billion based on his prediction that the price of natural gas would stay high. It didnt, plummeting as the drilling industrys ability to unlock methane from shale beds improved, a turn of events even an expert such as Pickens failed to see.Why are experts (along with us nonexperts) so bad at making predictions? The world is a messy, complex and contingent place with countless intervening variables and confounding factors, which our brains are not equipped to evaluate. We evolved the capacity to make snap decisions based on short-term predictions, not rational analysis about long-term investments, and so we deceive ourselves into thinking that experts can foresee the future. This self-deception among professional prognosticators was investigated by University of California, Berkeley, professor Philip E. Tetlock, as reported in his 2005 book Expert Political Judgment. 
After testing 284 experts in political science, economics, history and journalism in a staggering 82,361 predictions about the future, Tetlock concluded that they did little better than a dart-throwing chimpanzee.There was one significant factor in greater prediction success, however, and that was cognitive style: foxes who know a little about many things do better than hedgehogs who know a lot about one area of expertise. Low scorers, Tetlock wrote, were thinkers who know one big thing, aggressively extend the explanatoryreach of that one big thing into new domains, display bristly impatience with those who do not get it, and express considerable confidence that they are already pretty proficient forecasters. High scorers in the study were thinkers who know many small things (tricks of their trade), are sceptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible ad hocery that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess. Being deeply knowledgeable on one subject narrows focus and increases confidence but also blurs the value of dissenting views and transforms data collection into belief confirmation. One way to avoid being wrong is to be sceptical whenever you catch yourself making predictions based on reducing complex phenomena into one overarching scheme. This type of cognitive trap is why I dont make predictions and why I never will.Q. Match the person in the left column to a description of them in the right column.a)I-A, ll-B, lll-D, IV-Cb)I-A, ll-D, lll-B, IV-Cc)l-C, ll-A, lll-B, IV-Dd)l-C, ll-B, lll-D, IV-ACorrect answer is option 'D'. Can you explain this answer? defined & explained in the simplest way possible. Besides giving the explanation of Group QuestionRead the passage carefully and answer the questions that follow.In December 2010 I appeared on John Stossels television special on scepticism on Fox Business News, during which I debunked numerous pseudoscientific beliefs. Stossel added his own scepticism of possible financial pseudoscience in the form of active investment fund managers who claim that they can consistently beat the market. In a dramatic visual demonstration, Stossel threw 30 darts into a page of stocks and compared their performance since January 1,2010, with stock picks of the 10 largest managed funds. Results: Dartboard, a 31 percent increase; managed funds, a 9.5 percent increase.Admitting that he got lucky because of his limited sample size, Stossel explained that had he thrown enough darts to fully represent the market he would have generated a 12 percent increase the market average a full 2.5 percentage points higher than the 10 largest managed funds average increase. As Princeton University economist Burton G. Malkiel elaborated on the show, over the past decade more than two thirds of actively managed funds were beaten by a simple low-cost indexed fund [for example, a mutual fund invested in a large number of stocks], and the active funds that win in one period arent the same ones who win in the next period.Stossel cited a study in the journal Economics and Portfolio Strategy that tracked 452 managed funds from 1990 to 2009,finding that only 13 beat the market average. Equating managed fund directors to snake-oil salesmen, Malkiel said that Wall Street is selling Main Street on the belief that experts can consistently time the market and make accurate predictions of when to buy and sell. They cant. No one can. 