In December 2010 I appeared on John Stossel’s television special on scepticism on Fox Business News, during which I debunked numerous pseudoscientific beliefs. Stossel added his own scepticism of possible financial pseudoscience in the form of active investment fund managers who claim that they can consistently beat the market. In a dramatic visual demonstration, Stossel threw 30 darts into a page of stocks and compared their performance since January 1, 2010, with stock picks of the 10 largest managed funds. Results: Dartboard, a 31 percent increase; managed funds, a 9.5 percent increase. Admitting that he got lucky because of his limited sample size, Stossel explained that had he thrown enough darts to fully represent the market he would have generated a 12 percent increase — the market average — a full 2.5 percentage points higher than the 10 largest managed funds’ average increase. As Princeton University economist Burton G. Malkiel elaborated on the show, over the past decade ‘more than two thirds of actively managed funds were beaten by a simple low-cost indexed fund [for example, a mutual fund invested in a large number of stocks], and the active funds that win in one period aren’t the same ones who win in the next period.’
Stossel cited a study in the journal Economics and Portfolio Strategy that tracked 452 managed funds from 1990 to 2009, finding that only 13 beat the market average. Equating managed fund directors to ‘snake-oil salesmen’, Malkiel said that Wall Street is selling Main Street on the belief that experts can consistently time the market and make accurate predictions of when to buy and sell. They can’t. No one can. Not even professional economists and not even for large-scale market indicators. As economics Nobel laureate Paul Samuelson long ago noted in a 1966 Newsweek column: ‘Commentators quote economic studies alleging that market downturns predicted four out of the last five recessions. That is an understatement. Wall Street indexes predicted nine out of the last five recessions!’ 
Even in a given tech area, where you might expect a greater level of specific expertise, economic forecasters fumble. On December 22, 2010, for example, the Wall Street Journal ran a piece on how the great hedge fund financier T. Boone Pickens (chair of BP Capital Management) just abandoned his ‘Pickens Plan’ of investing in wind energy. Pickens invested $2 billion based on his prediction that the price of natural gas would stay high. It didn’t, plummeting as the drilling industry’s ability to unlock methane from shale beds improved, a turn of events even an expert such as Pickens failed to see. Why are experts (along with us nonexperts) so bad at making predictions? The world is a messy, complex and contingent place with countless intervening variables and confounding factors, which our brains are not equipped to evaluate. We evolved the capacity to make snap decisions based on short-term predictions, not rational analysis about long-term investments, and so we deceive ourselves into thinking that experts can foresee the future. This self-deception among professional prognosticators was investigated by University of California, Berkeley, professor Philip E. Tetlock, as reported in his 2005 book Expert Political Judgment. After testing 284 experts in political science, economics, history and journalism in a staggering 82,361 predictions about the future, Tetlock concluded that they did little better than ‘a dart-throwing chimpanzee’. There was one significant factor in greater prediction success, however, and that was cognitive style: ‘foxes’ who know a little about many things do better than ‘hedgehogs’ who know a lot about one area of expertise. 
Low scorers, Tetlock wrote, were ‘thinkers who “know one big thing”, aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who “do not get it”, and express considerable confidence that they are already pretty proficient forecasters.’ High scorers in the study were ‘thinkers who know many small things (tricks of their trade), are sceptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible “ad hocery” that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.’ Being deeply knowledgeable on one subject narrows focus and increases confidence but also blurs the value of dissenting views and transforms data collection into belief confirmation. One way to avoid being wrong is to be sceptical whenever you catch yourself making predictions based on reducing complex phenomena into one overarching scheme. This type of cognitive trap is why I don’t make predictions and why I never will.
 
 
Q. What does Paul Samuelson’s statement ‘Wall Street indexes predicted nine out of the last five recessions!’ imply?
I. Wall Street indexes are too pessimistic in their predictions.
II. Wall Street indexes need to be more optimistic about their predictions.
III. Wall Street indexes are much better at predicting recessions than anyone realizes.
IV. Wall Street indexes make plenty of extreme predictions, out of which only some come true.
  • a) I & IV
  • b) Only IV
  • c) Only II
  • d) I, II & III
Correct answer is option 'A'. Can you explain this answer?
Verified Answer
Samuelson’s quip is that Wall Street indexes signalled nine recessions when only five actually occurred; the indexes over-predict downturns. Their predictions are therefore far too pessimistic compared to what actually happened, so statement I is correct. Eliminate options (b) and (c), which exclude statement I.
Statement II is not quite accurate: the predictions of Wall Street indexes need to be more realistic, not necessarily more optimistic. Eliminate option (d).
Statement III directly contradicts the passage: predicting nine of the last five recessions shows the indexes are unreliable forecasters, not unusually skilled ones.
Statement IV correctly captures the point: the indexes make plenty of extreme predictions, out of which only some come true.
Therefore only statements I and IV are correct.
Hence, the correct answer is option (a).
Stossel added his own scepticism of possible financial pseudoscience in the form of active investment fund managers who claim that they can consistently beat the market. In a dramatic visual demonstration, Stossel threw 30 darts into a page of stocks and compared their performance since January 1,2010, with stock picks of the 10 largest managed funds. Results: Dartboard, a 31 percent increase; managed funds, a 9.5 percent increase.Admitting that he got lucky because of his limited sample size, Stossel explained that had he thrown enough darts to fully represent the market he would have generated a 12 percent increase the market average a full 2.5 percentage points higher than the 10 largest managed funds average increase. As Princeton University economist Burton G. Malkiel elaborated on the show, over the past decade more than two thirds of actively managed funds were beaten by a simple low-cost indexed fund [for example, a mutual fund invested in a large number of stocks], and the active funds that win in one period arent the same ones who win in the next period.Stossel cited a study in the journal Economics and Portfolio Strategy that tracked 452 managed funds from 1990 to 2009,finding that only 13 beat the market average. Equating managed fund directors to snake-oil salesmen, Malkiel said that Wall Street is selling Main Street on the belief that experts can consistently time the market and make accurate predictions of when to buy and sell. They cant. No one can. Not even professional economists and not even for large-scale market indicators. As economics Nobel laureate Paul Samuelson long ago noted in a 1966 Newsweek column: Commentators quote economic studies alleging that market downturns predicted four out of the last five recessions. That is an understatement. Wall Street indexes predicted nine out of the last five recessions!Even in a given tech area, where you might expect a greater level of specific expertise, economic forecasters fumble. 
On December 22, 2010, for example, the Wall Street Journal ran a piece on how the great hedge fund financier T. Boone Pickens (chair of BP Capital Management) just abandoned his Pickens Plan of investing in wind energy. Pickens invested $2 billion based on his prediction that the price of natural gas would stay high. It didnt, plummeting as the drilling industrys ability to unlock methane from shale beds improved, a turn of events even an expert such as Pickens failed to see.Why are experts (along with us nonexperts) so bad at making predictions? The world is a messy, complex and contingent place with countless intervening variables and confounding factors, which our brains are not equipped to evaluate. We evolved the capacity to make snap decisions based on short-term predictions, not rational analysis about long-term investments, and so we deceive ourselves into thinking that experts can foresee the future. This self-deception among professional prognosticators was investigated by University of California, Berkeley, professor Philip E. Tetlock, as reported in his 2005 book Expert Political Judgment. After testing 284 experts in political science, economics, history and journalism in a staggering 82,361 predictions about the future, Tetlock concluded that they did little better than a dart-throwing chimpanzee.There was one significant factor in greater prediction success, however, and that was cognitive style: foxes who know a little about many things do better than hedgehogs who know a lot about one area of expertise. Low scorers, Tetlock wrote, were thinkers who know one big thing, aggressively extend the explanatoryreach of that one big thing into new domains, display bristly impatience with those who do not get it, and express considerable confidence that they are already pretty proficient forecasters. 
High scorers in the study were thinkers who know many small things (tricks of their trade), are sceptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible ad hocery that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess. Being deeply knowledgeable on one subject narrows focus and increases confidence but also blurs the value of dissenting views and transforms data collection into belief confirmation. One way to avoid being wrong is to be sceptical whenever you catch yourself making predictions based on reducing complex phenomena into one overarching scheme. This type of cognitive trap is why I dont make predictions and why I never will.Q. What does Paul Samuelsons statement Wall Street indexes predicted nine out of the last five recessions! imply?I. Wall Street indexes are too pessimistic in their predictions.II. Wall Street indexes need to be more optimistic about their predictions.III. Wall Street indexes are much better at predicting recessions than anyone realizes.IV. Wall Street indexes make plenty of extreme predictions, out of which only some come true.a)I IVb)Only IVc)Only IId)I, II IIICorrect answer is option 'A'. Can you explain this answer? covers all topics & solutions for CAT 2024 Exam. Find important definitions, questions, meanings, examples, exercises and tests below for In December 2010 I appeared on John Stossels television special on scepticism on Fox Business News, during which I debunked numerous pseudoscientific beliefs. Stossel added his own scepticism of possible financial pseudoscience in the form of active investment fund managers who claim that they can consistently beat the market. In a dramatic visual demonstration, Stossel threw 30 darts into a page of stocks and compared their performance since January 1,2010, with stock picks of the 10 largest managed funds. 
Results: Dartboard, a 31 percent increase; managed funds, a 9.5 percent increase.Admitting that he got lucky because of his limited sample size, Stossel explained that had he thrown enough darts to fully represent the market he would have generated a 12 percent increase the market average a full 2.5 percentage points higher than the 10 largest managed funds average increase. As Princeton University economist Burton G. Malkiel elaborated on the show, over the past decade more than two thirds of actively managed funds were beaten by a simple low-cost indexed fund [for example, a mutual fund invested in a large number of stocks], and the active funds that win in one period arent the same ones who win in the next period.Stossel cited a study in the journal Economics and Portfolio Strategy that tracked 452 managed funds from 1990 to 2009,finding that only 13 beat the market average. Equating managed fund directors to snake-oil salesmen, Malkiel said that Wall Street is selling Main Street on the belief that experts can consistently time the market and make accurate predictions of when to buy and sell. They cant. No one can. Not even professional economists and not even for large-scale market indicators. As economics Nobel laureate Paul Samuelson long ago noted in a 1966 Newsweek column: Commentators quote economic studies alleging that market downturns predicted four out of the last five recessions. That is an understatement. Wall Street indexes predicted nine out of the last five recessions!Even in a given tech area, where you might expect a greater level of specific expertise, economic forecasters fumble. On December 22, 2010, for example, the Wall Street Journal ran a piece on how the great hedge fund financier T. Boone Pickens (chair of BP Capital Management) just abandoned his Pickens Plan of investing in wind energy. Pickens invested $2 billion based on his prediction that the price of natural gas would stay high. 
It didnt, plummeting as the drilling industrys ability to unlock methane from shale beds improved, a turn of events even an expert such as Pickens failed to see.Why are experts (along with us nonexperts) so bad at making predictions? The world is a messy, complex and contingent place with countless intervening variables and confounding factors, which our brains are not equipped to evaluate. We evolved the capacity to make snap decisions based on short-term predictions, not rational analysis about long-term investments, and so we deceive ourselves into thinking that experts can foresee the future. This self-deception among professional prognosticators was investigated by University of California, Berkeley, professor Philip E. Tetlock, as reported in his 2005 book Expert Political Judgment. After testing 284 experts in political science, economics, history and journalism in a staggering 82,361 predictions about the future, Tetlock concluded that they did little better than a dart-throwing chimpanzee.There was one significant factor in greater prediction success, however, and that was cognitive style: foxes who know a little about many things do better than hedgehogs who know a lot about one area of expertise. Low scorers, Tetlock wrote, were thinkers who know one big thing, aggressively extend the explanatoryreach of that one big thing into new domains, display bristly impatience with those who do not get it, and express considerable confidence that they are already pretty proficient forecasters. High scorers in the study were thinkers who know many small things (tricks of their trade), are sceptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible ad hocery that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess. 
Being deeply knowledgeable on one subject narrows focus and increases confidence but also blurs the value of dissenting views and transforms data collection into belief confirmation. One way to avoid being wrong is to be sceptical whenever you catch yourself making predictions based on reducing complex phenomena into one overarching scheme. This type of cognitive trap is why I dont make predictions and why I never will.Q. What does Paul Samuelsons statement Wall Street indexes predicted nine out of the last five recessions! imply?I. Wall Street indexes are too pessimistic in their predictions.II. Wall Street indexes need to be more optimistic about their predictions.III. Wall Street indexes are much better at predicting recessions than anyone realizes.IV. Wall Street indexes make plenty of extreme predictions, out of which only some come true.a)I IVb)Only IVc)Only IId)I, II IIICorrect answer is option 'A'. Can you explain this answer?.
Solutions for In December 2010 I appeared on John Stossels television special on scepticism on Fox Business News, during which I debunked numerous pseudoscientific beliefs. Stossel added his own scepticism of possible financial pseudoscience in the form of active investment fund managers who claim that they can consistently beat the market. In a dramatic visual demonstration, Stossel threw 30 darts into a page of stocks and compared their performance since January 1,2010, with stock picks of the 10 largest managed funds. Results: Dartboard, a 31 percent increase; managed funds, a 9.5 percent increase.Admitting that he got lucky because of his limited sample size, Stossel explained that had he thrown enough darts to fully represent the market he would have generated a 12 percent increase the market average a full 2.5 percentage points higher than the 10 largest managed funds average increase. As Princeton University economist Burton G. Malkiel elaborated on the show, over the past decade more than two thirds of actively managed funds were beaten by a simple low-cost indexed fund [for example, a mutual fund invested in a large number of stocks], and the active funds that win in one period arent the same ones who win in the next period.Stossel cited a study in the journal Economics and Portfolio Strategy that tracked 452 managed funds from 1990 to 2009,finding that only 13 beat the market average. Equating managed fund directors to snake-oil salesmen, Malkiel said that Wall Street is selling Main Street on the belief that experts can consistently time the market and make accurate predictions of when to buy and sell. They cant. No one can. Not even professional economists and not even for large-scale market indicators. As economics Nobel laureate Paul Samuelson long ago noted in a 1966 Newsweek column: Commentators quote economic studies alleging that market downturns predicted four out of the last five recessions. That is an understatement. 
Wall Street indexes predicted nine out of the last five recessions!Even in a given tech area, where you might expect a greater level of specific expertise, economic forecasters fumble. On December 22, 2010, for example, the Wall Street Journal ran a piece on how the great hedge fund financier T. Boone Pickens (chair of BP Capital Management) just abandoned his Pickens Plan of investing in wind energy. Pickens invested $2 billion based on his prediction that the price of natural gas would stay high. It didnt, plummeting as the drilling industrys ability to unlock methane from shale beds improved, a turn of events even an expert such as Pickens failed to see.Why are experts (along with us nonexperts) so bad at making predictions? The world is a messy, complex and contingent place with countless intervening variables and confounding factors, which our brains are not equipped to evaluate. We evolved the capacity to make snap decisions based on short-term predictions, not rational analysis about long-term investments, and so we deceive ourselves into thinking that experts can foresee the future. This self-deception among professional prognosticators was investigated by University of California, Berkeley, professor Philip E. Tetlock, as reported in his 2005 book Expert Political Judgment. After testing 284 experts in political science, economics, history and journalism in a staggering 82,361 predictions about the future, Tetlock concluded that they did little better than a dart-throwing chimpanzee.There was one significant factor in greater prediction success, however, and that was cognitive style: foxes who know a little about many things do better than hedgehogs who know a lot about one area of expertise. 
Low scorers, Tetlock wrote, were thinkers who know one big thing, aggressively extend the explanatoryreach of that one big thing into new domains, display bristly impatience with those who do not get it, and express considerable confidence that they are already pretty proficient forecasters. High scorers in the study were thinkers who know many small things (tricks of their trade), are sceptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible ad hocery that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess. Being deeply knowledgeable on one subject narrows focus and increases confidence but also blurs the value of dissenting views and transforms data collection into belief confirmation. One way to avoid being wrong is to be sceptical whenever you catch yourself making predictions based on reducing complex phenomena into one overarching scheme. This type of cognitive trap is why I dont make predictions and why I never will.Q. What does Paul Samuelsons statement Wall Street indexes predicted nine out of the last five recessions! imply?I. Wall Street indexes are too pessimistic in their predictions.II. Wall Street indexes need to be more optimistic about their predictions.III. Wall Street indexes are much better at predicting recessions than anyone realizes.IV. Wall Street indexes make plenty of extreme predictions, out of which only some come true.a)I IVb)Only IVc)Only IId)I, II IIICorrect answer is option 'A'. Can you explain this answer? in English & in Hindi are available as part of our courses for CAT. Download more important topics, notes, lectures and mock test series for CAT Exam by signing up for free.
Here you can find the meaning of In December 2010 I appeared on John Stossels television special on scepticism on Fox Business News, during which I debunked numerous pseudoscientific beliefs. Stossel added his own scepticism of possible financial pseudoscience in the form of active investment fund managers who claim that they can consistently beat the market. In a dramatic visual demonstration, Stossel threw 30 darts into a page of stocks and compared their performance since January 1,2010, with stock picks of the 10 largest managed funds. Results: Dartboard, a 31 percent increase; managed funds, a 9.5 percent increase.Admitting that he got lucky because of his limited sample size, Stossel explained that had he thrown enough darts to fully represent the market he would have generated a 12 percent increase the market average a full 2.5 percentage points higher than the 10 largest managed funds average increase. As Princeton University economist Burton G. Malkiel elaborated on the show, over the past decade more than two thirds of actively managed funds were beaten by a simple low-cost indexed fund [for example, a mutual fund invested in a large number of stocks], and the active funds that win in one period arent the same ones who win in the next period.Stossel cited a study in the journal Economics and Portfolio Strategy that tracked 452 managed funds from 1990 to 2009,finding that only 13 beat the market average. Equating managed fund directors to snake-oil salesmen, Malkiel said that Wall Street is selling Main Street on the belief that experts can consistently time the market and make accurate predictions of when to buy and sell. They cant. No one can. Not even professional economists and not even for large-scale market indicators. As economics Nobel laureate Paul Samuelson long ago noted in a 1966 Newsweek column: Commentators quote economic studies alleging that market downturns predicted four out of the last five recessions. That is an understatement. 
Wall Street indexes predicted nine out of the last five recessions!Even in a given tech area, where you might expect a greater level of specific expertise, economic forecasters fumble. On December 22, 2010, for example, the Wall Street Journal ran a piece on how the great hedge fund financier T. Boone Pickens (chair of BP Capital Management) just abandoned his Pickens Plan of investing in wind energy. Pickens invested $2 billion based on his prediction that the price of natural gas would stay high. It didnt, plummeting as the drilling industrys ability to unlock methane from shale beds improved, a turn of events even an expert such as Pickens failed to see.Why are experts (along with us nonexperts) so bad at making predictions? The world is a messy, complex and contingent place with countless intervening variables and confounding factors, which our brains are not equipped to evaluate. We evolved the capacity to make snap decisions based on short-term predictions, not rational analysis about long-term investments, and so we deceive ourselves into thinking that experts can foresee the future. This self-deception among professional prognosticators was investigated by University of California, Berkeley, professor Philip E. Tetlock, as reported in his 2005 book Expert Political Judgment. After testing 284 experts in political science, economics, history and journalism in a staggering 82,361 predictions about the future, Tetlock concluded that they did little better than a dart-throwing chimpanzee.There was one significant factor in greater prediction success, however, and that was cognitive style: foxes who know a little about many things do better than hedgehogs who know a lot about one area of expertise. 
Low scorers, Tetlock wrote, were thinkers who know one big thing, aggressively extend the explanatoryreach of that one big thing into new domains, display bristly impatience with those who do not get it, and express considerable confidence that they are already pretty proficient forecasters. High scorers in the study were thinkers who know many small things (tricks of their trade), are sceptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible ad hocery that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess. Being deeply knowledgeable on one subject narrows focus and increases confidence but also blurs the value of dissenting views and transforms data collection into belief confirmation. One way to avoid being wrong is to be sceptical whenever you catch yourself making predictions based on reducing complex phenomena into one overarching scheme. This type of cognitive trap is why I dont make predictions and why I never will.Q. What does Paul Samuelsons statement Wall Street indexes predicted nine out of the last five recessions! imply?I. Wall Street indexes are too pessimistic in their predictions.II. Wall Street indexes need to be more optimistic about their predictions.III. Wall Street indexes are much better at predicting recessions than anyone realizes.IV. Wall Street indexes make plenty of extreme predictions, out of which only some come true.a)I IVb)Only IVc)Only IId)I, II IIICorrect answer is option 'A'. Can you explain this answer? defined & explained in the simplest way possible. Besides giving the explanation of In December 2010 I appeared on John Stossels television special on scepticism on Fox Business News, during which I debunked numerous pseudoscientific beliefs. 
Stossel added his own scepticism of possible financial pseudoscience in the form of active investment fund managers who claim that they can consistently beat the market. In a dramatic visual demonstration, Stossel threw 30 darts into a page of stocks and compared their performance since January 1,2010, with stock picks of the 10 largest managed funds. Results: Dartboard, a 31 percent increase; managed funds, a 9.5 percent increase.Admitting that he got lucky because of his limited sample size, Stossel explained that had he thrown enough darts to fully represent the market he would have generated a 12 percent increase the market average a full 2.5 percentage points higher than the 10 largest managed funds average increase. As Princeton University economist Burton G. Malkiel elaborated on the show, over the past decade more than two thirds of actively managed funds were beaten by a simple low-cost indexed fund [for example, a mutual fund invested in a large number of stocks], and the active funds that win in one period arent the same ones who win in the next period.Stossel cited a study in the journal Economics and Portfolio Strategy that tracked 452 managed funds from 1990 to 2009,finding that only 13 beat the market average. Equating managed fund directors to snake-oil salesmen, Malkiel said that Wall Street is selling Main Street on the belief that experts can consistently time the market and make accurate predictions of when to buy and sell. They cant. No one can. Not even professional economists and not even for large-scale market indicators. As economics Nobel laureate Paul Samuelson long ago noted in a 1966 Newsweek column: Commentators quote economic studies alleging that market downturns predicted four out of the last five recessions. That is an understatement. Wall Street indexes predicted nine out of the last five recessions!Even in a given tech area, where you might expect a greater level of specific expertise, economic forecasters fumble. 
On December 22, 2010, for example, the Wall Street Journal ran a piece on how the great hedge fund financier T. Boone Pickens (chair of BP Capital Management) just abandoned his Pickens Plan of investing in wind energy. Pickens invested $2 billion based on his prediction that the price of natural gas would stay high. It didnt, plummeting as the drilling industrys ability to unlock methane from shale beds improved, a turn of events even an expert such as Pickens failed to see.Why are experts (along with us nonexperts) so bad at making predictions? The world is a messy, complex and contingent place with countless intervening variables and confounding factors, which our brains are not equipped to evaluate. We evolved the capacity to make snap decisions based on short-term predictions, not rational analysis about long-term investments, and so we deceive ourselves into thinking that experts can foresee the future. This self-deception among professional prognosticators was investigated by University of California, Berkeley, professor Philip E. Tetlock, as reported in his 2005 book Expert Political Judgment. After testing 284 experts in political science, economics, history and journalism in a staggering 82,361 predictions about the future, Tetlock concluded that they did little better than a dart-throwing chimpanzee.There was one significant factor in greater prediction success, however, and that was cognitive style: foxes who know a little about many things do better than hedgehogs who know a lot about one area of expertise. Low scorers, Tetlock wrote, were thinkers who know one big thing, aggressively extend the explanatoryreach of that one big thing into new domains, display bristly impatience with those who do not get it, and express considerable confidence that they are already pretty proficient forecasters. 
High scorers in the study were thinkers who know many small things (tricks of their trade), are sceptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible ad hocery that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess. Being deeply knowledgeable on one subject narrows focus and increases confidence but also blurs the value of dissenting views and transforms data collection into belief confirmation. One way to avoid being wrong is to be sceptical whenever you catch yourself making predictions based on reducing complex phenomena into one overarching scheme. This type of cognitive trap is why I dont make predictions and why I never will.Q. What does Paul Samuelsons statement Wall Street indexes predicted nine out of the last five recessions! imply?I. Wall Street indexes are too pessimistic in their predictions.II. Wall Street indexes need to be more optimistic about their predictions.III. Wall Street indexes are much better at predicting recessions than anyone realizes.IV. Wall Street indexes make plenty of extreme predictions, out of which only some come true.a)I IVb)Only IVc)Only IId)I, II IIICorrect answer is option 'A'. Can you explain this answer?, a detailed solution for In December 2010 I appeared on John Stossels television special on scepticism on Fox Business News, during which I debunked numerous pseudoscientific beliefs. Stossel added his own scepticism of possible financial pseudoscience in the form of active investment fund managers who claim that they can consistently beat the market. In a dramatic visual demonstration, Stossel threw 30 darts into a page of stocks and compared their performance since January 1,2010, with stock picks of the 10 largest managed funds. 