Directions: Kindly read the passage carefully and answer the questions given beside.


Behold GPT4 — while ChatGPT continues to fascinate society, OpenAI has already unveiled its successor, even though no other generative AI could possibly capture the same level of public interest. Generative AIs are often termed "humanlike". But would they ever reach the limits of human reasoning? It's important to note that ChatGPT and its ilk are "a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question," as summarised by Noam Chomsky, Ian Roberts, and Jeffrey Watumull in a fascinating recent piece in the New York Times. In contrast, the human mind "seeks not to infer brute correlations among data points but to create explanations," these authors wrote. GPT4 passed a simulated bar exam with a score around the top 10 per cent of test takers, whereas GPT3.5's score was around the bottom 10 per cent, indicating an increase in capability.


However, when asked "Son of an actor, this American guitarist and rock singer released many songs and albums and toured with his band. His name is 'Elvis' what?" GPT4 chose "Elvis Presley," although he was not the son of an actor. Thus, GPT4 can still miss subtle details. Yet there is a more serious issue. A generative AI makes up information when it doesn't know the exact answer — an issue widely known as "hallucination." As OpenAI acknowledged, like earlier GPT models, GPT4 also "hallucinates" facts and makes reasoning errors, although it scores "40 per cent higher" than GPT3.5 on tests intended to measure hallucination. Yet "ChatGPT and similar programs," according to Noam Chomsky and his co-authors, "are incapable of distinguishing the possible from the impossible." A tree, an apple, gravity, and the ground are all physical concepts that an AI would not understand, although, in most cases, it would continue to explain how an apple would fall to the ground due to gravity with spectacular accuracy. But the AI's lack of comprehension of the real world would remain. And when the exact answer is unknown, it would continue to assign possibilities to the impossible, without explanations.


Q. What is the key difference highlighted in the passage between generative AI like ChatGPT and the human mind?

  • a)
    Generative AI lacks comprehension of the real world, whereas the human mind understands physical concepts.

  • b)
    Generative AI gorges on data, while the human mind is data-independent.

  • c)
    Generative AI is capable of distinguishing the possible from the impossible, while the human mind cannot.

  • d)
    Generative AI seeks explanations, while the human mind relies on brute correlations.

Correct answer is option 'A'. Can you explain this answer?
Verified Answer
The passage describes generative AI as "a lumbering statistical engine for pattern matching" that merely infers correlations among data points, and stresses that "the AI's lack of comprehension of the real world would remain": it can describe an apple falling to the ground due to gravity with spectacular accuracy, yet it does not understand the physical concepts involved. The human mind, by contrast, grasps such physical concepts and "seeks not to infer brute correlations among data points but to create explanations." Option A captures exactly this contrast, which is why it is the correct answer.
View all questions of this test
Explore Courses for CLAT exam

Similar CLAT Doubts

Directions: Kindly read the passage carefully and answer the questions given beside.Behold GPT4— while ChatGPT continues to fascinate society, OpenAI has already unveiled its successor, even though no other generative AI could possibly capture the same level of public interest. Well, generative AIs are often termed “humanlike”. But would they ever reach the limits of human reasoning? It’s important to note that ChatGPT or its ilk is “a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question,” as summarised by Noam Chomsky, Ian Roberts, and Jeffrey Watumull in a fascinating recent piece in the New York Times. In contrast, the human mind “seeks not to infer brute correlations among data points but to create explanations,” these authors wrote. GPT4 passed a simulated bar exam with a score around the top 10 per cent of test takers, whereas GPT3.5’ s score was around the bottom 10 per cent, indicating an increase in capacity.However, when asked “Son of an actor, this American guitarist and rock singer released many songs and albums and toured with his band. His name is “Elvis” what?” GPT4 chose “Elvis Presley,” although he was not the son of an actor. Thus, GPT4 can still miss subtle details. Yet there is a more serious issue. A generative AI makes up information when it doesn’t know the exact answer —an issue widely known as “hallucination.” As OpenAI acknowledged, like earlier GPT models, GPT4 also “hallucinates” facts and makes reasoning errors, although it scores “40 per cent higher” than GPT3.5 on tests intended to measure hallucination. 
Yet “ChatGPT and similar programs,” according to Noam Chomsky and his co-authors, “are incapable of distinguishing the possible from the impossible.” A tree, an apple, gravity, and the ground are all physical concepts that an AI would not understand, although, in most cases, it would continue to explain how an apple would fall on the ground due to gravity with spectacular accuracy. But the AI’s lack of comprehension of the real world would remain. And when the exact answer is unknown, it would continue to assign possibilities to impossible without explanations.Q.What is the term used in the passage to describe the issue of generative AI making up information when it lacks the exact answer?

Directions: Kindly read the passage carefully and answer the questions given beside.Behold GPT4— while ChatGPT continues to fascinate society, OpenAI has already unveiled its successor, even though no other generative AI could possibly capture the same level of public interest. Well, generative AIs are often termed “humanlike”. But would they ever reach the limits of human reasoning? It’s important to note that ChatGPT or its ilk is “a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question,” as summarised by Noam Chomsky, Ian Roberts, and Jeffrey Watumull in a fascinating recent piece in the New York Times. In contrast, the human mind “seeks not to infer brute correlations among data points but to create explanations,” these authors wrote. GPT4 passed a simulated bar exam with a score around the top 10 per cent of test takers, whereas GPT3.5’ s score was around the bottom 10 per cent, indicating an increase in capacity.However, when asked “Son of an actor, this American guitarist and rock singer released many songs and albums and toured with his band. His name is “Elvis” what?” GPT4 chose “Elvis Presley,” although he was not the son of an actor. Thus, GPT4 can still miss subtle details. Yet there is a more serious issue. A generative AI makes up information when it doesn’t know the exact answer —an issue widely known as “hallucination.” As OpenAI acknowledged, like earlier GPT models, GPT4 also “hallucinates” facts and makes reasoning errors, although it scores “40 per cent higher” than GPT3.5 on tests intended to measure hallucination. 
Yet “ChatGPT and similar programs,” according to Noam Chomsky and his co-authors, “are incapable of distinguishing the possible from the impossible.” A tree, an apple, gravity, and the ground are all physical concepts that an AI would not understand, although, in most cases, it would continue to explain how an apple would fall on the ground due to gravity with spectacular accuracy. But the AI’s lack of comprehension of the real world would remain. And when the exact answer is unknown, it would continue to assign possibilities to impossible without explanations.Q.Which of the following statements most accurately summarizes the primary conclusion drawn in the passage?

Directions: Kindly read the passage carefully and answer the questions given beside.Behold GPT4— while ChatGPT continues to fascinate society, OpenAI has already unveiled its successor, even though no other generative AI could possibly capture the same level of public interest. Well, generative AIs are often termed “humanlike”. But would they ever reach the limits of human reasoning? It’s important to note that ChatGPT or its ilk is “a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question,” as summarised by Noam Chomsky, Ian Roberts, and Jeffrey Watumull in a fascinating recent piece in the New York Times. In contrast, the human mind “seeks not to infer brute correlations among data points but to create explanations,” these authors wrote. GPT4 passed a simulated bar exam with a score around the top 10 per cent of test takers, whereas GPT3.5’ s score was around the bottom 10 per cent, indicating an increase in capacity.However, when asked “Son of an actor, this American guitarist and rock singer released many songs and albums and toured with his band. His name is “Elvis” what?” GPT4 chose “Elvis Presley,” although he was not the son of an actor. Thus, GPT4 can still miss subtle details. Yet there is a more serious issue. A generative AI makes up information when it doesn’t know the exact answer —an issue widely known as “hallucination.” As OpenAI acknowledged, like earlier GPT models, GPT4 also “hallucinates” facts and makes reasoning errors, although it scores “40 per cent higher” than GPT3.5 on tests intended to measure hallucination. 
Yet “ChatGPT and similar programs,” according to Noam Chomsky and his co-authors, “are incapable of distinguishing the possible from the impossible.” A tree, an apple, gravity, and the ground are all physical concepts that an AI would not understand, although, in most cases, it would continue to explain how an apple would fall on the ground due to gravity with spectacular accuracy. But the AI’s lack of comprehension of the real world would remain. And when the exact answer is unknown, it would continue to assign possibilities to impossible without explanations.Q.What is the issue referred to as "hallucination" in the passage with regard to generative AI?

Directions: Kindly read the passage carefully and answer the questions given beside.Behold GPT4— while ChatGPT continues to fascinate society, OpenAI has already unveiled its successor, even though no other generative AI could possibly capture the same level of public interest. Well, generative AIs are often termed “humanlike”. But would they ever reach the limits of human reasoning? It’s important to note that ChatGPT or its ilk is “a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question,” as summarised by Noam Chomsky, Ian Roberts, and Jeffrey Watumull in a fascinating recent piece in the New York Times. In contrast, the human mind “seeks not to infer brute correlations among data points but to create explanations,” these authors wrote. GPT4 passed a simulated bar exam with a score around the top 10 per cent of test takers, whereas GPT3.5’ s score was around the bottom 10 per cent, indicating an increase in capacity.However, when asked “Son of an actor, this American guitarist and rock singer released many songs and albums and toured with his band. His name is “Elvis” what?” GPT4 chose “Elvis Presley,” although he was not the son of an actor. Thus, GPT4 can still miss subtle details. Yet there is a more serious issue. A generative AI makes up information when it doesn’t know the exact answer —an issue widely known as “hallucination.” As OpenAI acknowledged, like earlier GPT models, GPT4 also “hallucinates” facts and makes reasoning errors, although it scores “40 per cent higher” than GPT3.5 on tests intended to measure hallucination. 
Yet “ChatGPT and similar programs,” according to Noam Chomsky and his co-authors, “are incapable of distinguishing the possible from the impossible.” A tree, an apple, gravity, and the ground are all physical concepts that an AI would not understand, although, in most cases, it would continue to explain how an apple would fall on the ground due to gravity with spectacular accuracy. But the AI’s lack of comprehension of the real world would remain. And when the exact answer is unknown, it would continue to assign possibilities to impossible without explanations.Q.What assumption underlies the statement that ChatGPT and similar programs are incapable of distinguishing the possible from the impossible?

Read the passage and answer the question based on it.The humanities transmit, through time and across cultures, diverse expressions of the human condition, allowing us to contextualize, illuminate, and pass on an essential legacy of culture, history and heritage.I believe that social media poses a grave threat to the humanities because it lacks the depth, nuance and permanence that make genuine, meaningful interactions about the human condition possible.Everything that social media communication represents- immediacy, impermanence, collectivism- is contrary and harmful to the thoughtfulness, permanence and individualistic experiences necessary to humanities discourse. Social media is creating a hive mind, a group think that devalues the human condition in favor of the immediate, the marketable and the shallow. In social media, there is no difference between us and others; we look the same, we talk the same, we fill the same space. The real purpose of social media is to gauge measure and ultimately control the behavior of the crowd for marketing purposes. And as social media, and its values of pliable, identifiable collectives based on mutual interests, migrates from the Web to become more ubiquitous in our everyday lives--try attending a movie or buying a meal, the reductionist conversation that it engenders comes with it.The first negative impact that social media has on the humanities is a multiple-choice format and physical structure that allows only for a very limited, narrow type of communication. There is no room for individual creativity or representation. Humanities also require background and context to impart ideas but social media is an equivalency and framework vacuum that decontextualizes and trivializes information in a way that renders it nearly meaningless. The brevity of communication through social media precludes explanation and circumstance.Within social media, all information is equally important. 
There are no little or big facts; all data is expressed in compact bites of equal weight. The inability to separate the trivial from the significant leaves us unable to glean consequential substance from what we are saying to each other: the very purpose of the humanities.Lastly, social media creates and archives no history. The humanities are about expanding, describing, understanding and transmitting through the generations, the human condition. The purpose of social media is to understand ever larger groups of people at the expense of the individual. Humanities is exactly the opposite: understanding the individual for the sake of the masses.As human beings, our only real method of connection is through authentic communication. Studies show that only 7% of communication is based on the written or verbal word. A whopping 93% is based on nonverbal body language. This is where social media gets dicey. Every relevant metric shows that we are interacting at breakneck speed and frequency through social media. But are we really communicating? With 93% of our communication context stripped away, we are now attempting to forge relationships and make decisions based on phrases, Abbreviations, Snippets, Emoticons, and which may or may not be accurate representations of the truth. In an ironic twist, social media has the potential to make us less social; a surrogate for the real thing. For it to be a truly effective communication vehicle, all parties bear a responsibility to be genuine, accurate, and not allow it to replace human contact altogether. In the workplace, the use of electronic communication has overtaken face-to-face and voice-to-voice communication by a wide margin. With these two trends at play, leaders must consider the impact on business relationships and the ability to effectively collaborate, build trust, and create employee engagement and loyalty.Q.What does the author mean by ‘reductionist conversation’?

Top Courses for CLAT

Directions: Kindly read the passage carefully and answer the questions given beside.Behold GPT4— while ChatGPT continues to fascinate society, OpenAI has already unveiled its successor, even though no other generative AI could possibly capture the same level of public interest. Well, generative AIs are often termed “humanlike”. But would they ever reach the limits of human reasoning? It’s important to note that ChatGPT or its ilk is “a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question,” as summarised by Noam Chomsky, Ian Roberts, and Jeffrey Watumull in a fascinating recent piece in the New York Times. In contrast, the human mind “seeks not to infer brute correlations among data points but to create explanations,” these authors wrote. GPT4 passed a simulated bar exam with a score around the top 10 per cent of test takers, whereas GPT3.5’ s score was around the bottom 10 per cent, indicating an increase in capacity.However, when asked “Son of an actor, this American guitarist and rock singer released many songs and albums and toured with his band. His name is “Elvis” what?” GPT4 chose “Elvis Presley,” although he was not the son of an actor. Thus, GPT4 can still miss subtle details. Yet there is a more serious issue. A generative AI makes up information when it doesn’t know the exact answer —an issue widely known as “hallucination.” As OpenAI acknowledged, like earlier GPT models, GPT4 also “hallucinates” facts and makes reasoning errors, although it scores “40 per cent higher” than GPT3.5 on tests intended to measure hallucination. 
Yet “ChatGPT and similar programs,” according to Noam Chomsky and his co-authors, “are incapable of distinguishing the possible from the impossible.” A tree, an apple, gravity, and the ground are all physical concepts that an AI would not understand, although, in most cases, it would continue to explain how an apple would fall on the ground due to gravity with spectacular accuracy. But the AI’s lack of comprehension of the real world would remain. And when the exact answer is unknown, it would continue to assign possibilities to impossible without explanations.Q.What is the key difference highlighted in the passage between generative AI like ChatGPT and the human mind?a)Generative AI lacks comprehension of the real world, whereas the human mind understands physical concepts.b)Generative AI gorges on data, while the human mind is data-independent.c)Generative AI is capable of distinguishing the possible from the impossible, while the human mind cannot.d)Generative AI seeks explanations, while the human mind relies on brute correlations.Correct answer is option 'A'. Can you explain this answer?
Question Description
Directions: Kindly read the passage carefully and answer the questions given beside.Behold GPT4— while ChatGPT continues to fascinate society, OpenAI has already unveiled its successor, even though no other generative AI could possibly capture the same level of public interest. Well, generative AIs are often termed “humanlike”. But would they ever reach the limits of human reasoning? It’s important to note that ChatGPT or its ilk is “a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question,” as summarised by Noam Chomsky, Ian Roberts, and Jeffrey Watumull in a fascinating recent piece in the New York Times. In contrast, the human mind “seeks not to infer brute correlations among data points but to create explanations,” these authors wrote. GPT4 passed a simulated bar exam with a score around the top 10 per cent of test takers, whereas GPT3.5’ s score was around the bottom 10 per cent, indicating an increase in capacity.However, when asked “Son of an actor, this American guitarist and rock singer released many songs and albums and toured with his band. His name is “Elvis” what?” GPT4 chose “Elvis Presley,” although he was not the son of an actor. Thus, GPT4 can still miss subtle details. Yet there is a more serious issue. A generative AI makes up information when it doesn’t know the exact answer —an issue widely known as “hallucination.” As OpenAI acknowledged, like earlier GPT models, GPT4 also “hallucinates” facts and makes reasoning errors, although it scores “40 per cent higher” than GPT3.5 on tests intended to measure hallucination. 
Yet “ChatGPT and similar programs,” according to Noam Chomsky and his co-authors, “are incapable of distinguishing the possible from the impossible.” A tree, an apple, gravity, and the ground are all physical concepts that an AI would not understand, although, in most cases, it would continue to explain how an apple would fall on the ground due to gravity with spectacular accuracy. But the AI’s lack of comprehension of the real world would remain. And when the exact answer is unknown, it would continue to assign possibilities to impossible without explanations.Q.What is the key difference highlighted in the passage between generative AI like ChatGPT and the human mind?a)Generative AI lacks comprehension of the real world, whereas the human mind understands physical concepts.b)Generative AI gorges on data, while the human mind is data-independent.c)Generative AI is capable of distinguishing the possible from the impossible, while the human mind cannot.d)Generative AI seeks explanations, while the human mind relies on brute correlations.Correct answer is option 'A'. Can you explain this answer? for CLAT 2025 is part of CLAT preparation. The Question and answers have been prepared according to the CLAT exam syllabus. Information about Directions: Kindly read the passage carefully and answer the questions given beside.Behold GPT4— while ChatGPT continues to fascinate society, OpenAI has already unveiled its successor, even though no other generative AI could possibly capture the same level of public interest. Well, generative AIs are often termed “humanlike”. But would they ever reach the limits of human reasoning? 
It’s important to note that ChatGPT or its ilk is “a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question,” as summarised by Noam Chomsky, Ian Roberts, and Jeffrey Watumull in a fascinating recent piece in the New York Times. In contrast, the human mind “seeks not to infer brute correlations among data points but to create explanations,” these authors wrote. GPT4 passed a simulated bar exam with a score around the top 10 per cent of test takers, whereas GPT3.5’ s score was around the bottom 10 per cent, indicating an increase in capacity.However, when asked “Son of an actor, this American guitarist and rock singer released many songs and albums and toured with his band. His name is “Elvis” what?” GPT4 chose “Elvis Presley,” although he was not the son of an actor. Thus, GPT4 can still miss subtle details. Yet there is a more serious issue. A generative AI makes up information when it doesn’t know the exact answer —an issue widely known as “hallucination.” As OpenAI acknowledged, like earlier GPT models, GPT4 also “hallucinates” facts and makes reasoning errors, although it scores “40 per cent higher” than GPT3.5 on tests intended to measure hallucination. Yet “ChatGPT and similar programs,” according to Noam Chomsky and his co-authors, “are incapable of distinguishing the possible from the impossible.” A tree, an apple, gravity, and the ground are all physical concepts that an AI would not understand, although, in most cases, it would continue to explain how an apple would fall on the ground due to gravity with spectacular accuracy. But the AI’s lack of comprehension of the real world would remain. 
And when the exact answer is unknown, it would continue to assign possibilities to impossible without explanations.Q.What is the key difference highlighted in the passage between generative AI like ChatGPT and the human mind?a)Generative AI lacks comprehension of the real world, whereas the human mind understands physical concepts.b)Generative AI gorges on data, while the human mind is data-independent.c)Generative AI is capable of distinguishing the possible from the impossible, while the human mind cannot.d)Generative AI seeks explanations, while the human mind relies on brute correlations.Correct answer is option 'A'. Can you explain this answer? covers all topics & solutions for CLAT 2025 Exam. Find important definitions, questions, meanings, examples, exercises and tests below for Directions: Kindly read the passage carefully and answer the questions given beside.Behold GPT4— while ChatGPT continues to fascinate society, OpenAI has already unveiled its successor, even though no other generative AI could possibly capture the same level of public interest. Well, generative AIs are often termed “humanlike”. But would they ever reach the limits of human reasoning? It’s important to note that ChatGPT or its ilk is “a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question,” as summarised by Noam Chomsky, Ian Roberts, and Jeffrey Watumull in a fascinating recent piece in the New York Times. In contrast, the human mind “seeks not to infer brute correlations among data points but to create explanations,” these authors wrote. 
GPT4 passed a simulated bar exam with a score around the top 10 per cent of test takers, whereas GPT3.5’ s score was around the bottom 10 per cent, indicating an increase in capacity.However, when asked “Son of an actor, this American guitarist and rock singer released many songs and albums and toured with his band. His name is “Elvis” what?” GPT4 chose “Elvis Presley,” although he was not the son of an actor. Thus, GPT4 can still miss subtle details. Yet there is a more serious issue. A generative AI makes up information when it doesn’t know the exact answer —an issue widely known as “hallucination.” As OpenAI acknowledged, like earlier GPT models, GPT4 also “hallucinates” facts and makes reasoning errors, although it scores “40 per cent higher” than GPT3.5 on tests intended to measure hallucination. Yet “ChatGPT and similar programs,” according to Noam Chomsky and his co-authors, “are incapable of distinguishing the possible from the impossible.” A tree, an apple, gravity, and the ground are all physical concepts that an AI would not understand, although, in most cases, it would continue to explain how an apple would fall on the ground due to gravity with spectacular accuracy. But the AI’s lack of comprehension of the real world would remain. And when the exact answer is unknown, it would continue to assign possibilities to impossible without explanations.Q.What is the key difference highlighted in the passage between generative AI like ChatGPT and the human mind?a)Generative AI lacks comprehension of the real world, whereas the human mind understands physical concepts.b)Generative AI gorges on data, while the human mind is data-independent.c)Generative AI is capable of distinguishing the possible from the impossible, while the human mind cannot.d)Generative AI seeks explanations, while the human mind relies on brute correlations.Correct answer is option 'A'. Can you explain this answer?.
Solutions for Directions: Kindly read the passage carefully and answer the questions given beside.Behold GPT4— while ChatGPT continues to fascinate society, OpenAI has already unveiled its successor, even though no other generative AI could possibly capture the same level of public interest. Well, generative AIs are often termed “humanlike”. But would they ever reach the limits of human reasoning? It’s important to note that ChatGPT or its ilk is “a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question,” as summarised by Noam Chomsky, Ian Roberts, and Jeffrey Watumull in a fascinating recent piece in the New York Times. In contrast, the human mind “seeks not to infer brute correlations among data points but to create explanations,” these authors wrote. GPT4 passed a simulated bar exam with a score around the top 10 per cent of test takers, whereas GPT3.5’ s score was around the bottom 10 per cent, indicating an increase in capacity.However, when asked “Son of an actor, this American guitarist and rock singer released many songs and albums and toured with his band. His name is “Elvis” what?” GPT4 chose “Elvis Presley,” although he was not the son of an actor. Thus, GPT4 can still miss subtle details. Yet there is a more serious issue. A generative AI makes up information when it doesn’t know the exact answer —an issue widely known as “hallucination.” As OpenAI acknowledged, like earlier GPT models, GPT4 also “hallucinates” facts and makes reasoning errors, although it scores “40 per cent higher” than GPT3.5 on tests intended to measure hallucination. 
Yet “ChatGPT and similar programs,” according to Noam Chomsky and his co-authors, “are incapable of distinguishing the possible from the impossible.” A tree, an apple, gravity, and the ground are all physical concepts that an AI would not understand, although, in most cases, it would continue to explain how an apple would fall on the ground due to gravity with spectacular accuracy. But the AI’s lack of comprehension of the real world would remain. And when the exact answer is unknown, it would continue to assign possibilities to impossible without explanations.Q.What is the key difference highlighted in the passage between generative AI like ChatGPT and the human mind?a)Generative AI lacks comprehension of the real world, whereas the human mind understands physical concepts.b)Generative AI gorges on data, while the human mind is data-independent.c)Generative AI is capable of distinguishing the possible from the impossible, while the human mind cannot.d)Generative AI seeks explanations, while the human mind relies on brute correlations.Correct answer is option 'A'. Can you explain this answer? in English & in Hindi are available as part of our courses for CLAT. Download more important topics, notes, lectures and mock test series for CLAT Exam by signing up for free.