Directions: Kindly read the passage carefully and answer the question that follows.
Behold GPT-4: while ChatGPT continues to fascinate society, OpenAI has already unveiled its successor, even though no other generative AI could possibly capture the same level of public interest. Generative AIs are often termed "humanlike". But would they ever reach the limits of human reasoning? It is important to note that ChatGPT and its ilk are "a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question," as summarised by Noam Chomsky, Ian Roberts, and Jeffrey Watumull in a fascinating recent piece in the New York Times. In contrast, the human mind "seeks not to infer brute correlations among data points but to create explanations," these authors wrote. GPT-4 passed a simulated bar exam with a score around the top 10 per cent of test takers, whereas GPT-3.5's score was around the bottom 10 per cent, indicating an increase in capability.
However, when asked, "Son of an actor, this American guitarist and rock singer released many songs and albums and toured with his band. His name is 'Elvis' what?", GPT-4 chose "Elvis Presley", although Presley was not the son of an actor. Thus, GPT-4 can still miss subtle details. Yet there is a more serious issue. A generative AI makes up information when it does not know the exact answer, a problem widely known as "hallucination". As OpenAI acknowledged, like earlier GPT models, GPT-4 also "hallucinates" facts and makes reasoning errors, although it scores "40 per cent higher" than GPT-3.5 on tests intended to measure hallucination. Yet "ChatGPT and similar programs," according to Noam Chomsky and his co-authors, "are incapable of distinguishing the possible from the impossible." A tree, an apple, gravity, and the ground are all physical concepts that an AI would not understand; in most cases it would still explain, with spectacular accuracy, how an apple falls to the ground due to gravity. But the AI's lack of comprehension of the real world would remain, and when the exact answer is unknown, it would continue to assign possibilities to the impossible, without explanation.
Q. What assumption underlies the statement that ChatGPT and similar programs are incapable of distinguishing the possible from the impossible?
a) Generative AI programs can offer an impartial and unbiased viewpoint on intricate matters that may pose challenges for humans to assess, rendering them valuable tools in decision-making processes.
b) Although ChatGPT and similar programs lack an intuitive comprehension of the world, they are designed to acquire knowledge from extensive data and generate answers based on probabilities.
c) These programs lack a profound grasp of context, fundamental principles, and real-world constraints that impact the accuracy and credibility of their responses.
d) ChatGPT and similar programs have demonstrated significant enhancements in their capacity to comprehend context and provide responses that are both pertinent and precise.
Correct answer is option 'C'. Can you explain this answer?
Verified Answer
The claim that ChatGPT and similar programs are incapable of distinguishing the possible from the impossible rests on the assumption that these programs lack a deep comprehension of context, underlying concepts, and the real-world constraints that influence the accuracy and reliability of their responses. They rely primarily on statistical patterns and probabilities to generate responses, without fully grasping the intricacies of language or the complexities of the real world. This assumption aligns with the passage's description of these programs as "a lumbering statistical engine for pattern matching".
Among the options, only option C captures this assumption: it states that these programs lack a deep grasp of context and underlying principles, which is precisely the factor said to determine the accuracy and credibility of their responses. The other options concern different aspects of generative AI: its supposed impartiality in decision-making (option A), its design for learning from extensive data (option B), and its improvements in comprehending context and delivering precise responses (option D). Option B may look close, since it too mentions the lack of an intuitive comprehension of the world, but it frames this as a design characteristic rather than as the assumption underlying the claim.
Therefore, option C is the correct answer.