MCQ Practice Test & Solutions: Test: Evaluating Models (20 Questions)

Prepare effectively for Class 10 Artificial Intelligence with this dedicated MCQ Practice Test (with solutions) on the important topic "Evaluating Models". These 20 questions have been designed by experts in line with the latest Class 10 curriculum (2026) to help you master the concept.

Test Highlights:

  • Format: Multiple Choice Questions (MCQ)
  • Duration: 30 minutes
  • Number of Questions: 20


Test: Evaluating Models - Question 1

What does “model evaluation” primarily aim to do?

Detailed Solution: Question 1

Evaluation is the process of using metrics (like accuracy, precision, recall, F1) to understand how well a model performs, its strengths/weaknesses, and where it needs improvement.

Test: Evaluating Models - Question 2

Which statement best describes the evaluation feedback loop?

Detailed Solution: Question 2

The chapter emphasizes iterative improvement: build a model, evaluate with metrics, refine, and continue until desired performance is reached.

Test: Evaluating Models - Question 3

Why do we perform a train–test split?

Detailed Solution: Question 3

Train–test split separates data for learning (train) and for objective evaluation (test), estimating performance on new, unseen data.

Test: Evaluating Models - Question 4

In the train–test setting, what is the main objective of using the test set?

Detailed Solution: Question 4

The test set is used after training to gauge how well the model generalizes to data it never saw during training.
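The train–test split described in Questions 3–4 can be sketched in a few lines of Python. The tiny dataset and the 80/20 split ratio below are illustrative assumptions, not part of the chapter:

```python
import random

# Hypothetical dataset: 10 (feature, label) pairs for illustration
data = [(i, i % 2) for i in range(10)]

random.seed(42)       # fixed seed so the split is reproducible
random.shuffle(data)  # shuffle before splitting to avoid ordering bias

split = int(0.8 * len(data))         # 80% for training, 20% for testing
train, test = data[:split], data[split:]

print(len(train), len(test))         # 8 2
```

The model would learn only from `train`; `test` is held back and used once, after training, to estimate performance on unseen data.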

Test: Evaluating Models - Question 5

Which pair correctly matches the confusion matrix terms?

Detailed Solution: Question 5

TP = correctly predicted positive; FN = actual positive predicted as negative (missed positive).

Test: Evaluating Models - Question 6

Given Actual House Price = 402k and Predicted = 391k, using Error = Actual − Predicted, Error Rate = Error/Actual, Accuracy = 1 − Error Rate, what is Accuracy (%)?

Detailed Solution: Question 6

Error = 402 − 391 = 11; Error Rate = 11/402 ≈ 0.0274; Accuracy = 1 − 0.0274 = 0.9726 ≈ 97.3%.
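The error-rate formula from Questions 6–7 is simple enough to verify directly in Python, using the numbers given in the questions:

```python
def price_accuracy(actual, predicted):
    """Accuracy = 1 - (Actual - Predicted) / Actual, as defined in the chapter."""
    error = actual - predicted
    error_rate = error / actual
    return 1 - error_rate

# Question 6: Actual = 402k, Predicted = 391k
print(round(price_accuracy(402, 391) * 100, 1))  # 97.3

# Question 7: Actual = 500, Predicted = 465
print(round(price_accuracy(500, 465) * 100, 1))  # 93.0
```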

Test: Evaluating Models - Question 7

If Actual = 500 and Predicted = 465, what is the model’s accuracy (%) using the same formula set?

Detailed Solution: Question 7

Error = 500 − 465 = 35; Error Rate = 35/500 = 0.07; Accuracy = 1 − 0.07 = 0.93 → 93%.

Test: Evaluating Models - Question 8

Which formula for classification accuracy is correct?

Detailed Solution: Question 8

Accuracy measures all correct predictions: (TP + TN) divided by all predictions.

Test: Evaluating Models - Question 9

Which formula defines Precision?

Detailed Solution: Question 9

Precision measures correctness among positive predictions: TP divided by all predicted positives (TP + FP).

Test: Evaluating Models - Question 10

Which formula defines Recall?

Detailed Solution: Question 10

Recall measures how many actual positives were captured: TP divided by actual positives (TP + FN).

Test: Evaluating Models - Question 11

What does the F1 Score represent?

Detailed Solution: Question 11

F1 = 2PR/(P + R), the harmonic mean balancing precision and recall, especially useful when both matter.
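The four formulas from Questions 8–11 can be collected into one small helper that takes the confusion-matrix counts directly:

```python
def metrics(tp, tn, fp, fn):
    """Compute Accuracy, Precision, Recall and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # all correct / all predictions
    precision = tp / (tp + fp)                  # correct among predicted positives
    recall = tp / (tp + fn)                     # captured among actual positives
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Example with the Question 13 counts: TP=4, FN=2, FP=2, TN=2
acc, p, r, f1 = metrics(tp=4, tn=2, fp=2, fn=2)
print(round(acc, 2))  # 0.6
```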

Test: Evaluating Models - Question 12

When is plain accuracy most appropriate?

Detailed Solution: Question 12

Accuracy works best with balanced datasets and when FP and FN have comparable costs; otherwise use precision/recall/F1.

Test: Evaluating Models - Question 13

Using the mini example (TP = 4, FN = 2, FP = 2, TN = 2; total = 10), what is the accuracy?

Detailed Solution: Question 13

Accuracy = (TP + TN)/Total = (4 + 2)/10 = 6/10 = 60%.

Test: Evaluating Models - Question 14

For TP = 50, TN = 40, FP = 10, FN = 0, what is Precision (%)?

Detailed Solution: Question 14

Precision = TP/(TP + FP) = 50/60 = 0.8333 → 83.33%.

Test: Evaluating Models - Question 15

For the same case (TP = 50, TN = 40, FP = 10, FN = 0), what is Recall (%)?

Detailed Solution: Question 15

Recall = TP/(TP + FN) = 50/50 = 1.0 → 100%.

Test: Evaluating Models - Question 16

With Precision = 0.8333 and Recall = 1.0, what is the F1 Score (%)?

Detailed Solution: Question 16

F1 = 2PR/(P+R) = 2×0.8333×1 / (1.8333) ≈ 0.9091 → 90.91%.

Test: Evaluating Models - Question 17

Consider TP = 100, TN = 47, FP = 62, FN = 290. What is Accuracy (%) (rounded to two decimals)?

Detailed Solution: Question 17

Total = 100 + 47 + 62 + 290 = 499; Accuracy = (TP + TN)/Total = 147/499 ≈ 0.2946 → 29.46%.
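The worked calculations in Questions 14–17 can be checked with plain arithmetic in Python, using the counts stated in the questions:

```python
# Questions 14-16: TP=50, TN=40, FP=10, FN=0
precision = 50 / (50 + 10)                          # TP / (TP + FP)
recall = 50 / (50 + 0)                              # TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
print(round(precision * 100, 2))  # 83.33
print(round(recall * 100, 2))     # 100.0
print(round(f1 * 100, 2))         # 90.91

# Question 17: TP=100, TN=47, FP=62, FN=290
accuracy = (100 + 47) / (100 + 47 + 62 + 290)       # (TP + TN) / Total
print(round(accuracy * 100, 2))   # 29.46
```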

Test: Evaluating Models - Question 18

In an imbalanced dataset where one class dominates, why can accuracy be misleading?

Detailed Solution: Question 18

If the model predicts only the majority class, accuracy can look high while performance on the minority class is poor—hence consider precision/recall/F1.
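The pitfall in Question 18 is easy to demonstrate. In the hypothetical example below (95 negatives, 5 positives, counts chosen for illustration), a "model" that always predicts the majority class scores 95% accuracy yet misses every positive:

```python
# Hypothetical imbalanced labels: 95 negatives (0), 5 positives (1)
actuals = [0] * 95 + [1] * 5

# A degenerate "model" that always predicts the majority class
predictions = [0] * len(actuals)

correct = sum(a == p for a, p in zip(actuals, predictions))
accuracy = correct / len(actuals)
print(accuracy)  # 0.95 -- looks impressive

# Recall on the positive class tells the real story: every positive is missed
tp = sum(a == 1 and p == 1 for a, p in zip(actuals, predictions))
fn = sum(a == 1 and p == 0 for a, p in zip(actuals, predictions))
recall = tp / (tp + fn)
print(recall)    # 0.0
```

This is why precision, recall, and F1 are preferred over plain accuracy when classes are imbalanced.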

Test: Evaluating Models - Question 19

Which ethical concern focuses on people understanding and interpreting model decisions?

Detailed Solution: Question 19

Transparency ensures decision processes are explainable (e.g., explaining why a loan was denied), fostering trust.

Test: Evaluating Models - Question 20

Which option best illustrates “bias” as discussed?

Detailed Solution: Question 20

Bias is unfairness/discrimination in outcomes—e.g., a system favoring one group (like only showing products to male users), leading to unfair results and poorer business outcomes.
