
CBSE Textbook: Evaluating Models

Introduction 
So far we have learnt about the first four stages of the AI project cycle, viz. Problem Scoping, Data Acquisition, Data Exploration and Modelling. While in modelling we can build different types of models, how do we check if one is better than another? That is where Evaluation comes into play. In the Evaluation stage, we explore different methods of evaluating an AI model. Model Evaluation is an integral part of the model development process. It helps us find the model that best represents our data and judge how well the chosen model will work in the future. 
 
3.1: Importance of Model Evaluation 
What is evaluation? 
 
• Model evaluation is the process of using different evaluation metrics to understand a machine learning model's performance. 
• An AI model gets better with constructive feedback. 
• You build a model, get feedback from the metrics, make improvements, and continue until you achieve a desirable accuracy. 

• It's like the report card of your school. 
• There are many parameters, like grades, percentage, percentiles and ranks. 
• Your academic performance gets evaluated, and you know where to work more to get better. 
                  
 
Need of model evaluation 
In essence, model evaluation is like giving your AI model a report card. It helps you understand its 
strengths, weaknesses, and suitability for the task at hand. This feedback loop is essential for 
building trustworthy and reliable AI systems. 
After understanding the need for Model Evaluation, let's see how to begin with the process. 
There can be different evaluation techniques, depending on the type and purpose of the model. 
 
 
3.2: Splitting the training set data for Evaluation 
Train-test split 
• The train-test split is a technique for evaluating the performance of a machine learning algorithm. 
• It can be used for any supervised learning algorithm. 
• The procedure involves taking a dataset and dividing it into two subsets: the training dataset and the testing dataset. 
• The train-test procedure is appropriate when there is a sufficiently large dataset available. 
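The splitting procedure above can be sketched in plain Python. This is a minimal illustration, not the textbook's prescribed method: the helper name, the 80/20 ratio, and the toy data are assumptions for demonstration, mirroring what libraries such as scikit-learn provide ready-made.

```python
import random

def train_test_split(X, y, test_size=0.2, seed=42):
    """Shuffle the sample indices, then carve off a test_size fraction for testing."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)          # fixed seed so the split is repeatable
    n_test = int(len(X) * test_size)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return ([X[i] for i in train_idx], [X[i] for i in test_idx],
            [y[i] for i in train_idx], [y[i] for i in test_idx])

# Toy dataset: 10 samples, one feature each, with known labels
X = [[i] for i in range(10)]
y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print(len(X_train), len(X_test))  # 8 2 -- train on 80%, hold out 20% for testing
```

Shuffling before splitting matters: if the data were ordered (all 0-labels first, all 1-labels last), taking the last rows as the test set would give the model a test set it never saw examples of.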
 
 
 
Need of Train-test split 
• The train dataset is used to make the model learn. 
• The input elements of the test dataset are provided to the trained model. The model makes predictions, and the predicted values are compared to the expected values. 
• The objective is to estimate the performance of the machine learning model on new data: data not used to train the model. 
 
This is how we expect to use the model in practice. Namely, to fit it on available data with known 
inputs and outputs, then make predictions on new examples in the future where we do not 
have the expected output or target values. 
 
Remember that it's not recommended to use the data we used to build the model to evaluate 
it. This is because our model can simply remember the whole training set, and will therefore 
always predict the correct label for any point in the training set. This is known as overfitting. 
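The warning above can be demonstrated with a deliberately bad "model" that only memorizes what it has seen. A toy sketch, with invented data and labels:

```python
# A "model" that memorizes the training set: it looks up each point it has seen before.
train_data = {(1,): "cat", (2,): "cat", (3,): "dog", (4,): "dog"}

def memorizing_model(x):
    # Perfect recall on training points; a blind guess ("cat") otherwise
    return train_data.get(x, "cat")

# Evaluating on the training data itself: every answer comes back "correct"
train_acc = sum(memorizing_model(x) == label
                for x, label in train_data.items()) / len(train_data)
print(train_acc)  # 1.0 -- looks perfect, but tells us nothing about new data

# Evaluating on unseen data reveals the real performance
test_data = {(5,): "dog", (6,): "dog"}
test_acc = sum(memorizing_model(x) == label
               for x, label in test_data.items()) / len(test_data)
print(test_acc)   # 0.0 -- the memorizer guesses "cat" for everything it hasn't seen
```

The memorizer scores 100% on its own training set and 0% on new points, which is exactly why evaluation must use held-out data.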
 
 
3.3: Accuracy and Error 
• Bob and Billy went to a concert. 
• Bob brought Rs 300 and Billy brought Rs 550 as the entry fee. 
• The entry fee per person was Rs 500. 
• Can you tell: 
  • Who is more accurate, Bob or Billy? 
  • How much is the error for both Bob and Billy in estimating the concert entry fee? 
 
 
You will learn more about these concepts, including the train-test split and cross-validation, in 
higher classes. 
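Using error = |actual − estimate| and accuracy = 1 − error/actual (the formulas this chapter works with), the concert question can be answered in a few lines; a small sketch:

```python
actual_fee = 500  # Rs per person

for name, estimate in [("Bob", 300), ("Billy", 550)]:
    error = abs(actual_fee - estimate)   # magnitude of the mistake, sign ignored
    accuracy = 1 - error / actual_fee    # 1 - error rate
    print(f"{name}: error = Rs {error}, accuracy = {accuracy:.0%}")

# Bob: error = Rs 200, accuracy = 60%
# Billy: error = Rs 50, accuracy = 90%  -> Billy is more accurate
```

Note that Billy overestimated and Bob underestimated, but because error takes the absolute value, only the size of the mistake matters, not its direction.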
Accuracy 
• Accuracy is an evaluation metric that measures the proportion of predictions a model gets right. 
• Accuracy and model performance are directly proportional: the better the performance of the model, the more accurate its predictions. 
Error 
• Error can be described as an action or prediction that is inaccurate or wrong. 
• In Machine Learning, error is used to see how accurately our model can predict, both on the data it learns from and on new, unseen data. 
• Based on the error, we choose the machine learning model which performs best for a particular dataset. 
 
 
Error refers to the difference between a model's prediction and the actual outcome. It quantifies how often 
the model makes mistakes. 
 
 
Imagine you're training a model to predict whether a person has a certain disease (a classification task). 
• Error: If the model predicts the person doesn't have the disease but they actually do, that's an error. The error quantifies how far off the prediction was from reality. 
• Accuracy: If the model correctly predicts disease or no disease for every case in a set, it has 100% accuracy on that set. 
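These two definitions can be computed directly for a classifier; a minimal sketch, with the ground-truth labels and predictions invented for illustration:

```python
# Invented ground truth and model predictions for 8 patients (1 = disease, 0 = no disease)
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0]

correct = sum(a == p for a, p in zip(actual, predicted))
accuracy = correct / len(actual)   # fraction of predictions that are right
error_rate = 1 - accuracy          # fraction of predictions that are wrong

print(f"accuracy = {accuracy:.2f}, error rate = {error_rate:.2f}")
# accuracy = 0.75, error rate = 0.25
```

Here the model is wrong on two of the eight patients (a missed disease at position 3 and a false alarm at position 6), so accuracy and error rate always add up to 1.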
 
 
Key Points: 
• The goal is to minimize error and maximize accuracy. 
• Real-world data can be messy, and even the best models make mistakes. 
• Sometimes, focusing solely on accuracy might not be ideal. For instance, in medical 
diagnosis, a model with slightly lower accuracy but a strong focus on avoiding incorrectly 
identifying a healthy person as sick might be preferable. 
• Choosing the right error or accuracy metric depends on the specific task and its 
requirements. 
Understanding both error and accuracy is crucial for effectively evaluating and improving AI 
models. 
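The caution above about relying on accuracy alone can be demonstrated with imbalanced data; a toy sketch (the patient counts are invented): a model that always predicts "healthy" scores high accuracy yet is useless for diagnosis.

```python
# 100 patients: 95 healthy (0), 5 sick (1) -- invented numbers for illustration
actual = [0] * 95 + [1] * 5
predicted = [0] * 100           # a lazy model that always says "healthy"

accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)
sick_found = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))

print(accuracy)    # 0.95 -- looks impressive
print(sick_found)  # 0    -- but every sick patient was missed
```

This is why the choice of metric must match the task: for medical diagnosis, how many sick patients the model catches (and how many healthy ones it wrongly flags) can matter far more than overall accuracy.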
Activity 1: Find the accuracy of the AI model 
 
 
Calculate the accuracy of the House Price prediction AI model. 
• Read the instructions and fill in the blank cells in the table. 
• The formulas for finding error and accuracy are shown in the table headers. 
• The accuracy of the AI model is the mean accuracy of all five samples. 
• Percentage accuracy is obtained by multiplying the accuracy by 100. 
 
| Predicted House Price (USD) | Actual House Price (USD) | Error: Abs(Actual − Predicted) | Error Rate (Error/Actual) | Accuracy (1 − Error Rate) | Accuracy% (Accuracy × 100) |
|---|---|---|---|---|---|
| 391k | 402k | Abs(402k − 391k) = 11k | 11k/402k = 0.027 | 1 − 0.027 = 0.973 | 0.973 × 100% = 97.3% |
| 453k | 488k | | | | |
| 125k | 97k | | | | |
| 871k | 907k | | | | |
| 322k | 425k | | | | |
 
 
*Abs means the absolute value: only the magnitude of the difference, without any negative sign. 
Model Evaluation stands on the two pillars of accuracy and error. Let's understand some more 
metrics built on these two pillars. 
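For checking your answers after filling in the table, the activity's formulas can be scripted; a minimal sketch using the five (predicted, actual) pairs from the table, with prices in thousands of USD:

```python
# (predicted, actual) house prices in thousands of USD, from the activity table
samples = [(391, 402), (453, 488), (125, 97), (871, 907), (322, 425)]

accuracies = []
for predicted, actual in samples:
    error = abs(actual - predicted)   # Abs(Actual - Predicted)
    error_rate = error / actual       # Error / Actual
    accuracy = 1 - error_rate         # 1 - Error rate
    accuracies.append(accuracy)
    print(f"error = {error}k, accuracy = {accuracy * 100:.1f}%")

# Model accuracy is the mean accuracy over all five samples
model_accuracy = sum(accuracies) / len(accuracies)
print(f"model accuracy = {model_accuracy * 100:.1f}%")  # model accuracy = 86.6%
```

Note that the third sample overestimates (125k predicted vs 97k actual) while the others underestimate; the absolute value in the error formula treats both the same way.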
 
 
 
 
Purpose: To understand how to calculate the error and the accuracy. 
Say: “The youth will understand the concept of accuracy and error and practice it 
mathematically.” 

FAQs on CBSE Textbook: Evaluating Models

1. What are the key components to evaluate when assessing a model in a scientific context?
Ans. When evaluating a model, key components include accuracy, reliability, validity, and applicability. Accuracy refers to how close the model's predictions are to actual outcomes. Reliability assesses the consistency of the model's results over repeated trials. Validity checks if the model measures what it is intended to measure. Lastly, applicability examines how well the model can be used in real-world scenarios.
2. How can one determine if a model is valid?
Ans. To determine if a model is valid, one can conduct tests that compare the model's predictions against actual data. If the model consistently predicts outcomes within an acceptable range of error, it demonstrates validity. Additionally, peer reviews and expert evaluations can provide insights into the model's theoretical underpinnings and assumptions, further confirming its validity.
3. What is the significance of model reliability in experiments?
Ans. Model reliability is significant because it ensures that the model produces consistent results under the same conditions. A reliable model allows researchers to make dependable predictions and conclusions based on its outcomes. If a model is unreliable, it may lead to incorrect interpretations and decisions, undermining the credibility of the research.
4. In what ways can one improve the accuracy of a model?
Ans. To improve the accuracy of a model, one can refine the underlying assumptions, use more precise data, and incorporate advanced statistical methods. Additionally, validating the model against new data and continuously updating it based on feedback can enhance its performance. Performing sensitivity analysis to identify which variables most affect the outcomes can also help streamline accuracy improvements.
5. How does the context of the model affect its evaluation?
Ans. The context of the model significantly affects its evaluation as different fields may have varying standards for accuracy and reliability. For instance, a model used in climate science might prioritize long-term predictions, while a model in pharmacology may focus on short-term outcomes. Evaluators must consider the specific goals, constraints, and real-world applications of the model to assess its effectiveness appropriately.