This lesson teaches you how to use AI as a daily feedback partner to improve your skills, habits, and work quality over time. You'll learn how to write prompts that give you honest, actionable feedback on your performance, and how to turn that feedback into a consistent improvement loop. Instead of waiting for a manager, teacher, or mentor to review your work once in a while, you'll be able to get personalized coaching from AI whenever you need it.
The key is learning to structure your prompts so the AI acts like a skilled coach - not just praising you or being vague, but pointing out specific weaknesses and suggesting concrete next steps. By the end of this lesson, you'll know how to create feedback prompts that help you grow every single day.
When people first try to get feedback from AI, they usually ask something like "Is this good?" or "What do you think of my work?" These prompts are too general. The AI doesn't know what standards to use, what your goals are, or what level you're at. So it gives bland, generic responses like "This is great! Maybe add more detail." That kind of feedback doesn't help you improve.
To get feedback that actually makes you better, your prompt needs to include:
1. Context: who you are, what the work is for, and what your goal is
2. Clear criteria or standards the AI should evaluate against
3. A request for specific weaknesses and gaps, not general praise
4. A request for one concrete, actionable change
Let's see how this works in practice.
Priya is preparing for customer service job interviews. She's practicing her answer to the common question: "Tell me about a time you dealt with a difficult customer." She wants to get better at answering this question, but she doesn't have anyone to practice with.
Priya writes her practice answer and asks the AI:
"Is this a good answer to the difficult customer question?"
The AI responds with something like: "Yes, this is a good answer! You showed empathy and problem-solving. Maybe add a bit more detail about the outcome."
This feedback is too vague. Priya doesn't know which parts were strong, which parts were weak, or exactly what to change. She can't tell if her answer would actually impress an interviewer.
Priya uses a structured feedback prompt:
"I'm practicing for customer service job interviews. I've written an answer to 'Tell me about a time you dealt with a difficult customer.' Please evaluate my answer using the STAR method (Situation, Task, Action, Result). Point out:
1. What's missing or unclear in each part of STAR
2. Whether my answer shows the key skills interviewers look for: empathy, problem-solving, and staying calm under pressure
3. One specific thing I should change to make this answer stronger
Be honest and direct. Here's my answer:
[Her practice answer]"
Now the AI gives her targeted feedback:
"Situation: Clear - customer received wrong item.
Task: Missing - you didn't explain what your responsibility was or what the customer expected from you.
Action: Good detail about listening and offering solutions, but you didn't mention how you stayed calm when the customer raised their voice.
Result: You said the customer was happy, but you didn't include any measurable outcome. Did they leave a review? Did they come back?
Key skills: Empathy comes through well. Problem-solving is there but could be stronger if you explained why you chose that particular solution.
One change: Add one sentence in the Task section explaining what your goal was - 'My task was to resolve the issue quickly while keeping the customer's trust.' This frames your actions better."
The structured prompt gave the AI clear criteria to evaluate against (STAR method and key skills). It asked for specific gaps, not just general praise. It requested one concrete change, making the feedback immediately actionable. Priya now knows exactly what to fix and can practice again with better structure.
Marcus runs a small bakery and often emails suppliers to negotiate better prices or resolve order issues. He feels like his emails don't get great responses - suppliers sometimes ignore him or say no without discussion. He wants to improve his email writing but doesn't know what he's doing wrong.
Marcus pastes one of his emails into ChatGPT and asks:
"Can you check this email for me?"
The AI corrects a few grammar mistakes and says the email is fine. Marcus still doesn't understand why his emails aren't getting better results.
Marcus creates a feedback prompt with clear context and evaluation criteria:
"I'm a small bakery owner writing to a flour supplier to negotiate a bulk discount. I want you to evaluate this email based on:
1. Clarity: Is my request clear and specific?
2. Tone: Do I sound professional but approachable? Am I too pushy or too passive?
3. Persuasion: Do I give the supplier a good reason to say yes?
4. Structure: Is the email easy to read and does it get to the point quickly?
Rate each area out of 5 and tell me the one change that would most improve my chances of getting a yes. Here's my email:
[His email text]"
The AI responds:
"Clarity: 3/5 - You ask for 'a better price' but don't specify what discount percentage you're hoping for or what quantity you're committing to.
Tone: 2/5 - The phrase 'I really need you to help me out here' sounds desperate rather than professional. It puts you in a weaker negotiating position.
Persuasion: 2/5 - You mention you've been a customer for six months, but you don't explain how a discount benefits the supplier (e.g., guaranteed larger orders, long-term partnership).
Structure: 4/5 - Email is short and readable. Good.
One key change: Replace 'I really need you to help me out here' with a specific proposal: 'If you can offer a 10% discount on orders of 50kg or more, I'd be happy to commit to monthly orders of that size for the next six months.' This is concrete, professional, and shows the supplier what they gain."
Marcus didn't just ask "Is this good?" He told the AI what success looks like for this email (clear request, professional tone, persuasive, well-structured). The AI evaluated each dimension separately and scored them, making weaknesses obvious. The single actionable change gave Marcus a clear rewrite path. He can now apply this same feedback structure to future emails and track his improvement over time.
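Because the prompt asks for a score out of 5 on each criterion, improvement becomes measurable from one attempt to the next. A minimal Python sketch of how those scores could be logged and reviewed over time (the numbers and field names below are hypothetical, not taken from the example):

```python
from statistics import mean

# Hypothetical log: AI scores (out of 5) for each email attempt, oldest first.
score_log = [
    {"clarity": 3, "tone": 2, "persuasion": 2, "structure": 4},
    {"clarity": 4, "tone": 3, "persuasion": 3, "structure": 4},
    {"clarity": 5, "tone": 4, "persuasion": 4, "structure": 5},
]

def progress(log, criterion):
    """Return the scores for one criterion across all attempts."""
    return [entry[criterion] for entry in log]

for criterion in score_log[0]:
    scores = progress(score_log, criterion)
    print(f"{criterion}: {scores} (avg {mean(scores):.1f})")
```

Keeping the criteria identical between attempts is what makes the comparison meaningful: a rising "tone" score shows real progress rather than a change in how the work was judged.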
Lena is writing an essay for her sociology class on the impact of social media on mental health. She's written a paragraph arguing that social media increases anxiety, but her professor's past feedback said her arguments "lack depth." She wants to strengthen her reasoning before submitting.
Lena pastes her paragraph and asks:
"Is this argument strong enough?"
The AI says: "Your argument is reasonable. You might want to add more evidence."
Lena doesn't know what kind of evidence, how much, or whether the problem is evidence or something else entirely.
Lena uses a detailed feedback prompt based on academic argument standards:
"I'm a second-year sociology student writing an essay on social media and mental health. Please evaluate the strength of the argument in this paragraph using these criteria:
1. Claim clarity: Is my main point clear and specific?
2. Evidence quality: Do I use credible sources? Is the evidence directly relevant?
3. Reasoning: Do I explain how the evidence supports my claim, or do I just state facts?
4. Counterarguments: Do I acknowledge any opposing views or limitations?
Identify the weakest part of my argument and suggest one concrete way to strengthen it. Here's my paragraph:
[Her paragraph]"
The AI provides targeted feedback:
"Claim clarity: Good - you clearly state that social media increases anxiety in teenagers.
Evidence quality: Moderate - you cite one study, which is good, but you don't mention the sample size, methodology, or whether other studies support this finding.
Reasoning: Weak - you present the study's finding but don't explain the mechanism. Why does social media increase anxiety? What's the psychological or social process?
Counterarguments: Missing - you don't address the fact that some research shows social media can reduce anxiety by providing social support.
Weakest part: Your reasoning. You're stating facts without explaining causation.
One concrete fix: After citing the study, add 2-3 sentences explaining how constant comparison to others' curated lives triggers feelings of inadequacy, which research links to anxiety symptoms. This shows you understand the 'why,' not just the 'what.'"
Lena gave the AI specific academic standards to evaluate against. Instead of vague praise or general suggestions, she got a breakdown of exactly where her argument was weak (reasoning, not evidence). The concrete fix told her what to add and why. She can now use this same four-part framework to self-check every paragraph before submitting future essays.
The examples above show one-time feedback, but the real power comes from making this a daily practice. Here's how to build a feedback loop that leads to continuous improvement:
1. Pick one skill or recurring task you want to improve (interview answers, emails, essays).
2. Each day, run your latest attempt through a structured feedback prompt with fixed criteria.
3. Apply the single change the AI recommends, then redo the work.
4. Keep the criteria the same over time so you can compare feedback across attempts and see your progress.
Every good feedback prompt includes these elements:
1. Context: who you are, what the work is for, and what your goal is
2. Evaluation criteria: the specific standards the AI should judge against (the STAR method, email tone, academic argument structure)
3. A request for honesty: tell the AI to be direct and point out gaps, not just praise
4. One actionable change: ask for the single fix that would most improve the work
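These elements can even be captured in a small reusable template so you don't rebuild the structure from scratch each day. A minimal Python sketch (the function name, wording, and sample criteria are illustrative assumptions, not a required format):

```python
def build_feedback_prompt(context, goal, criteria, work):
    """Assemble a structured feedback prompt from the four elements above."""
    criteria_lines = "\n".join(
        f"{i}. {c}" for i, c in enumerate(criteria, start=1)
    )
    return (
        f"{context} My goal: {goal}.\n"
        f"Evaluate my work against these criteria:\n"
        f"{criteria_lines}\n"
        "Be honest and direct. Point out specific gaps, not general praise, "
        "and suggest the one concrete change that would most improve it.\n"
        f"Here is my work:\n{work}"
    )

# Example: rebuilding Marcus's prompt from the reusable pieces.
prompt = build_feedback_prompt(
    context="I'm a small bakery owner writing to a flour supplier.",
    goal="negotiate a bulk discount",
    criteria=[
        "Clarity: is my request clear and specific?",
        "Tone: professional but approachable, not pushy or passive?",
        "Persuasion: do I give the supplier a good reason to say yes?",
    ],
    work="[paste email here]",
)
print(prompt)
```

Once the template exists, a daily feedback session is just a matter of swapping in today's work and criteria, which makes the habit much easier to keep.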
You've been tracking your spending for a month and you've written a short reflection on where your money goes and where you could save. You want AI feedback on whether your savings plan is realistic and well-thought-out. Write a feedback prompt that asks the AI to evaluate your plan based on specific criteria (like whether you've identified your biggest expense categories, whether your savings targets are achievable, and whether you've planned for unexpected costs). Include what context the AI needs to know about your situation.
You've created a project status report that you send to your manager every Friday. You want to make sure it's clear, concise, and highlights the right information. Your manager has said in the past that she's very busy and needs reports that let her see key updates at a glance. Write a feedback prompt that will help you improve this weekly report. Think about what criteria matter most for a busy manager reading a status update.
You're preparing a 5-minute presentation for your biology class on vaccine development. You've written out what you plan to say in each section. You want feedback on whether your explanations are clear for classmates who aren't biology experts, whether your structure makes sense, and whether you're staying within the time limit. Create a feedback prompt that gives the AI enough context and clear evaluation standards to give you useful, specific feedback.