I've been working on improving my prompting skills, and this is one of the most important lessons I've learnt so far:
The way you talk to AI can steer it in a direction that doesn't benefit the quality of your answers. Maybe more than you think (certainly more than I realised).
In this article, I’ll explain how you can unconsciously introduce bias into your prompts, why this is problematic (because it affects the quality of your answers), and, most importantly: what you can do about it, so you can get better results from AI.
Bias in AI
Some biases are already present in AI models because of the training data used: demographic bias (e.g., a model that associates ‘kitchens’ more often with women than with men), cultural bias (the model associates ‘holidays’ more readily with Christmas than with Diwali or Ramadan), or language bias (a model performs better in certain languages, usually English). But beyond that, you also influence the skew of the answers you get.
Yes, through your prompt. A single word in your question can be enough to set the model down a particular path.
What is (prompt) bias?
Bias is a distortion in the way a model processes or prioritises information, creating systematic skewing.
In the context of AI prompting, it means giving the model subtle signals that ‘colour’ the answer, often without you being aware of it.
Why is it a problem?
AI systems are increasingly used for decision-making, analysis, and creation. In that context, quality matters. Bias can reduce that quality.
The risks of unconscious bias:
- You get a less nuanced or even incorrect answer
- You (unconsciously) repeat your own prejudices
- You miss relevant perspectives or nuance
- In professional contexts (journalism, research, policy), it can damage your credibility
When are you at risk?
TL;DR: always, but it becomes especially visible when you use few-shot prompting.
Long version: The risk of bias exists whenever you give an AI model a prompt, simply because every word, every sequence, and every example carries something of your intention, background, or expectation.
With few-shot prompting (where you provide examples for the model to mirror), the risk becomes more visible: the order of those examples, the distribution of labels, and even small formatting differences can influence the answer.
(I’ve based all bias risks in this article on the top 5 most common prompting methods, currently: instruction, zero-shot, few-shot, chain of thought, and role-based prompting.)
Common biases in few-shot prompting
Which biases commonly occur in few-shot prompting, and what do they involve?
Majority label bias
- The issue: The model more often chooses the most common label in your examples.
- Example: If 3 of your 4 examples have “yes” as an answer, the model will more readily predict “yes”.
- Solution: Balance labels (see the sketch below).
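To make this concrete, below is a minimal sketch of how you could build a few-shot prompt with balanced labels. The reviews and the "yes"/"no" labels are made-up placeholders; swap in your own examples.

```python
# A minimal sketch: a few-shot prompt with an equal number of "yes" and "no"
# examples, so there is no majority label for the model to latch onto.
examples = [
    ("The delivery was fast and the packaging was neat.", "yes"),
    ("The manual was missing and support never replied.", "no"),
    ("Setup took two minutes and everything just worked.", "yes"),
    ("The battery died after a single day of use.", "no"),
]

def build_prompt(examples, new_text):
    lines = ["Decide whether each review is positive. Answer only 'yes' or 'no'.", ""]
    for text, label in examples:
        lines += [f"Review: {text}", f"Positive: {label}", ""]
    lines += [f"Review: {new_text}", "Positive:"]
    return "\n".join(lines)

print(build_prompt(examples, "The screen is bright but the speakers are tinny."))
```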
Selection bias
- The issue: Examples or context aren’t representative.
- Example: All your examples are about tech startups, so the model sticks to that context.
- Solution: Vary/balance examples.
Anchoring bias
- The issue: First example or statement determines the output direction too much.
- Example: If the first example describes something as “cheap and unreliable”, the model may treat similar items as low quality, regardless of later examples.
- Solution: Start neutrally. Vary order. Explicitly ask for reassessment.
Recency bias
- The issue: The model attaches more value to the last example in a prompt.
- Example: The answer resembles the example mentioned last.
- Solution: Rotate examples or reformulate questions in new turns (see the sketch below).
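One simple way to limit both anchoring and recency effects is to vary the order of your examples between runs. Here's a minimal sketch; the example data is made up, and you would send each variant to the model yourself and compare the answers.

```python
import random

# A minimal sketch: shuffle the few-shot examples per run so neither the first
# nor the last example consistently anchors the answer. The data is made up.
examples = [
    ("Budget phone, cheap and unreliable.", "negative"),
    ("Solid mid-range laptop, no complaints.", "positive"),
    ("Premium headphones, worth every penny.", "positive"),
    ("Tablet froze twice in the first week.", "negative"),
]

def shuffled_prompt(examples, item, seed):
    rng = random.Random(seed)                      # seeded so each run is reproducible
    order = rng.sample(examples, k=len(examples))  # a new order per seed
    shots = "\n\n".join(f"Item: {t}\nSentiment: {l}" for t, l in order)
    return f"{shots}\n\nItem: {item}\nSentiment:"

# Generate the same question with three different example orderings.
for seed in range(3):
    print(shuffled_prompt(examples, "Smartwatch strap broke after a month.", seed))
    print("---")
```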
Formatting bias
- The issue: Formatting differences influence outcome: layout (e.g., bold) affects attention and choice.
- Example: A bold label is chosen more often than one without formatting.
- Solution: Keep formatting consistent.
Positional bias
- The issue: Answers at the beginning or end of a list are chosen more often.
- Example: In multiple-choice questions, the model more often chooses A or D.
- Solution: Switch the order of the options (see the sketch below).
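A small sketch of what "switching the order" can look like in practice: present the same question with the options in different positions, and keep a mapping so you can translate the model's letter back to the underlying option. The question and options here are made up.

```python
import random

# A minimal sketch: shuffle multiple-choice options so you can check whether
# the model's pick follows the content or just the position of the letter.
question = "Which factor most affects battery life?"
options = ["Screen brightness", "Background apps", "Network searching", "Storage usage"]

def render(question, options, seed):
    rng = random.Random(seed)
    shuffled = rng.sample(options, k=len(options))
    letters = "ABCD"
    lines = [question] + [f"{letters[i]}) {opt}" for i, opt in enumerate(shuffled)]
    lines.append("Answer with a single letter.")
    return "\n".join(lines), dict(zip(letters, shuffled))

for seed in range(2):
    prompt, mapping = render(question, options, seed)
    print(prompt)
    print("letter -> option:", mapping)  # use this to map the answer back
    print("---")
```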
Other biases in different prompting methods
Bias can also occur in situations other than few-shot prompting. Even with zero-shot prompting (no examples), one-shot prompting (one example), or in AI agents you are building, you can introduce bias.
Instruction bias
Instruction prompting is the most commonly used method at the moment (according to ChatGPT). If you explicitly give the model a style, tone, or role (“Write an argument against vaccination”), this can reinforce bias. The model then tries to fulfil the assignment, even if the content isn’t factual or balanced.
How to prevent: ensure balanced, nuanced instructions. Use neutral wording. Explicitly ask for multiple perspectives.
- Not so good: “As an experienced investor, write about why cryptocurrency is the future”.
- Better: “As an experienced investor, analyse the advantages and disadvantages of cryptocurrency”.
Confirmation bias
Even when you don’t provide examples, your phrasing can already steer in a certain direction.
How to prevent: avoid leading questions.
- Not so good: “Why is cycling without a helmet dangerous?” → A question of the form “Why is X dangerous?” invites a confirmatory answer, even if the premise isn’t factually sound.
- Better: “What are the risks and benefits of cycling without a helmet?”
- Even better: “Analyse the safety aspects of cycling with and without helmets, including counter-arguments”.
Framing bias
Similar to confirmation bias, but subtly different: with framing bias, you influence the AI through how you present the question or information. The phrasing or context steers the interpretation and the answer in a particular direction, often unconsciously.
How to prevent: Use neutral or balanced framing.
- Not so good: “How dangerous is cycling without a helmet?” → Here the emphasis is on danger, so the answer will likely mainly mention risks.
- Better: “What are people’s experiences of cycling without a helmet?”
- Even better: “What are people’s experiences of cycling without a helmet? Mention all positive and all negative experiences”.
Follow-up bias
Previous answers influence subsequent ones. With follow-up bias, the model adopts the tone, assumptions, or framing of your earlier input, especially in multi-turn conversations. The answer seems to want to please you or follows the logic of the previous turn, even if that turn was coloured or wrong.
Example scenario:
You: “That new marketing strategy seems risky to me”
AI: “You’re right, there are indeed risks…”
You: “What are other options?”
AI: [Will likely mainly suggest safe, conservative options]
How to prevent: Ask neutral questions, ask for a counter-voice, or put the model in a critical role (see the sketch below).
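As an illustration, here's a sketch of how you could set up such a conversation using the common role/content chat-message format. The wording (including the assistant turn) is purely illustrative.

```python
# A minimal sketch: a multi-turn conversation that explicitly asks for a
# counter-voice instead of letting the model mirror the earlier framing.
messages = [
    {"role": "system",
     "content": "You are a critical sparring partner. Challenge assumptions "
                "instead of agreeing by default."},
    {"role": "user",
     "content": "That new marketing strategy seems risky to me."},
    {"role": "assistant",
     "content": "There are risks, but there are also potential upsides..."},
    {"role": "user",
     "content": "What are other options? List bold as well as conservative ones, "
                "and push back on my risk framing where the evidence supports it."},
]

for m in messages:
    print(f"{m['role']}: {m['content']}\n")
```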
Compounding bias
Particularly with chain-of-thought (CoT) prompting (asking the model to reason step by step before giving an answer), prompt chaining (feeding the output of one prompt into the next), or more complex workflows such as agents, bias can accumulate over multiple steps in a prompt or interaction chain: compounding bias.
How to prevent: Evaluate intermediate outputs, break the chain into smaller checkable steps, and use red teaming (see the sketch below).
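To show what evaluating intermediately might look like, here's a sketch of a small chain with a separate bias-check step between two stages. The call_llm function is a hypothetical placeholder for whatever model client you use, and the checking criteria are only illustrative.

```python
# A minimal sketch: a prompt chain with an explicit intermediate bias check,
# so skew in one step is caught before it feeds into the next.

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: replace with a real call to your model of choice.
    return "<model answer here>"

def summarise(article: str) -> str:
    return call_llm(f"Summarise the following article factually:\n\n{article}")

def check_neutrality(summary: str) -> str:
    # Separate evaluation step: ask for loaded wording or one-sided framing
    # before the summary is passed on to the next step in the chain.
    return call_llm(
        "Review this summary for loaded wording, missing perspectives, or "
        f"one-sided framing. List concrete issues:\n\n{summary}"
    )

def recommend(summary: str, review: str) -> str:
    return call_llm(
        "Using the summary and the bias review below, write a balanced "
        f"recommendation.\n\nSummary:\n{summary}\n\nBias review:\n{review}"
    )

summary = summarise("<article text here>")
review = check_neutrality(summary)
print(recommend(summary, review))
```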
Checklist: How to reduce bias in your prompts
Bias isn’t always avoidable, but you can definitely learn to recognise and limit it. These are some practical tips to reduce bias in your prompts.

1. Check your phrasing
Avoid leading the witness: steer clear of questions that already lean in a direction. “Why is X better?” → “What are the advantages and disadvantages of X?”
2. Mind your examples
Using few-shot prompting? Ensure labels are balanced. Also vary the order occasionally.
3. Use more neutral prompts
For example: give the model an empty field (“N/A”) as a possible outcome. This calibrates its expectations.
4. Ask for reasoning
Have the model explain how it reached its answer. This is called ‘chain-of-thought prompting’ and helps make blind assumptions visible.
5. Experiment!
Ask the same question in multiple ways and compare answers. Only then do you see how much influence your phrasing has.
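For that last step, here's a small sketch of what comparing phrasings can look like: ask the same underlying question in several ways and collect the answers side by side. Again, call_llm is a hypothetical placeholder for your own model client, and the phrasings are only examples.

```python
# A minimal sketch: ask the same underlying question in several phrasings and
# compare the answers to see how much the wording steers the result.

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: replace with a real call to your model of choice.
    return "<model answer here>"

phrasings = [
    "Why is remote work better for productivity?",  # deliberately leading
    "What are the advantages and disadvantages of remote work for productivity?",
    "Analyse the effect of remote work on productivity, including counter-arguments.",
]

for prompt in phrasings:
    print(f"PROMPT: {prompt}\nANSWER: {call_llm(prompt)}\n" + "-" * 40)
```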
Conclusion
In short, bias is always a risk when prompting, through how you ask, what you ask, and when you ask it during a series of interactions. I believe this should be a constant point of attention whenever you use LLMs.
I’m going to keep experimenting, varying my phrasing, and staying critical of my prompts to get the most out of AI without falling into the traps of bias.
I’m excited to keep improving my prompting skills. Got any tips or advice on prompting you’d like to share? Please do! 🙂
Hi, I’m Daphne from DAPPER works. Liked this article? Feel free to share it!