🧠 Chain of Thought Prompting
Chain of Thought (CoT) prompting is a technique where you ask the AI to show its reasoning step by step before giving a final answer. By adding a simple phrase like "Let's think step by step," you can dramatically improve the accuracy of responses, especially for math, logic, and complex reasoning tasks.
Think of it like showing your work on a math test. When you write out each step, you're less likely to make mistakes. The same principle applies to AI.
Why This Matters
Chain of Thought prompting is one of the most important breakthroughs in prompt engineering:
- Accuracy jumps dramatically: studies from Google Research showed that CoT prompting improved math problem accuracy from ~18% to ~57% on certain benchmarks
- Complex reasoning becomes possible: tasks that seem impossible for AI become solvable when you ask it to reason through them
- Errors become visible: when the AI shows its work, you can spot exactly where it goes wrong
- It mimics human problem-solving: just like humans, AI performs better when it "thinks out loud"
Without CoT, the AI tries to jump straight to the answer. With CoT, it builds up to the answer through logical steps, reducing errors significantly.
How Chain of Thought Works
When an LLM generates text, each token depends on previous tokens. By forcing the model to generate intermediate reasoning steps, you give it more "computation" between the question and the answer. Each step creates context that helps produce the next step correctly.
There are two main approaches:
Zero-Shot CoT
Simply add "Let's think step by step" to your prompt. No examples needed.
Few-Shot CoT
Provide examples that include the reasoning steps, then ask the AI to solve a new problem in the same way.
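The two approaches differ only in how the prompt text is assembled. Here is a minimal sketch in Python that builds both prompt styles as plain strings; the model call itself is omitted, so you can wire these into whatever client you use (the worked example in the few-shot prompt is invented for illustration):

```python
question = (
    "A store sells apples for $2 each and oranges for $3 each. "
    "Sarah buys 4 apples and 3 oranges and pays with a $20 bill. "
    "How much change does she get?"
)

# Zero-shot CoT: just append the trigger phrase.
zero_shot = f"{question}\n\nLet's think step by step."

# Few-shot CoT: show one worked example WITH its reasoning,
# then pose the new problem in the same Q/A format.
few_shot = (
    "Q: Tom buys 2 pens at $3 each and pays with a $10 bill. "
    "How much change does he get?\n"
    "A: Pens cost 2 x $3 = $6. Change = $10 - $6 = $4. The answer is $4.\n\n"
    f"Q: {question}\n"
    "A:"
)
```

Ending the few-shot prompt with "A:" nudges the model to continue in the same reasoning-then-answer format as the worked example.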
Prompt Example
You are a math tutor helping a student solve word problems.
Problem: A store sells apples for $2 each and oranges for $3 each.
Sarah buys 4 apples and 3 oranges. She pays with a $20 bill.
How much change does she get?
Let's think step by step:
1. First, calculate the cost of apples
2. Then, calculate the cost of oranges
3. Add them together for the total
4. Subtract from the amount paid
Show each step clearly.
❌ Bad Example
Sarah buys 4 apples at $2 and 3 oranges at $3 with a $20 bill.
How much change?
Problem: The AI may jump straight to an answer without showing its work, increasing the chance of errors. It might get simple problems right, but on complex ones this approach often fails.
✅ Improved Example
Sarah buys 4 apples at $2 each and 3 oranges at $3 each.
She pays with a $20 bill.
Let's solve this step by step:
Step 1: Cost of apples = 4 × $2 = ?
Step 2: Cost of oranges = 3 × $3 = ?
Step 3: Total cost = Step 1 + Step 2 = ?
Step 4: Change = $20 - Step 3 = ?
Show your work for each step.
Why it works: Forcing the AI to fill in each step means it performs the calculation at each point rather than trying to compute everything at once.
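The four steps in the improved prompt are ordinary arithmetic, so you can verify the expected answer directly:

```python
# Mirror the four steps from the improved prompt.
apples = 4 * 2             # Step 1: cost of apples
oranges = 3 * 3            # Step 2: cost of oranges
total = apples + oranges   # Step 3: total cost
change = 20 - total        # Step 4: change from a $20 bill

print(apples, oranges, total, change)  # 8 9 17 3
```

Having the ground truth on hand makes it easy to check each intermediate step the model produces, not just its final answer.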
🧪 Try It Yourself
Try this multi-step logic problem with CoT:
Problem: There are 3 boxes. Box A has only red balls. Box B has only blue balls. Box C has a mix of red and blue. All labels are wrong. You can pick one ball from one box. Which box do you pick from, and how does that tell you what's in all three boxes?
Write a prompt that uses chain of thought to solve this. Start with "Let's reason through this step by step."
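If you want to verify the chain of thought the model produces, the puzzle is small enough to brute-force. This sketch (an illustration, not part of the exercise) enumerates every assignment of contents to boxes in which all three labels are wrong, then checks that one ball drawn from the box labeled "mixed" pins down all three boxes:

```python
from itertools import permutations

labels = ("red", "blue", "mixed")

# Valid worlds: every label is wrong (a derangement of the labels).
worlds = [p for p in permutations(labels)
          if all(actual != label for actual, label in zip(p, labels))]

# Draw from the box labeled "mixed" (index 2). Its real contents
# can't be "mixed", so the ball's color names the contents exactly.
for world in worlds:
    ball = world[2]  # the box holds only red or only blue balls
    consistent = [w for w in worlds if w[2] == ball]
    assert len(consistent) == 1  # one ball determines all three boxes
```

Only two worlds survive the all-labels-wrong constraint, and the single draw distinguishes them, which is exactly the deduction a good chain-of-thought answer should spell out.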
Real-World Scenario
Code Debugging with CoT:
Instead of asking "What's wrong with this code?", use CoT:
Here is a Python function that should return the factorial of a number,
but it gives wrong results:
def factorial(n):
    result = 0
    for i in range(1, n+1):
        result *= i
    return result
Let's debug this step by step:
1. Trace through the function with n=5
2. Track the value of 'result' at each iteration
3. Identify where the logic breaks down
4. Explain the fix
This approach helps the AI catch that result starts at 0, so every multiplication gives 0. CoT makes this obvious by tracing through the execution.
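The fix the trace points to is starting result at 1 (the multiplicative identity) instead of 0:

```python
def factorial(n):
    result = 1  # was 0: anything multiplied by 0 stays 0
    for i in range(1, n + 1):
        result *= i
    return result

print(factorial(5))  # 120
```

Tracing with n=5, result now goes 1 → 1 → 2 → 6 → 24 → 120, which is exactly the iteration-by-iteration record the CoT prompt asks the model to produce.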
Q: What is Chain of Thought prompting and when would you use it?
A: Chain of Thought prompting asks the AI to show its reasoning step by step before giving a final answer. You use it when the task requires multi-step reasoning: math, logic, analysis, debugging, or any problem where jumping straight to the answer leads to errors. The simplest form is appending "Let's think step by step" to your prompt. It works because each reasoning step creates context tokens that help the model produce more accurate subsequent steps. CoT is most valuable for complex tasks; for simple factual questions, it adds unnecessary length without improving accuracy.
- Chain of Thought = asking the AI to reason step by step before answering
- Add "Let's think step by step" for zero-shot CoT
- Provide worked examples for few-shot CoT
- Dramatically improves math, logic, and multi-step reasoning
- Makes errors visible and debuggable
- Best for complex tasks; skip it for simple factual questions
- Each intermediate step gives the model more computation and context