November 10, 2024 · 7 min read

Prompt Engineering Patterns That Actually Work

After writing thousands of prompts, these are the patterns I've found consistently improve results. No magic tricks, just engineering principles.

Prompt Engineering · LLM · Best Practices

The Reality of Prompt Engineering

Prompt engineering has a bad reputation. It sounds like "we'll just ask nicely and hope it works."

In reality, effective prompting is about reducing ambiguity and giving the model the information it needs. These patterns work because they're systematic, not because they're magic.

Here are the patterns I use daily.

Pattern 1: Role + Context + Task + Format

The most reliable prompt structure:

[ROLE]
You are a [specific role] with [specific expertise].

[CONTEXT]
Background information: [relevant context]
Current situation: [what's happening]

[TASK]
Your job is to [specific action].

[FORMAT]
Respond in [specific format].

Example:

You are a senior software engineer reviewing code for production readiness.

Background: This code will handle payment processing for an e-commerce platform.
The system processes ~1000 transactions per hour during peak times.

Review the following code for:
1. Security vulnerabilities
2. Performance issues
3. Error handling gaps

Respond with a numbered list of issues, each with:
- Issue description
- Severity (Critical/High/Medium/Low)
- Suggested fix

Why it works:

  • Role sets the expertise level and perspective
  • Context provides domain-specific information
  • Task is unambiguous and specific
  • Format makes output predictable and parseable
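
If you build prompts in code, this structure maps cleanly onto a small template function. A minimal Python sketch (the function and parameter names here are mine, not any library's API):

def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Assemble a Role + Context + Task + Format prompt from its four parts."""
    return (
        f"You are {role}.\n\n"
        f"Context:\n{context}\n\n"
        f"Task:\n{task}\n\n"
        f"Respond in this format:\n{output_format}"
    )

prompt = build_prompt(
    role="a senior software engineer reviewing code for production readiness",
    context="Payment-processing code for an e-commerce platform handling ~1000 transactions/hour at peak.",
    task="Review the code for security vulnerabilities, performance issues, and error handling gaps.",
    output_format="A numbered list of issues, each with a description, severity (Critical/High/Medium/Low), and a suggested fix.",
)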

Pattern 2: Few-Shot Examples

Show, don't tell:

Convert the following descriptions to SQL queries.

Example 1:
Description: Find all users who signed up in 2024
SQL: SELECT * FROM users WHERE created_at >= '2024-01-01'

Example 2:
Description: Count orders by status
SQL: SELECT status, COUNT(*) FROM orders GROUP BY status

Example 3:
Description: Get the top 10 customers by total spend
SQL: SELECT customer_id, SUM(amount) as total FROM orders GROUP BY customer_id ORDER BY total DESC LIMIT 10

Now convert:
Description: Find all products that haven't been ordered
SQL:

When to use few-shot:

  • Complex formatting requirements
  • Domain-specific conventions
  • When zero-shot gives inconsistent results

Few-shot tips:

  • 3-5 examples is usually enough
  • Cover edge cases in examples
  • Order from simple to complex
  • Match the difficulty of your actual task
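
When the examples live in your codebase, it helps to assemble the few-shot block programmatically so every example is formatted identically. A sketch (the helper below is my own, not a library function):

# Few-shot examples as (description, SQL) pairs.
EXAMPLES = [
    ("Find all users who signed up in 2024",
     "SELECT * FROM users WHERE created_at >= '2024-01-01'"),
    ("Count orders by status",
     "SELECT status, COUNT(*) FROM orders GROUP BY status"),
    ("Get the top 10 customers by total spend",
     "SELECT customer_id, SUM(amount) AS total FROM orders "
     "GROUP BY customer_id ORDER BY total DESC LIMIT 10"),
]

def few_shot_prompt(task: str) -> str:
    """Format the examples and the new task into one prompt."""
    parts = ["Convert the following descriptions to SQL queries.\n"]
    for i, (description, sql) in enumerate(EXAMPLES, start=1):
        parts.append(f"Example {i}:\nDescription: {description}\nSQL: {sql}\n")
    parts.append(f"Now convert:\nDescription: {task}\nSQL:")
    return "\n".join(parts)

print(few_shot_prompt("Find all products that haven't been ordered"))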

Pattern 3: Chain of Thought

For complex reasoning, ask the model to show its work:

Before providing your answer, work through this step by step:

1. Identify the key variables in the problem
2. State any assumptions you're making
3. Work through the logic
4. Verify your reasoning
5. Provide the final answer

Question: A store offers a 20% discount on all items. If you have a 15% off coupon that applies after the store discount, what's the total percentage saved on a $100 item?

Step-by-step reasoning:

When to use:

  • Math problems
  • Multi-step reasoning
  • Logic puzzles
  • Any task where you'd want to "see the work"

Variations:

  • "Think step by step"
  • "Let's work through this systematically"
  • "Break this down into parts"

Pattern 4: Output Constraints

Be explicit about what you do and don't want:

Answer the customer's question.

MUST:
- Use only information from the provided documents
- Include a source citation
- Keep response under 100 words

MUST NOT:
- Speculate or make up information
- Discuss competitors
- Provide medical/legal/financial advice
- Include internal pricing information

If the documents don't contain the answer, respond: "I don't have information on that topic."

Constraint categories:

  • Content constraints (what to include/exclude)
  • Format constraints (length, structure)
  • Safety constraints (topics to avoid)
  • Fallback behavior (what to do when uncertain)
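
Constraints are easiest to keep honest when you also check them after the response comes back. A rough post-hoc checker for the format rules above (the "Source:" citation convention is an assumption of mine, not part of the prompt):

FALLBACK = "I don't have information on that topic."

def check_constraints(response: str) -> list[str]:
    """Flag violations of the format constraints from the prompt above."""
    if response.strip() == FALLBACK:
        return []  # the fallback response is exempt from the other rules
    violations = []
    if len(response.split()) > 100:
        violations.append("over 100 words")
    if "source:" not in response.lower():  # assumes citations look like "Source: ..."
        violations.append("missing source citation")
    return violations

print(check_constraints("The Pro plan supports single sign-on. Source: product-faq.md"))  # []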

Pattern 5: Structured Output

When you need parseable output, specify the structure exactly:

Extract the following information from the email and respond in valid JSON:

{
  "sender_name": "string",
  "sender_email": "string",
  "subject": "string",
  "intent": "inquiry" | "complaint" | "order" | "other",
  "urgency": "low" | "medium" | "high",
  "action_required": boolean,
  "summary": "string (max 50 words)"
}

Email:
---
[email content]
---

JSON:

Tips:

  • Show the exact schema expected
  • Use type annotations (string, boolean, etc.)
  • Show enum options when applicable
  • Include field constraints (max length)
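
Structured output is only useful if you validate it before trusting it downstream. A minimal validation sketch using only the standard library; the field names mirror the schema above, and in practice you might reach for Pydantic or JSON Schema instead:

import json

ALLOWED_INTENTS = {"inquiry", "complaint", "order", "other"}
ALLOWED_URGENCY = {"low", "medium", "high"}

def parse_email_extraction(raw: str) -> dict:
    """Parse and validate the model's JSON output against the expected schema."""
    data = json.loads(raw)  # raises json.JSONDecodeError if the output isn't valid JSON
    for field in ("sender_name", "sender_email", "subject", "summary"):
        if not isinstance(data.get(field), str):
            raise ValueError(f"{field} must be a string")
    if data.get("intent") not in ALLOWED_INTENTS:
        raise ValueError("intent must be one of the allowed values")
    if data.get("urgency") not in ALLOWED_URGENCY:
        raise ValueError("urgency must be one of the allowed values")
    if not isinstance(data.get("action_required"), bool):
        raise ValueError("action_required must be a boolean")
    if len(data["summary"].split()) > 50:
        raise ValueError("summary exceeds 50 words")
    return data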

Pattern 6: Persona Injection

Give the model a detailed persona for consistency:

You are Sarah, a customer support agent at TechCorp with these characteristics:

KNOWLEDGE:
- Deep knowledge of TechCorp's product line
- 3 years experience handling support tickets
- Certified in troubleshooting our enterprise products

COMMUNICATION STYLE:
- Professional but warm
- Uses simple language, avoids jargon
- Confirms understanding before moving on
- Apologizes sincerely for issues

LIMITATIONS:
- Cannot process refunds (must escalate)
- Cannot access customer payment info
- Cannot make promises about future features

Respond to this customer message as Sarah would:

When to use:

  • Customer-facing applications
  • Consistent chatbot personalities
  • Role-playing scenarios
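
In chat-style APIs the persona usually lives in the system message, so it applies to every turn without being repeated. A sketch of how the pieces might fit together (the message-dict shape is the common chat convention, not a specific SDK):

SARAH_PERSONA = """You are Sarah, a customer support agent at TechCorp.
Knowledge: deep knowledge of TechCorp's product line; 3 years handling support tickets; certified on enterprise products.
Style: professional but warm; simple language, no jargon; confirms understanding; apologizes sincerely for issues.
Limitations: cannot process refunds (must escalate), cannot access payment info, cannot promise future features."""

def support_messages(customer_message: str, history: list[dict] | None = None) -> list[dict]:
    """Build a chat message list with the persona pinned as the system prompt."""
    return [
        {"role": "system", "content": SARAH_PERSONA},
        *(history or []),  # prior turns keep the conversation consistent
        {"role": "user", "content": customer_message},
    ]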

Pattern 7: Negative Examples

Show what NOT to do:

Write a product description for this item.

GOOD example (DO write like this):
"The EcoBottle keeps drinks cold for 24 hours with its vacuum-insulated design. Fits standard cup holders. BPA-free."

BAD example (DON'T write like this):
"This AMAZING revolutionary bottle will CHANGE YOUR LIFE!!! You've NEVER seen anything like it! Buy NOW!!!"

The good example is factual and specific. The bad example uses hype and lacks substance.

Now write a description for:
[product details]

When to use:

  • Tone calibration
  • Avoiding common failure modes
  • Clarifying subtle distinctions

Pattern 8: Self-Verification

Ask the model to check its own work:

Answer the following question, then verify your answer is correct.

Question: [the question]

Your process:
1. Provide your initial answer
2. Check: Does this answer actually address the question asked?
3. Check: Are there any factual errors or assumptions?
4. Check: Is anything important missing?
5. Provide your final verified answer

If you find errors in steps 2-4, correct them in step 5.

Variations:

  • "Review your answer for accuracy"
  • "Identify any potential issues with your response"
  • "Rate your confidence in this answer (1-10) and explain"

Pattern 9: Iterative Refinement

Build up complex outputs in stages:

We'll create a project plan in three passes.

PASS 1 - ROUGH OUTLINE
List the major phases of this project (3-5 phases)

[model responds]

PASS 2 - DETAILED BREAKDOWN
For each phase, add:
- Key milestones
- Estimated duration
- Dependencies

[model continues]

PASS 3 - RISK ASSESSMENT
For each phase, identify:
- Top 2 risks
- Mitigation strategies

[model continues]

When to use:

  • Complex documents
  • When quality matters more than speed
  • Multi-faceted outputs
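
In code, iterative refinement is just a loop over pass instructions that keeps the growing transcript in context. A sketch reusing a placeholder chat call (the callable and message-dict shape are assumptions, not a particular SDK):

from typing import Callable

PASSES = [
    "PASS 1 - ROUGH OUTLINE: list the major phases of this project (3-5 phases).",
    "PASS 2 - DETAILED BREAKDOWN: for each phase, add key milestones, estimated duration, and dependencies.",
    "PASS 3 - RISK ASSESSMENT: for each phase, identify the top 2 risks and mitigation strategies.",
]

def refine_plan(project_brief: str, call_llm_chat: Callable[[list[dict]], str]) -> str:
    """Run the three passes in one conversation so each pass builds on the previous output."""
    messages = [{"role": "system", "content": f"We are creating a project plan for: {project_brief}"}]
    reply = ""
    for instruction in PASSES:
        messages.append({"role": "user", "content": instruction})
        reply = call_llm_chat(messages)  # placeholder: pass in your own client wrapper
        messages.append({"role": "assistant", "content": reply})
    return reply  # the final pass carries the fully refined plan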

Pattern 10: Explicit Reasoning Modes

Tell the model which type of thinking to apply:

I need you to analyze this decision using three lenses:

ANALYTICAL LENS:
What do the numbers and data say? What's the logical conclusion?

CREATIVE LENS:
What alternatives haven't been considered? What if we did the opposite?

SKEPTICAL LENS:
What could go wrong? What assumptions might be invalid?

Then synthesize these perspectives into a recommendation.

Decision to analyze: [decision details]

Other reasoning modes:

  • First principles vs. pattern matching
  • Optimistic vs. pessimistic
  • Short-term vs. long-term
  • User vs. business perspective
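
If you run the lenses as separate calls instead of one prompt, the synthesis step can weigh each analysis on its own. A sketch, again with a placeholder call_llm standing in for your client:

LENSES = {
    "analytical": "What do the numbers and data say? What's the logical conclusion?",
    "creative": "What alternatives haven't been considered? What if we did the opposite?",
    "skeptical": "What could go wrong? What assumptions might be invalid?",
}

def analyze_decision(decision: str, call_llm) -> str:
    """Apply each reasoning lens in its own call, then synthesize a recommendation."""
    analyses = {
        name: call_llm(f"Analyze this decision through a {name} lens.\n{question}\n\nDecision: {decision}")
        for name, question in LENSES.items()
    }
    synthesis = "Synthesize these perspectives into a recommendation:\n\n" + "\n\n".join(
        f"{name.upper()} LENS:\n{analysis}" for name, analysis in analyses.items()
    )
    return call_llm(synthesis)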

Anti-Patterns to Avoid

❌ Vague Instructions

# Bad
Make this better.

# Good  
Improve this email by making it more concise, adding a clear call-to-action, and fixing grammar issues.

❌ Contradictory Requirements

# Bad
Be concise but also comprehensive and include all details.

# Good
Provide a 2-3 sentence summary, followed by a bullet list of key details.

❌ Hoping for Mind-Reading

# Bad
You know what I mean.

# Good
I want X specifically because of Y context.

❌ Over-Prompting

# Bad (too many constraints cause confusion)
[20 paragraphs of instructions]

# Good
[5-10 clear rules]

Conclusion

Effective prompting isn't magic. It's about:

  1. Clarity — Remove ambiguity
  2. Specificity — State exactly what you want
  3. Examples — Show, don't just tell
  4. Structure — Make output predictable
  5. Constraints — Define boundaries explicitly

Start with the Role + Context + Task + Format pattern. Add other patterns as needed. Test systematically. Iterate.

The best prompts are not clever—they're clear.


What's your go-to prompt pattern?

Written by Abhinav Mahajan

AI Product & Engineering Leader

I write about building AI systems that work in production—from RAG pipelines to agent architectures. These insights come from real experience shipping enterprise AI.
