How to Write Better AI Prompts: A Practical Guide
The difference between a mediocre AI response and an excellent one is almost always the prompt. You don’t need a “prompt engineering certification” — you need 10 techniques that work. Here they are, with examples.
1. Be Specific About What You Want
Bad prompt: “Write me a blog post about dogs.”
Good prompt: “Write an 800-word blog post about the three best dog breeds for apartment living. Target audience is first-time dog owners in their 20s-30s. Include size, energy level, noise level, and grooming needs for each breed. Conversational tone.”
The first prompt gives the AI no constraints. The second tells it exactly what to produce. More constraints almost always mean better output.
ELI5: Prompt Engineering — Prompt engineering is the skill of asking AI questions in a way that gets the best possible answer. It’s like being a good interviewer — a vague question gets a vague answer, but a specific, well-structured question gets exactly what you need. The “engineering” part just means being deliberate about how you structure your request.
2. Give It a Role
Telling the AI who it is shapes the response dramatically.
“You are a senior Python developer with 15 years of experience. Review this code for security vulnerabilities, performance issues, and readability. Explain each issue and provide the fix.”
“You are an experienced copywriter who specializes in email marketing for e-commerce brands. Write a cart abandonment email sequence.”
The role primes the model to draw on the right patterns. A “senior developer” gives different code review than a “coding tutor” — both useful, but for different purposes.
3. Show, Don’t Just Tell
Instead of describing the format you want, give an example:
“Summarize this article in the following format:
Headline: [One-sentence summary]
Key Points:
- [Point 1]
- [Point 2]
- [Point 3]
So What: [Why this matters in one sentence]
Here’s the article: [paste article]”
Examples are worth more than descriptions. The AI pattern-matches to your example format far more reliably than it follows prose instructions about formatting.
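If you reuse a format like this often, it helps to wrap it in a small helper so every request shows the model the exact same example. A minimal sketch (the function name and sample article text are illustrative, not from any SDK):

```python
def build_summary_prompt(article: str) -> str:
    """Build a summarization prompt that shows the model the exact
    output format instead of describing it in prose."""
    return (
        "Summarize this article in the following format:\n\n"
        "Headline: [One-sentence summary]\n"
        "Key Points:\n"
        "- [Point 1]\n"
        "- [Point 2]\n"
        "- [Point 3]\n"
        "So What: [Why this matters in one sentence]\n\n"
        f"Here's the article: {article}"
    )

prompt = build_summary_prompt("Acme Corp announced record Q3 earnings.")
```

Because the template is frozen in code, every summary you request comes back in the same shape, which makes downstream parsing much easier.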
4. Use Chain-of-Thought for Complex Tasks
For problems that require reasoning, ask the AI to think step-by-step:
“A company’s revenue grew from $2M to $3.2M in 2025. They hired 5 new salespeople at $80K each. Was the revenue growth worth the hiring cost? Think through this step by step before giving your conclusion.”
Without “think step by step,” the AI often jumps to a conclusion. With it, the AI shows its math, catches its own errors, and produces more accurate answers. This works especially well for math, logic, and analysis tasks.
ELI5: Chain-of-Thought — When you ask a kid a math problem and they just blurt out an answer, they’re often wrong. When you say “show your work,” they get it right more often because working through each step catches mistakes. Chain-of-thought prompting is the same idea — you ask the AI to show its work, and it produces better answers because it reasons through each step instead of guessing.
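The arithmetic the model should walk through in the example prompt above is simple enough to check by hand:

```python
# Worked numbers from the revenue-vs-hiring example in technique 4.
revenue_growth = 3_200_000 - 2_000_000   # $1.2M increase in revenue
hiring_cost = 5 * 80_000                 # $400K in new salaries

# Growth exceeds the new salary cost 3 to 1, so on these numbers alone
# the hiring looks worth it (ignoring ramp time, benefits, other costs).
ratio = revenue_growth / hiring_cost
print(ratio)  # 3.0
```

Asking the model to "think step by step" makes it lay out exactly this kind of intermediate calculation, which is where errors get caught.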
5. Set Constraints
Constraints improve quality by eliminating the AI’s tendency to ramble or overcomplicate:
- “Answer in 3 sentences or fewer”
- “Use only words a 10-year-old would understand”
- “Respond with a JSON object containing these fields: …”
- “Don’t use the words ‘delve,’ ‘tapestry,’ or ‘landscape’”
- “Write at an 8th-grade reading level”
The last constraint is particularly useful. AI models default to a slightly formal, slightly verbose style; forcing a reading level produces noticeably more natural writing.
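The JSON constraint pairs well with a validation step in code, so a malformed response fails loudly instead of corrupting your data silently. A minimal sketch (the field names are made up for illustration):

```python
import json

def parse_model_json(raw: str, required_fields: set[str]) -> dict:
    """Parse a model response that was asked to return a JSON object,
    and verify the fields we told it to include are actually present."""
    data = json.loads(raw)  # raises a ValueError subclass on non-JSON output
    missing = required_fields - data.keys()
    if missing:
        raise ValueError(f"Model omitted fields: {missing}")
    return data

# Example reply to: "Respond with a JSON object containing
# these fields: sentiment, confidence"
reply = '{"sentiment": "positive", "confidence": 0.92}'
parsed = parse_model_json(reply, {"sentiment", "confidence"})
```

In practice you would retry the request (or re-prompt with the error message) when validation fails.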
6. Provide Context Before the Ask
Structure your prompts as: context → task → format.
“Context: I run a 10-person SaaS startup. We’re deciding between Stripe and Paddle for payment processing. Our customers are primarily in the US and EU. We need subscription billing, one-time payments, and tax compliance.
Task: Compare Stripe and Paddle for our use case. Focus on pricing, tax handling, and implementation complexity.
Format: Comparison table followed by a recommendation with reasoning.”
Front-loading context means the AI’s entire response is shaped by your specific situation, not generic information.
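The context → task → format ordering is mechanical enough to encode in a helper, which keeps you from forgetting a section when you're in a hurry. A sketch (the example strings are abbreviated from the prompt above):

```python
def build_prompt(context: str, task: str, fmt: str) -> str:
    """Assemble a prompt in context -> task -> format order,
    so the model reads your situation before the request."""
    return f"Context: {context}\n\nTask: {task}\n\nFormat: {fmt}"

prompt = build_prompt(
    context=(
        "I run a 10-person SaaS startup deciding between Stripe "
        "and Paddle for payments. Customers are in the US and EU."
    ),
    task="Compare Stripe and Paddle for our use case.",
    fmt="Comparison table followed by a recommendation with reasoning.",
)
```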
7. Ask for Multiple Options
Don’t accept the first response. Ask for alternatives:
“Give me 5 different subject lines for a product launch email. Make them diverse in approach — try curiosity, urgency, social proof, humor, and direct benefit.”
“Suggest 3 different architectures for this system. For each, explain the tradeoffs between simplicity, scalability, and cost.”
This forces the AI to explore different approaches instead of defaulting to the single most statistically likely answer.
8. Use the System Prompt for Persistent Instructions
If you’re using the API (or custom GPTs/Claude projects), the system prompt sets behavior for the entire conversation:
“You are a financial analyst assistant. Always cite your sources. When you don’t know something, say so instead of guessing. Format numbers with commas and two decimal places. Default to USD unless specified otherwise.”
Rules in the system prompt apply to every response without repeating them. This is the most efficient way to maintain consistent behavior.
ELI5: System Prompt — The system prompt is like the instruction manual you give to a new employee on their first day. “Here’s how we do things around here — always be polite, always double-check the numbers, and if you’re not sure about something, ask.” The AI follows these instructions for the entire conversation, so you don’t have to repeat the rules every time you ask a question.
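In an OpenAI-style chat API, the system prompt is simply the first message in the request, with the role `system`; the per-turn question goes in a `user` message. A sketch of the request payload, with no network call shown (the model name is illustrative; check your provider's documentation for current names and parameters):

```python
# The system message carries the persistent rules; only the
# user message changes from turn to turn.
SYSTEM_PROMPT = (
    "You are a financial analyst assistant. Always cite your sources. "
    "When you don't know something, say so instead of guessing. "
    "Format numbers with commas and two decimal places. "
    "Default to USD unless specified otherwise."
)

def make_request(user_message: str) -> dict:
    return {
        "model": "gpt-4o",  # illustrative model name
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

req = make_request("Summarize Apple's most recent quarterly results.")
```

Note that Anthropic's API takes the system prompt as a separate top-level `system` parameter rather than a message role, but the effect is the same: the instructions apply to every turn.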
9. Iterate, Don’t Start Over
When the AI gives you a 70% answer, don’t rewrite your entire prompt. Build on what it gave you:
- “This is good but too formal. Rewrite it in a more casual, conversational tone.”
- “Keep the structure but make each section half the length.”
- “The second paragraph is wrong — [correct information]. Fix it and keep everything else.”
Iterating is faster and produces better results than re-prompting from scratch, because the AI uses its previous response as context.
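At the API level, iterating just means appending your follow-up to the running message history so the model refines its own previous answer. A sketch in OpenAI-style message format (the draft content is a placeholder):

```python
# The model's previous reply stays in the history, so the follow-up
# instruction operates on it rather than starting from scratch.
history = [
    {"role": "user", "content": "Write a product description for our app."},
    {"role": "assistant", "content": "<first draft returned by the model>"},
]
history.append({
    "role": "user",
    "content": (
        "This is good but too formal. Rewrite it in a more "
        "casual, conversational tone."
    ),
})
```

Chat interfaces do exactly this behind the scenes, which is why a follow-up message works better than pasting a fresh, longer prompt.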
10. Know When to Use Which Model
Different models have different strengths:
- Complex reasoning, analysis, long documents: Claude Opus 4 or GPT-4o
- Creative writing, brainstorming: Claude Sonnet 4 (tends to be more natural)
- Code generation: GPT-4o or Claude Opus 4
- Simple tasks, classification, extraction: Claude Haiku or GPT-4o-mini (cheaper, faster)
- Very long documents: Gemini 2.0 Pro (1M token context) or Claude (200K)
The best prompt in the world won’t fix a mismatch between your task and your model. Use the right tool.
Prompts That Work Across All Models
These templates work with ChatGPT, Claude, Gemini, and most other models:
For analysis: “Analyze [topic/data/document]. List the 3 most important findings. For each finding, explain why it matters and what action I should take. Be specific and use numbers where possible.”
For writing: “Write a [length] [format] about [topic]. Target audience: [who]. Tone: [formal/casual/technical]. Include: [specific requirements]. Avoid: [things you don’t want].”
For code: “Write a [language] function that [what it does]. Input: [describe input]. Output: [describe output]. Handle edge cases for [list them]. Include comments explaining the logic.”
For decisions: “I need to choose between [Option A] and [Option B]. My priorities are [list them in order]. Compare the options against each priority. Give me a recommendation with your reasoning.”
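If you keep templates like these in code or version control, Python's `str.format` gives you fill-in-the-blank behavior for free, and it raises an error for any placeholder you forget to fill. A minimal sketch using the writing template above (field names are my own mapping of its brackets):

```python
WRITING_TEMPLATE = (
    "Write a {length} {format} about {topic}. "
    "Target audience: {audience}. Tone: {tone}. "
    "Include: {include}. Avoid: {avoid}."
)

def fill_template(template: str, **fields: str) -> str:
    """Fill a prompt template; str.format raises KeyError for any
    unfilled placeholder, so an incomplete prompt fails early."""
    return template.format(**fields)

prompt = fill_template(
    WRITING_TEMPLATE,
    length="800-word", format="blog post", topic="apartment dog breeds",
    audience="first-time dog owners in their 20s-30s",
    tone="conversational",
    include="size, energy level, noise level, grooming needs",
    avoid="breed stereotypes",
)
```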
The Meta-Skill
The best prompt engineers aren’t people who memorize templates. They’re people who clearly understand what they want before they start typing. If you can’t explain what you need to a smart human intern, you can’t explain it to an AI.
Spend 30 seconds thinking about your ideal output before writing the prompt. What format? How long? What tone? What should be included? What should be excluded? Those 30 seconds save 5 minutes of back-and-forth.
For more on how different models handle prompts, see our Claude vs. GPT comparison.