Common Prompting Mistakes and How to Fix Them

1. Being vague about the goal

“Improve this text” or “Write better copy” gives the model almost no direction. Always specify what “better” means in your context: shorter, clearer, more persuasive, more formal, aligned with a particular brand voice, and so on.
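As a sketch, here is the difference in practice. The product-page scenario and the specific limits (50 words, free trial claim) are invented for illustration:

```python
# Hypothetical example: turning a vague request into a specific one.
vague_prompt = "Improve this text."

# The rewrite defines what "better" means: purpose, length, tone,
# and what must be preserved. All details here are illustrative.
specific_prompt = (
    "Rewrite the text below for a product landing page. "
    "Make it shorter (max 50 words), more direct, and friendly in tone. "
    "Keep the key claim about the free trial.\n\n"
    "Text: {text}"
)
```

Every concrete constraint in the second prompt is something the model would otherwise have to guess.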

2. Forgetting about the audience

Good prompts name the audience explicitly. “Explain this to a senior backend engineer” is very different from “explain this to a non‑technical stakeholder”. The same applies to tone: internal memo vs. public blog post vs. customer‑facing email.
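One simple way to keep the audience explicit is to make it a parameter of the prompt rather than an afterthought. The migration scenario below is made up for illustration:

```python
# Hypothetical sketch: the same task, targeted at two different audiences.
TEMPLATE = (
    "Explain why we are migrating from REST to gRPC.\n"
    "Audience: {audience}.\n"
    "Tone: {tone}.\n"
    "Focus on what this audience needs to decide or do next."
)

engineer_prompt = TEMPLATE.format(
    audience="a senior backend engineer",
    tone="technical, internal memo",
)
stakeholder_prompt = TEMPLATE.format(
    audience="a non-technical stakeholder",
    tone="plain language, no jargon",
)
```

Templating the audience and tone makes it impossible to forget them, and makes it easy to generate both versions from the same task description.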

3. No constraints on length or format

If you do not specify length or format, models tend to default to long, generic answers. When you know the constraints of your channel or UI, put them in the prompt: character limits, number of bullet points, sections and headings, required fields.
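A minimal sketch of constraints written directly into the prompt; the specific limits and section names are invented for illustration:

```python
# Hypothetical example: channel constraints stated explicitly in the prompt.
prompt = (
    "Summarize the release notes below.\n"
    "Format: exactly 3 bullet points, each under 120 characters.\n"
    "Use the sections 'New', 'Changed', 'Fixed' - one bullet per section.\n\n"
    "Release notes: {notes}"
)
```

If your UI truncates at 120 characters, say so in the prompt; the model cannot infer your layout.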

4. Asking for too much in one go

Chaining smaller prompts usually works better than one giant request. For example, first ask the model to extract key facts, then to generate options, then to critique those options against your criteria. Each step becomes easier to audit and improve.
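The extract → generate → critique chain above can be sketched as three separate calls. `call_model` is a placeholder for whatever client you actually use; here it is stubbed so the sketch runs on its own, and the input text is invented:

```python
# Hypothetical sketch of chaining three small prompts instead of one big one.
def call_model(prompt: str) -> str:
    # Stub standing in for a real model call.
    return f"<model answer to: {prompt[:40]}...>"

source_text = "Q3 revenue grew 12%; churn rose slightly; support costs fell."

# Step 1: extract the key facts.
facts = call_model(f"List the key facts in this text:\n{source_text}")

# Step 2: generate options based on those facts.
options = call_model(f"Given these facts, propose three headlines:\n{facts}")

# Step 3: critique the options against explicit criteria.
critique = call_model(
    f"Critique these headlines for accuracy and clarity:\n{options}"
)
```

Because each step has one job and one output, you can inspect, log, or rerun any step without redoing the whole chain.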

5. Over‑trusting the first answer

Models are confident, not necessarily correct. Build a habit of asking for alternatives, edge cases and failure modes. For important tasks, always validate outputs with your own judgement or with automated checks where possible.
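Some of those checks can be automated. A minimal sketch, where the word limit and required terms are invented thresholds for illustration:

```python
# Hypothetical sketch: simple automated checks on a model's output before
# accepting it. Thresholds and required terms are made up for illustration.
def validate_summary(summary, max_words=50, required_terms=("free trial",)):
    """Return a list of problems; an empty list means the output passed."""
    problems = []
    if len(summary.split()) > max_words:
        problems.append(f"too long: over {max_words} words")
    for term in required_terms:
        if term.lower() not in summary.lower():
            problems.append(f"missing required term: {term!r}")
    return problems
```

An output that fails these checks goes back for revision instead of shipping, which catches exactly the cases where confidence and correctness diverge.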