The gap between average and elite AI results is the prompt. With the right frameworks and prompt patterns, teams cut revision cycles dramatically and ship production ready work faster. This guide goes beyond theory with copy paste prompt packs, practice labs, and quality rubrics you can use today.
Explore more prompt recipes in Prompt Engineering. Pair these techniques with tools in writing tools, coding tools, research tools, and productivity tools, or see all AI tools.
What Is Prompt Engineering
Prompt engineering is conversational programming. You set role, context, constraints, steps, and output format so the model can reason and deliver exactly what you need. Treat each prompt as a spec. Good specs produce reliable results.
Why It Matters
- Precision, removes ambiguity and cuts rework.
- Efficiency, consistent first try outputs save time and tokens.
- Quality unlock, advanced model skills surface with the right cues.
Frameworks That Never Fail, CLEAR plus Patterns
CLEAR
- Context, background and constraints.
- Length, exact size or ranges.
- Examples, style anchors or mini samples.
- Audience, who will consume it and why.
- Role, the expert persona you want the model to adopt.
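The five CLEAR fields can be assembled mechanically. A minimal Python sketch, where `clear_prompt` and its field names are hypothetical helpers for illustration, not a standard API:

```python
def clear_prompt(context: str, length: str, examples: str,
                 audience: str, role: str) -> str:
    """Assemble the five CLEAR fields into one prompt block."""
    return "\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Audience: {audience}",
        f"Length: {length}",
        f"Examples: {examples}",
    ])

prompt = clear_prompt(
    context="B2B SaaS launch announcement",
    length="800 words",
    examples="match the tone of our two most recent posts",
    audience="startup founders",
    role="senior editor",
)
```

Filling every field forces you to state the constraints the model would otherwise have to guess.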
Prompt Patterns to Know
- CoT, chain of thought with step numbers for reasoning tasks.
- ReAct, reason then act with tool calls or searches.
- Few shot, two to three examples that define the mapping you want.
- Self check, ask the model to validate outputs against a rubric.
- JSON schema, force structured outputs for downstream tools.
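The few shot pattern above is easy to script. A hedged sketch, where `few_shot_prompt` is a hypothetical helper that stitches two or three input and output pairs into a prompt string:

```python
def few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Build a few shot prompt where example pairs define the mapping."""
    lines = [task, ""]
    for src, dst in examples:
        lines += [f"Input: {src}", f"Output: {dst}", ""]
    # End on an open "Output:" so the model completes the pattern.
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Rewrite each headline in sentence case.",
    [("10 BEST AI TOOLS", "10 best AI tools"),
     ("WHY PROMPTS FAIL", "Why prompts fail")],
    "PROMPT PATTERNS THAT WORK",
)
```

The trailing open "Output:" line is the cue that tells the model to apply the demonstrated mapping to the new input.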
Model Specific Power Prompts
ChatGPT, Creative plus Structured Output
Role, senior editor and SEO strategist.
Task, write a 1200 word article outline about <topic> with skimmable H2 and H3 headings and bullet points.
Constraints, sentences under 20 words, no fluff, no em dash, include 5 internal link cues like "see all AI tools" and "browse categories".
Add, a 155 character meta description and a JSON block with title, slug, and 6 SEO keywords.
Output, first the outline in Markdown, then the JSON as a separate code block.
Claude, Long Briefs and High Fidelity Instructions
You are a principal technical writer.
Objective, produce a step by step guide for <task>.
Input, the following 3 source snippets, synthesize and remove contradictions.
Constraints, numbered steps, callouts for risks, and a final checklist.
Validation, append a "Quality Gate" section that confirms accuracy against the source in 5 bullets.
Output format, Markdown only.
Gemini, Research plus Tables and Source Reminders
Role, research analyst.
Task, summarize current approaches to <topic> and produce a comparison table.
Request, include short citations and a "confidence" note if data is uncertain.
Deliverables, 1 paragraph summary, a 6 column table, and a list of open questions for follow up.
Output, Markdown with a final JSON block listing the table headers.
Multimodal and Code Prompt Recipes
Design Review from Image
Act as a UX reviewer. Input image, <screenshot description or attach image>.
Goals, improve readability, hierarchy, and conversions.
Deliver, 10 specific issues with severity tags, then a revised copy deck, then CSS level suggestions.
Output, Markdown with a final checklist.
Midjourney Style Brief, Marketing Visual
Subject, <product> hero on neutral studio background.
Style, modern commercial, soft key light, crisp edges, subtle reflections.
Composition, centered 3/4 angle, clean negative space for copy.
Palette, brand primary and two accents.
Quality, high detail, realistic materials, minimal post.
Negative prompt, watermark, logo, extra text, warped hands.
Code Scaffolding with Tests
Role, senior Python engineer.
Build, a function that deduplicates and normalizes email domains from a CSV.
Inputs, file path, delimiter.
Outputs, cleaned CSV and a JSON summary of unique domains and counts.
Constraints, handle malformed rows, log errors, type hints, and unit tests using pytest.
Return, code only in two blocks, module then tests.
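For reference, a stripped down sketch of the kind of module this prompt requests. It only builds the JSON summary, skips the cleaned CSV, logging, and pytest deliverables, and `summarize_domains` is an illustrative name, not part of any library:

```python
import csv
from collections import Counter

def summarize_domains(path: str, delimiter: str = ",") -> dict:
    """Count unique, normalized email domains from a CSV's first column.

    Malformed rows (empty, or missing an @) are skipped rather than
    allowed to crash the run.
    """
    counts: Counter = Counter()
    with open(path, newline="") as fh:
        for row in csv.reader(fh, delimiter=delimiter):
            if not row or "@" not in row[0]:
                continue  # skip malformed rows
            domain = row[0].rsplit("@", 1)[1].strip().lower()
            counts[domain] += 1
    return {"unique_domains": len(counts), "counts": dict(counts)}
```

A full answer to the prompt would pair this with a test module, which is exactly what the "module then tests" output constraint forces.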
Guardrails, Anti Hallucination and Compliance Prompts
Refuse When Uncertain
Before answering, check if the prompt lacks facts.
If any critical fact is missing, reply with "I do not have enough information to answer confidently" and ask for exactly the missing fields in a numbered list.
Source Bound Output
Only use the provided excerpts to answer.
If an answer cannot be derived from them, say "insufficient evidence in sources".
Append a list of snippet IDs used for each claim.
Policy and Tone Guard
Maintain neutral tone, avoid legal or medical advice.
Flag risky claims and add a "Verification Needed" note.
If asked for policy prohibited actions, decline and suggest a safe alternative.
Power User Blueprints by Role
SEO Content Brief Generator
Role, SEO lead. Topic, <keyword>.
Deliverables, search intent, angle, reader pain points, outline with H2 H3, FAQ list, internal link opportunities using anchors like "see all AI tools" and "browse categories", and a meta description under 155 chars.
Output, Markdown then a JSON block with url_slug and 8 keywords.
Marketing Email Sequence from Product JSON
Role, lifecycle marketer.
Input JSON, product features, personas, objections.
Task, produce a 4 email sequence for trial to paid.
Constraints, subject under 45 chars, preview under 90, body 120 to 180 words, one CTA.
Output, array of emails in JSON plus a plain text version.
Analyst Report from CSV
Role, data analyst.
Task, summarize key trends in the uploaded CSV.
Deliver, top 5 insights with evidence lines, one anomalies section, and a recommended next actions list with priority tags.
Output, Markdown with one small table.
Engineer, Bug Repro and Fix Plan
Role, debugging assistant.
Input, stack trace and steps to reproduce.
Deliver, root cause hypothesis, minimal repro script, fix plan, and regression test ideas.
Output, Markdown and a final checklist.
Pro tip, apply this today
Use the SEO Content Brief Generator blueprint to refresh product and category pages from our e commerce growth guide. Start with your top 20 SKUs, lock tone and internal links, then A/B test meta descriptions and hero copy.
Prompt Testing Harness and Self Critique
Self Critique Rubric
Evaluate the draft against this rubric, scoring 1 to 5 on relevance, completeness, correctness, clarity, and actionability.
List the three weakest areas and revise the draft once to address them.
Return, the revised draft and the rubric table.
JSON Output Contract
Return JSON that validates against this schema,
{ "title": "string", "meta_description": "string", "headings": ["string"], "internal_links": ["string"] }.
If any field is missing, say "schema error" and output nothing else.
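Enforcing this contract on the client side takes a few lines. A minimal sketch using only the standard library, with `validate_reply` as a hypothetical helper mirroring the schema above:

```python
import json

# Expected fields and types from the schema in the prompt above.
REQUIRED_KEYS = {
    "title": str,
    "meta_description": str,
    "headings": list,
    "internal_links": list,
}

def validate_reply(raw: str) -> dict:
    """Parse a model reply and reject it if any contract field is off."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    for key, expected_type in REQUIRED_KEYS.items():
        if not isinstance(data.get(key), expected_type):
            raise ValueError(f"schema error: {key}")
    return data

reply = ('{"title": "AI Tools", "meta_description": "Short summary.", '
         '"headings": ["Intro"], "internal_links": ["see all AI tools"]}')
print(validate_reply(reply)["title"])  # AI Tools
```

Rejecting bad replies in code, not just in the prompt, keeps downstream tools from consuming half valid JSON.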
A/B Prompt Experiments
Generate two prompt variants, A and B, that differ only in constraints and examples.
Explain the intended effect of the difference in one sentence.
Return both prompts and a one paragraph hypothesis.
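Collecting outputs per variant before scoring them can be sketched as below, where `call_model` is a placeholder for your actual API client, not a real function:

```python
import random
from collections import defaultdict

def call_model(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned draft here."""
    return "draft"

def run_experiment(variants: dict[str, str], n: int = 10) -> dict[str, list[str]]:
    """Randomly alternate prompt variants and collect outputs for scoring."""
    results: defaultdict = defaultdict(list)
    for _ in range(n):
        name, prompt = random.choice(list(variants.items()))
        results[name].append(call_model(prompt))
    return dict(results)

outputs = run_experiment({"A": "variant A prompt", "B": "variant B prompt"}, n=8)
```

Score each bucket with the self critique rubric above and keep the variant that wins on first try success.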
Practice Lab, Three 15 Minute Drills
Drill 1, Turn Vague to Specific
Start with a vague prompt, improve it using CLEAR, add a short example, and force a schema. Track first try success.
Drill 2, Multimodal Critique
Describe a screenshot and request a UX critique plus a revised copy deck and CSS suggestions.
Drill 3, Research Table
Ask for a summarized table on a topic with citations and a confidence note, then request a follow up questions list.
Measure What Matters
- First try success rate, aim for seventy percent or better.
- Iteration count, target one to two refinements.
- Token efficiency, reduce context bloat and force concise outputs.
- Time to quality, under ten minutes for publish ready drafts.
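These metrics are simple to compute from a run log. A sketch with invented sample data, assuming you record each run as a prompt id, a first try acceptance flag, and an iteration count:

```python
# Hypothetical run log: (prompt_id, accepted_on_first_try, iterations)
runs = [
    ("seo-brief", True, 1),
    ("email-seq", False, 3),
    ("bug-repro", True, 1),
    ("analyst", True, 2),
]

first_try_rate = sum(ok for _, ok, _ in runs) / len(runs)
avg_iterations = sum(n for _, _, n in runs) / len(runs)
print(f"first-try success: {first_try_rate:.0%}, avg iterations: {avg_iterations:.2f}")
# first-try success: 75%, avg iterations: 1.75
```

Run this monthly over your prompt library to spot the underperformers worth retiring.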
Maintain a prompt library by use case and model. Review and retire underperformers monthly. For workflow helpers see productivity tools.
Conclusion
Great prompts behave like great specs. Use CLEAR and proven patterns, add guardrails, test variants, and track results. Build a living prompt library and improve it weekly. Find more patterns in Prompt Engineering, compare engines in see all AI tools, and browse writing tools, coding tools, research tools, and productivity tools to match prompts with the best engines.