
Meta-Prompting

Use AI to generate, evaluate, and improve prompts — turning the model into your prompt engineering assistant.

Advanced · Free · Published: April 15, 2026
Compatible tools: claude-code, chatgpt, gemini, copilot, cursor, windsurf, universal

The Problem

Writing effective prompts is itself a skill that takes iteration. Most people write a prompt, get mediocre results, tweak it manually, and repeat — a slow, inefficient cycle. Meta-prompting flips this: you ask the AI to analyze your prompt, identify weaknesses, and generate an improved version. The model becomes your prompt engineering partner.
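The analyze–improve cycle can also be automated. Below is a minimal sketch, assuming a generic `llm` callable that wraps whatever API you use; the function name, the `<improved>` tag convention, and the round count are all illustrative, not part of any specific SDK:

```python
def meta_improve(prompt: str, llm, rounds: int = 2) -> str:
    """Iteratively ask the model to critique and rewrite a prompt.

    `llm` is any callable that takes a prompt string and returns the
    model's reply as a string (a hypothetical stand-in for your API client).
    """
    current = prompt
    for _ in range(rounds):
        critique_request = (
            "You are an expert prompt engineer. Analyze the prompt below, "
            "list its failure modes, then output ONLY an improved version "
            "between <improved> and </improved> tags.\n\n"
            f'PROMPT:\n"""\n{current}\n"""'
        )
        reply = llm(critique_request)
        # Extract the rewritten prompt; keep the previous version if the
        # model did not follow the tag convention.
        if "<improved>" in reply and "</improved>" in reply:
            current = reply.split("<improved>", 1)[1].split("</improved>", 1)[0].strip()
    return current
```

Passing the model client in as a callable keeps the loop testable: you can dry-run it with a stub before spending tokens.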

The Prompt

You are an expert prompt engineer. I will give you a prompt that I have been using. Your job is to:

1. ANALYZE: Identify what the prompt does well and where it falls short
2. DIAGNOSE: List specific failure modes this prompt is vulnerable to
3. IMPROVE: Write an improved version that addresses every weakness
4. EXPLAIN: For each change, explain why it improves the output

MY CURRENT PROMPT:
"""
[paste your existing prompt here]
"""

CONTEXT:
- I use this prompt for: [task description]
- The model I use: [model name]
- Common problems I see: [list recurring issues with the output]
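If you reuse this template often, filling the placeholders by hand gets tedious. A minimal sketch of doing it programmatically — the function name and parameter names are illustrative; the template text mirrors the prompt above:

```python
def build_meta_prompt(existing_prompt: str, task: str, model: str, problems: list[str]) -> str:
    """Fill the meta-prompting template with a prompt and its context."""
    problem_lines = "\n".join(f"  - {p}" for p in problems)
    return (
        "You are an expert prompt engineer. I will give you a prompt "
        "that I have been using. Your job is to:\n\n"
        "1. ANALYZE: Identify what the prompt does well and where it falls short\n"
        "2. DIAGNOSE: List specific failure modes this prompt is vulnerable to\n"
        "3. IMPROVE: Write an improved version that addresses every weakness\n"
        "4. EXPLAIN: For each change, explain why it improves the output\n\n"
        f'MY CURRENT PROMPT:\n"""\n{existing_prompt}\n"""\n\n'
        "CONTEXT:\n"
        f"- I use this prompt for: {task}\n"
        f"- The model I use: {model}\n"
        f"- Common problems I see:\n{problem_lines}"
    )
```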

Example Output

ANALYSIS:
Your prompt is good at defining the task but weak on output format and constraints.

FAILURE MODES:
1. No format specification — the model guesses between prose, bullets, and code
2. Missing persona — generic responses instead of expert-level analysis
3. No negative constraints — the model adds unnecessary disclaimers

IMPROVED PROMPT:
"""
You are a senior security engineer. Analyze this code for vulnerabilities.

Output format: numbered list, one vulnerability per item.
Each item: [severity: critical/high/medium/low] — description — fix (code snippet).

Do not: explain what the code does, suggest general best practices, or add disclaimers.
Only report actual vulnerabilities found in the provided code.

CODE:
[code here]
"""

CHANGES EXPLAINED:
1. Added persona ("senior security engineer") — triggers deeper security knowledge
2. Defined exact output format — eliminates formatting inconsistency
3. Added negative constraints — prevents irrelevant padding

When to Use

Use meta-prompting when you have a prompt that works “okay” but not great, when you are developing reusable prompt templates for your team, or when you want to systematically improve prompt quality over time. It is also excellent for learning prompt engineering by seeing how an expert would restructure your prompts.

Pro Tips

  • Include your failed outputs — show the model what went wrong so it can diagnose the specific cause, not just guess.
  • Ask for multiple variants — request 3 improved versions targeting different optimization goals (brevity, accuracy, creativity).
  • Iterate the meta-prompt itself — if the improvements are shallow, refine your meta-prompt with more context about your failure modes.
  • Build a prompt library — save your best meta-prompted results as templates for future reuse across projects.
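The prompt library from the last tip can be as simple as a JSON file keyed by name, with each save appended as a new version. A minimal sketch — the class, file layout, and field names are all illustrative:

```python
import json
from pathlib import Path

class PromptLibrary:
    """Tiny versioned store for meta-prompted results (illustrative sketch)."""

    def __init__(self, path: str = "prompts.json"):
        self.path = Path(path)
        self.entries = json.loads(self.path.read_text()) if self.path.exists() else {}

    def save(self, name: str, prompt: str, note: str = "") -> int:
        """Append a new version under `name`; returns the 1-based version number."""
        versions = self.entries.setdefault(name, [])
        versions.append({"prompt": prompt, "note": note})
        self.path.write_text(json.dumps(self.entries, indent=2))
        return len(versions)

    def latest(self, name: str) -> str:
        """Return the most recent version of the named prompt."""
        return self.entries[name][-1]["prompt"]
```

Keeping the note field filled in ("v2: added negative constraints after meta-prompting") preserves the reasoning behind each revision, which is half the value of the exercise.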