Does "You Are an Expert ___" Actually Help? Designing Prompts When You Hand Work to an AI
Does the "You are an expert ___" pattern actually work? We unpack when role-setting helps, when it doesn't, and why goals, criteria, output format, and constraints matter more than job titles when you delegate work to an AI.
Who this article is for
In the previous post, "Before You Hunt for the 'Ultimate Copy-Paste Prompt': Preparing the Inputs You Hand to an AI," we focused on getting the context right before talking to the model. This post moves one layer up and looks at the prompt itself.
It's written for people who want to bring AI development or AI usage into their work but aren't sure how to start. If you've browsed prompt collections, you've seen the pattern: nearly every entry opens with "You are an expert PM / engineer / marketer / editor / consultant." Reading those, it's natural to wonder whether assigning a role really does the work.
This post is the practical answer.
TL;DR: role-setting alone is weak, but it isn't useless
To put the conclusion up front:
- Writing "You are an expert ___" by itself rarely stabilizes output quality.
- That said, role-setting isn't useless. It works under specific conditions.
- What actually matters is not who the model pretends to be, but what it should produce, from which angle, in which format, under which constraints.
The center of gravity in prompt design is the goal, the lens, the output shape, and the decision criteria. The role line is a supporting hint, not the load-bearing wall.
Why "You are an expert ___" alone falls flat
Take a prompt like this:
You are an expert PM. Please summarize the following meeting notes nicely.
(meeting notes pasted below)
What comes back is usually "something that looks like a summary." It reads fine, but try using it as the input for the next decision and it tends to wobble. The reason is simple: the model has not been told what to keep, what to drop, who the reader is, or how granular the output should be.
"Expert PM" is too thin to carry that weight. The model just blends a thin layer of PM-flavored phrasing on top of whatever it would have written anyway. The definition of "expert" lives in your head, not in the prompt, and the model can only act on what's in the prompt.
The same goes for "nicely." Without naming the audience, the model defaults to a generic register. Generic is safe and almost never useful in real work.
When role-setting actually starts to help
The role line begins to pay off when it's followed by four specific things:
- The lens: review angles, evaluation axes, what to focus on
- The criteria: how to decide what's good, acceptable, or out
- The reader: who the output is for
- The deliverable contract: what counts as "done"
For example: "You are a senior engineer doing a security review. Look only at authentication, authorization, input validation, and logging, framed against the OWASP Top 10. Return findings bucketed as Critical / High / Medium / Low with file:line references." With that much attached, output quality jumps noticeably.
What's doing the work isn't the title. It's the lens and criteria attached to the title. Drop the title and keep the rest, and you'll get most of the benefit.
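To make the split concrete, here is a minimal sketch of the security-review brief above, pulled apart into the role line and the parts that carry the weight. The variable names and exact wording are my own illustration, not a fixed template:

```python
# Hypothetical sketch: the security-review brief split into the role
# line and the lens/criteria that do the actual work.
role_line = "You are a senior engineer doing a security review."

lens_and_criteria = """\
Review the code below for security issues only.
Scope: authentication, authorization, input validation, logging.
Frame each finding against the OWASP Top 10.
Return findings bucketed as Critical / High / Medium / Low,
each with a file:line reference."""

# Dropping the role line keeps most of the benefit; dropping the
# lens and criteria loses almost all of it.
prompt_without_title = lens_and_criteria
prompt_with_title = role_line + "\n" + lens_and_criteria
```

Run both variants against the same diff and compare: in my experience the gap is small, which is the point.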
A bad example and a better one
Concretely:
Bad:
You are an expert PM. Please summarize the following meeting notes nicely.
Better:
Summarize the following meeting notes for next week's steering committee.
## Reader
- Three executives (two of whom are not product specialists)
- They want decisions and current risks, not technical detail
## Output format
- Three sections: "Decided," "Open items," "Action items by next meeting"
- One sentence per item; add background only when essential, in parentheses
## Decision criteria
- Drop chit-chat and impressions that don't drive decisions
- Always capture amounts, deadlines, and owners when present
- Mark anything you inferred as "(inferred)"
## Handling open items
- Don't promote unsettled topics into "decided"
- If opinions split in the notes, keep both sides as one line each
## Input
(meeting notes)
The role line is gone, on purpose. The output is still substantially more reliable than the first version, because what the model needed wasn't "who am I?" but "what, from which angle, in what shape, under which constraints?"
The four things that actually move the needle
After enough delegation work, the gap between prompts that work and prompts that don't comes down to four ingredients:
- Goal: why you need the output, what it feeds into next
- Constraints: tech, deadlines, budget, internal rules, what data is in scope, how to handle inference
- Output format: headings, length, granularity, tone, Markdown / JSON / table
- Decision criteria: what stays, what goes, how "good" is defined
If those four are in place, the role line is decoration. If those four are missing, no amount of elaborate role-playing will hold the output steady.
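The four ingredients can even be enforced mechanically. A minimal sketch, assuming nothing beyond standard Python (the function name and section labels are illustrative, not an established API):

```python
def build_prompt(goal: str, constraints: str, output_format: str,
                 criteria: str, role: str = "") -> str:
    """Assemble a delegation prompt from the four load-bearing
    ingredients. The role line is optional decoration."""
    ingredients = [("goal", goal), ("constraints", constraints),
                   ("output format", output_format),
                   ("decision criteria", criteria)]
    for name, value in ingredients:
        if not value.strip():
            # Fail fast: a missing ingredient is a prompt-design bug.
            raise ValueError(f"missing ingredient: {name}")
    sections = [
        f"## Goal\n{goal}",
        f"## Constraints\n{constraints}",
        f"## Output format\n{output_format}",
        f"## Decision criteria\n{criteria}",
    ]
    if role:
        sections.insert(0, role)  # a supporting hint, not a requirement
    return "\n\n".join(sections)
```

Note the asymmetry: leaving out the role still produces a complete prompt, while leaving out any of the four ingredients raises an error before anything reaches the model.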
In AI development, specs beat roles
When you ask Claude Code, Codex, or Cursor to write code, this gets even more pronounced.
"You are an expert engineer" changes essentially nothing. What changes things is the spec, the assumptions, and the acceptance criteria:
- What you're trying to build, and the use case behind it
- Which areas are safe to touch and which are off-limits
- The conventions, framework choices, and testing approach in this repo
- What "done" means (acceptance criteria)
- Performance, compatibility, and security constraints
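One way to keep that checklist honest is to write the brief as a small data structure before turning it into text. A sketch under stated assumptions: the field names are mine, and no particular agent's input format is implied:

```python
from dataclasses import dataclass

@dataclass
class TaskBrief:
    """A spec-first brief for a coding agent; no role line needed."""
    goal: str               # what to build, and the use case behind it
    in_scope: list[str]     # areas that are safe to touch
    off_limits: list[str]   # areas owned elsewhere; leave alone
    conventions: str        # repo patterns, framework choices, tests
    done_when: list[str]    # acceptance criteria
    constraints: str = ""   # performance / compatibility / security

    def render(self) -> str:
        lines = [f"Goal: {self.goal}",
                 "Safe to touch: " + ", ".join(self.in_scope),
                 "Off-limits: " + ", ".join(self.off_limits),
                 f"Conventions: {self.conventions}",
                 "Done when:"]
        lines += [f"- {c}" for c in self.done_when]
        if self.constraints:
            lines.append(f"Constraints: {self.constraints}")
        return "\n".join(lines)
```

Because every field except constraints is required, you can't hand the agent a brief that's missing its boundaries or its definition of done, which is where most vague coding prompts fail.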
Think about how an experienced engineer briefs a junior. They don't say "act as a senior engineer." They say "this layer is owned by another team, leave it alone," "add the test here," "match the existing pattern." Concrete preconditions are what make the junior's output reliable. The same is true for an AI.
Meeting logs follow the same shape
Zooming out from role-setting, the same structure applies when you hand a meeting log to an AI.
Paste raw meeting notes and ask for a summary, and you'll get something that looks like a summary. Granularity drifts, decisions blur with opinions, and it's unclear who the reader is.
What's needed isn't "play the role of an excellent meeting-notes summarizer." It's converting the log into something the AI can actually reason over in advance. That conversion is exactly what the prompt in the previous post is doing.
Once the input is shaped, the model does the job without theatrical role-play. If the input isn't shaped, no role-play will save it.
Recap
- "You are an expert ___" alone doesn't stabilize output
- Role-setting earns its place when it's paired with the lens, criteria, reader, and deliverable contract
- The real load-bearing pieces are goal, constraints, output format, and decision criteria
- In AI-assisted development, spec / preconditions / acceptance criteria outperform role lines
- Ambiguous inputs like raw meeting logs need to be reshaped before the model can reason over them
The unglamorous habit of writing those four ingredients every time outperforms clever role tricks over the long run.
Closing: when the input is right, the role line is optional
The previous post was about preparing the inputs before you talk to the model. This one moved one layer up and looked at the prompt itself.
Read the two together and the pattern is the same: what makes delegation to an AI work isn't assigning a role. It's giving the model the inputs and criteria it needs to make the call.
v2p is being built around that "shape the input" part, so it lives inside your daily workflow instead of being a manual step every time. The idea is to convert meetings and scattered notes into something an AI or an engineer can pick up directly, without you re-doing the conversion by hand. Less prompt theatre, more well-shaped input. It's the boring half, and it's the half that keeps paying off.