Prompting Playbook
A practical prompt patterns guide with constraints, checks, and examples.

Author
Joe Draper
Founder, Arkwright
People tend to report mixed levels of success from their interactions with LLMs. Some think they're amazing, while others can't understand the hype. As someone who falls into the former camp, I'm going to try to summarise the things I've found that generate better outputs.
The vast majority of AI users are getting sub-optimal performance from their models & agents solely due to poor prompting practices. You'd be surprised how much better your outputs can be, with only a small handful of changes to your approach.
LLMs are mystery boxes trained on incomprehensible amounts of data, taking months of real time and thousands of enterprise GPUs. There's no observable 'thing' we can point to that explains why these techniques work, but there is an industry consensus on what works, and on the likely reasons why.
Prompt Framing
This is a relatively well-known practice that can be boiled down to starting your prompt by telling the LLM the hat you'd like it to wear.
"You are an expert C# .NET developer", "You are the world's pre-eminent film critic".
Why this works
1. You're setting the persona/context prior
LLMs were trained on a ton of text where "an expert C# developer" writes in a certain way, uses certain terminology, follows certain problem-solving steps, and produces certain kinds of answers.
By telling it "you are X," you're biasing its probability distribution toward patterns found in those contexts - so it's more likely to produce high-quality, domain-specific, jargon-appropriate output.
Think of it like choosing which section of the model's learned library it should 'walk into' before answering.
2. You're defining the role + goal early
Without framing, the model has to infer your desired tone, level of detail, and assumptions.
With framing, you cut down on that guesswork: you tell it the role (expert C# dev) and the task (do xyz). That's like giving it both the job title and the job description up front.
3. You're giving it a better search anchor
Internally, generation is like steering through an immense tree of possible next tokens.
Role-priming gives a strong 'starting push' toward the right branch of that tree, so the early steps already lean toward relevant, high-quality completions.
4. You're implicitly controlling format and depth
'Expert' responses in training data tend to be more thorough, structured, and precise - and the model mirrors that.
You're not just changing what it says, but how it says it: the tone, syntax, even whether it includes code comments or citations.
5. You're reducing hallucination risk
Without framing, the model might pull from too broad a distribution, mixing casual blog posts with Stack Overflow snippets, marketing copy, etc.
With framing, you're narrowing it to sources and styles that are more likely to be accurate.
Requested Output Structure
For tasks requiring more than a simple text response, specify how the response should be formatted when asking the question.
"Answer in three sections: Summary, Detailed Steps, and Example Code."
Why this works
Without guidance, LLMs have to guess your intent and preferred output style.
But they're highly responsive to format conditioning - even a simple example can anchor their structural planning.
Specifying output structure acts as a hard prompt prior, limiting token generation paths to ones that match that structure.
This improves clarity, reduces hallucination, and minimises 'format drift' across long outputs.
Step-by-Step Reasoning Prompts
Asking the model to work through the problem in stages nudges it into more careful consideration - often using more compute to return a more reliable answer. Confusingly, this works on both reasoning and non-reasoning models - just think of it as an instruction to 'think' longer.
"Think step-by-step before answering."
Why this works
LLMs are trained on many examples of chain-of-thought (CoT) reasoning, especially from math problems, explanations, and coding examples.
Prompting for a step-by-step answer activates those latent pathways, leading to more accurate and grounded outputs.