The Neglected Superpower
Most people think of LLMs as a single input/output: “user prompt” → “response”. There’s also a system prompt—instructions that frame the entire conversation. People ignore it. This is a mistake.
A good system prompt can change everything: tone, reasoning style, error handling, even how the model structures its thinking.
How System Prompts Actually Work
The system prompt is typically the first message in the conversation, hidden from the user. The model sees it as context before your actual query.
import anthropic
client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    system="You are a senior software engineer. Be concise. Use examples.",
    messages=[
        {"role": "user", "content": "Explain async/await in JavaScript"}
    ],
)
print(message.content[0].text)

That system parameter is your secret lever. Different system prompts produce dramatically different outputs from the same model, even with identical user queries.
Five System Prompt Patterns That Work
1. Role Definition
system: "You are a DevOps engineer with 10 years of Kubernetes experience.
Think in terms of scale, reliability, and operational simplicity.
Prefer simple solutions over fancy ones. Explain tradeoffs clearly."

This changes the model's perspective. The same question answered by a "DevOps engineer" vs. an "academic researcher" produces different answers.
Impact: High. Role definition primes the model’s reasoning style.
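If you reuse role prompts across many calls, it helps to assemble them from parts instead of copy-pasting strings. A minimal sketch; the `build_role_prompt` helper and its fields are illustrative, not from any library:

```python
def build_role_prompt(role: str, experience: str,
                      priorities: list[str], style: str) -> str:
    """Assemble a role-definition system prompt from reusable parts."""
    priority_line = ", ".join(priorities)
    return (
        f"You are a {role} with {experience}. "
        f"Think in terms of {priority_line}. "
        f"{style}"
    )

prompt = build_role_prompt(
    role="DevOps engineer",
    experience="10 years of Kubernetes experience",
    priorities=["scale", "reliability", "operational simplicity"],
    style="Prefer simple solutions over fancy ones. Explain tradeoffs clearly.",
)
print(prompt)
```

One template, many roles: swap the arguments and you get the "academic researcher" variant for free.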
2. Output Format Specification
system: "You are a documentation writer. Always structure your response as:
## Problem
[What is being solved]

## Solution
[Implementation details]

## Tradeoffs
[What you're giving up]

## Code Example
[Working code]
Be concise. Use bullet points."

This forces structure. Instead of rambling prose, you get organized output.
Impact: High for structured tasks (API docs, tutorials).
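If downstream tooling depends on that structure, you can verify mechanically that a response followed the template. A small sketch; the section names mirror the prompt above, and the function name is illustrative:

```python
REQUIRED_SECTIONS = ["## Problem", "## Solution", "## Tradeoffs", "## Code Example"]

def follows_template(response: str) -> bool:
    """Check that every required section heading appears, in order."""
    positions = [response.find(section) for section in REQUIRED_SECTIONS]
    # All headings present (find() returned an index) and in template order
    return all(p >= 0 for p in positions) and positions == sorted(positions)

sample = "## Problem\nX\n## Solution\nY\n## Tradeoffs\nZ\n## Code Example\ncode"
print(follows_template(sample))  # True
```

When the check fails, you can re-ask with the same system prompt rather than silently accepting free-form prose.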
3. Constraint-Based
system: "You are a Python expert. Respond ONLY in code blocks.
No explanatory text. Every code example must be runnable.
Include type hints."

Constraints sharpen output. This works surprisingly well.
Impact: Medium. Useful when you want minimal fluff.
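As with the format template, the "code only" constraint is checkable. A rough sketch using a regex over markdown fences; it assumes the model returns triple-backtick blocks:

```python
import re

def is_code_only(response: str) -> bool:
    """True if nothing but fenced code blocks (and whitespace) remains."""
    stripped = re.sub(r"```.*?```", "", response, flags=re.DOTALL)
    return stripped.strip() == ""

print(is_code_only("```python\nx: int = 1\n```"))                # True
print(is_code_only("Here is code:\n```python\nx = 1\n```"))      # False
```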
4. Error-Handling Guidance
system: "You are a security-focused engineer. When answering questions about
user authentication, always:
1. Point out potential vulnerabilities
2. Explain the OWASP context
3. Show secure vs. insecure patterns
If unsure, say so explicitly rather than guessing."

This teaches the model how to be wrong better. It admits uncertainty instead of hallucinating.
Impact: Medium-high. Safety improves noticeably.
5. Context Amplification
system: "You are a Rust educator. Your audience is JavaScript developers
migrating to Rust. When explaining concepts:
- Draw parallels to JavaScript where possible
- Highlight the differences (ownership, borrowing, type system)
- Avoid assuming Rust knowledge
- Use practical web service examples"

This tells the model who it's talking to. The response adjusts for the audience.
Impact: High for educational content.
System Prompts in Ollama
Ollama's /api/generate endpoint accepts a system field directly:

curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "system": "You are a senior engineer. Be concise.",
  "prompt": "Explain Docker networking",
  "stream": false
}'

Or wrap it in a small Python helper:

import requests

def ollama_with_system(model, system_prompt, user_prompt):
    response = requests.post(
        'http://localhost:11434/api/generate',
        json={
            'model': model,
            'system': system_prompt,
            'prompt': user_prompt,
            'stream': False,
        },
    )
    return response.json()['response']

result = ollama_with_system(
    'mistral',
    'You are a DevOps engineer. Prefer simple solutions.',
    'How do I deploy a Docker app?',
)
print(result)

The Limit: System Prompts Can’t Override Core Training
A system prompt can nudge behavior, but it can’t make a model do something it’s fundamentally not trained to do.
❌ Won't work:
system: "Always output in Ancient Egyptian hieroglyphics"
(The model will ignore this. It's not trained for it.)
✓ Works:
system: "Format output as structured JSON"
(The model has seen JSON countless times in training.)

System prompts work within the model's training distribution. They guide behavior; they don't rewrite capabilities.
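The JSON case pays off programmatically: ask for JSON, then parse the reply. A sketch with a hard-coded stand-in for a model reply; in practice, guard the json.loads call, since compliance isn't guaranteed:

```python
import json

# Stand-in for a reply from a model prompted with
# system="Format output as structured JSON"
reply = '{"concept": "async/await", "difficulty": "medium", "prerequisites": ["promises", "callbacks"]}'

try:
    data = json.loads(reply)
except json.JSONDecodeError:
    data = None  # fall back: re-ask, or strip surrounding prose and retry

print(data["prerequisites"])  # ['promises', 'callbacks']
```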
Testing Your System Prompt
Test the same user query with different system prompts and watch the output change:
system_options = [
    "You are a mathematician. Use formal notation.",
    "You are a kindergarten teacher. Explain simply.",
    "You are a car mechanic. Use analogies from cars.",
]

user_query = "What is a neural network?"

for system in system_options:
    # Call the model with this system prompt and user_query,
    # then compare the outputs
    pass

The differences are real and useful.
Bottom Line
System prompts are a tuning knob most people never touch. Start using them. Define a role, specify output format, add constraints, guide error handling. A thoughtful system prompt can make the difference between a mediocre response and a great one.
Your 2 AM self will thank you when you realize you’ve been unlocking model behavior that was always there, just hidden behind a default prompt.