GEO

System Prompt

A system prompt is the top-level instruction that tells an LLM "who you are, what you should do, and what you should not do," setting the frame for the entire conversation. Unlike user prompts — which end users write — system prompts are injected by the app developer and stay in force across every turn.

A system prompt is the top-level instruction that tells an LLM "who you are, what you should do, and what you should not do," setting the frame for the entire conversation. Unlike user prompts — which end users write — system prompts are injected by the app developer and stay in force across every turn.

Why It Matters

The system prompt is the "design language" of LLM-based products. No matter how freely a user prompts, a well-crafted system prompt keeps the model's replies inside a defined role, tone, and set of limits. From ChatGPT, Claude, and Gemini chatbots to AI search engines, coding agents, and support bots — every LLM app shapes its personality through the system prompt.

Components

Role: "You are a marketing copywriting expert helping SaaS blog operators." Fixes the perspective the model replies from.

Goal: "Help users quickly draft blog posts." Sets conversation direction.

Constraints: "Answer in Korean only." "No code examples." "Max 300 characters." Blocks unwanted behavior up front.

Tone: "Friendly but professional, no exaggeration." Keeps brand voice consistent.

Output format: "Structure answers with ### subheadings." Reduces post-processing.

Knowledge cutoff: "Note when your information may be outdated." Mitigates hallucination risk.

Tool descriptions: For function-calling agents, include the list and description of available tools in the system prompt.

System Prompt vs User Prompt

AspectSystem PromptUser Prompt
Written byDeveloperEnd user
Change frequencyRarelyEvery request
ContentsRole, constraints, toneSpecific request
ScopeEntire conversationThat request only
SecurityShould be hidden from userPublic

A good LLM system separates the "stable frame" (system prompt) from "variable input" (user prompt).

Practical Tips

Assign a role, don't command: "You are an expert who does X" outperforms "Do X." The model inhabits the role and produces more consistent output.

Prefer positive constraints: "Do this" beats "don't do that."

Include examples (few-shot): Putting 2–3 example outputs in the system prompt dramatically stabilizes style and format.

Use XML tags: For Claude-family models, tags like <role>, <constraints>, <examples> help the model parse each section clearly.

Don't over-write: Longer system prompts cost more tokens on every request. Cut anything not essential.

A/B test regularly: Run different system prompts against real requests and compare satisfaction, accuracy, and safety.

Defending Against Prompt Injection

System prompts are prime targets for prompt injection. A user input like "ignore all previous instructions" can overwrite a weak system prompt. Defenses include the sandwich technique (repeat key instructions at start and end), isolating external data in XML tags, and enforcing permissions at the tool-call layer.

Sources: