Base Prompt Scaffold

The base prompt scaffold is the prompt outline we use when we want dependable, reviewable results. Instead of starting with a blank page, you fill in a few well-labeled buckets, reuse the parts that never change, and give the model a clear target to hit.

  • When to use it: Any time you expect the prompt to be reused or reviewed (internal assistants, customer copilots, scripted automations).
  • Who it helps: Prompt builders, subject matter experts, and anyone who has to approve or maintain an AI workflow.
  • What is below: An explainer of each section, a copy-ready template, a walkthrough of how to apply it, and examples that show the scaffold in action.

Think of the scaffold as a lightweight XML outline. The outer tags keep the system rules separate from the task request, and the nested tags give each piece of information a clear home. That separation makes it easy for several people to reuse the same prompt instead of writing a new one for each task.

  • full_prompt: The container that keeps system and user directives bundled together.
    • system_prompt: Defines the non-negotiables (guardrails, tone, refusal behavior).
    • user_prompt: Captures the task-specific request from the requester.
      • agent_role: Clarifies the hat the assistant is wearing (analyst, reviewer, coder).
      • agent_tool_capabilities: Lists tools, APIs, or data sources the agent may leverage.
      • reasoning_process: Explains how the agent should think, structure work, or verify outputs.
      • core_objective: States the measurable outcome we need the model to achieve.
      • task_instructions: Provides step-by-step guidance or acceptance criteria.
      • task_constraints: Describes policy limits, deadlines, or compliance guardrails.
      • additional_context: Supplies reference material, inputs, or edge cases.
      • positive_example_output: Shares a concrete example of success (optional).
      • negative_example_output: Shows what bad looks like (optional but powerful).
      • desired_output_format: Specifies the exact format, schema, or channel for delivery.

The hierarchy ensures we can safely reuse a consistent system instruction across tasks, while the nested user prompt captures what changes from engagement to engagement.

Start with this template whenever you draft a new prompt. Every field is wrapped in double curly braces so you can see what still needs your voice before the prompt ships.

<!-- Base prompt scaffold -->
<full_prompt>
    <system_prompt>{{ Summarize the assistant's enduring policies, tone, and refusal behavior. }}</system_prompt>
    <user_prompt>
        <agent_role>{{ Describe the persona or domain expertise the agent must adopt. }}</agent_role>
        <agent_tool_capabilities>{{ List the approved tools, APIs, or datasets, including constraints. }}</agent_tool_capabilities>
        <reasoning_process>{{ Outline the reasoning style: step-by-step, chain-of-thought, etc. }}</reasoning_process>
        <core_objective>{{ State the measurable outcome, including success criteria. }}</core_objective>
        <task_instructions>{{ Provide numbered steps, acceptance tests, or heuristics the agent must follow. }}</task_instructions>
        <task_constraints>{{ Call out policies, deadlines, formatting rules, or data privacy requirements. }}</task_constraints>
        <additional_context>{{ Reference links or supporting data the agent can cite. }}</additional_context>
        <positive_example_output>{{ Show an example that demonstrates the gold-standard response. }}</positive_example_output>
        <negative_example_output>{{ Show an example that should never be reproduced and explain why. }}</negative_example_output>
        <desired_output_format>{{ Specify the exact format: JSON schema, Markdown template, plain text, etc. }}</desired_output_format>
    </user_prompt>
</full_prompt>
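
Because only the user_prompt changes between tasks, the scaffold is easy to fill programmatically. Below is a minimal sketch of that idea in Python; the element names match the template above, while the function name, system prompt text, and sample values are illustrative assumptions rather than part of the scaffold.

# Minimal sketch: reuse one fixed system prompt and fill the task-specific
# fields per request. Element names follow the scaffold; everything else is
# illustrative.
SYSTEM_PROMPT = "You are an internal assistant. Follow policy, cite sources, refuse out-of-scope requests."

USER_FIELDS = [
    "agent_role", "agent_tool_capabilities", "reasoning_process",
    "core_objective", "task_instructions", "task_constraints",
    "additional_context", "positive_example_output",
    "negative_example_output", "desired_output_format",
]

def render_prompt(fields: dict) -> str:
    """Wrap the fixed system prompt and the per-task fields in the scaffold."""
    missing = [name for name in USER_FIELDS if name not in fields]
    if missing:
        raise ValueError(f"Scaffold fields still need content: {missing}")
    user_xml = "\n".join(
        f"        <{name}>{fields[name]}</{name}>" for name in USER_FIELDS
    )
    return (
        "<full_prompt>\n"
        f"    <system_prompt>{SYSTEM_PROMPT}</system_prompt>\n"
        "    <user_prompt>\n"
        f"{user_xml}\n"
        "    </user_prompt>\n"
        "</full_prompt>"
    )

Keeping the system_prompt constant in code means reviews only ever diff the user-prompt fields, which is what makes the scaffold fast to approve.
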
| Element                  | Purpose                                                                       | Required?   |
|--------------------------|-------------------------------------------------------------------------------|-------------|
| system_prompt            | Sets the ground rules the assistant must follow every time.                   | Yes         |
| agent_role               | Frames the persona and decision lens the model should adopt.                  | Yes         |
| agent_tool_capabilities  | Grants or denies access to tools, APIs, datasets, and file systems.           | Conditional |
| reasoning_process        | Explains how the agent should structure its thinking so the output is auditable. | Recommended |
| core_objective           | Defines success in measurable terms.                                          | Yes         |
| task_instructions        | Breaks the work into explicit steps the agent must follow.                    | Yes         |
| task_constraints         | Keeps the agent inside legal, compliance, or operational guardrails.          | Recommended |
| additional_context       | Packs in the references, datasets, or policy snippets the agent can quote.    | Recommended |
| positive_example_output  | Shows the gold-standard response.                                             | Optional    |
| negative_example_output  | Illustrates a response that should be rejected.                               | Optional    |
| desired_output_format    | Locks the response into a predictable shape for downstream tools or readers.  | Yes         |

Follow the playbook below whether you are drafting a quick FAQ helper, an analytical copilot, or a multi-tool automation. Each phase calls out the extra checks to run for different task types.

    • Clarify the outcome: Ask the requester what success looks like, who will consume the output, and how the result will be used downstream.
    • Collect source material: Pull policies, datasets, dashboards, or past deliverables. For knowledge tasks, highlight the sections you want quoted; for analytics, grab KPI definitions and data freshness notes; for automations, list the systems or APIs involved.
    • Map the guardrails: Document privacy rules, brand tone, service-level expectations, and any escalation paths before you type a single instruction.
    • Fill one element at a time: Start with system_prompt and agent_role, then move through the nested fields in order. Focus on plain, reviewer-friendly sentences.
    • Tune per task type:
      • Knowledge responses thrive on a rich additional_context block and a precise desired_output_format.
      • Analytical summaries rely on a detailed reasoning_process and a table-driven output schema.
      • Automations need explicit agent_tool_capabilities, pre-flight safety checks inside task_instructions, and hard limits inside task_constraints.
    • Document open questions: Use TODO tags or inline comments so reviewers can resolve missing details without guessing.
    • Run sample inputs: Try edge cases (missing data, conflicting instructions) and typical requests. Capture both the raw model response and your notes.
    • Inspect against criteria: Confirm the output format, tone, citations, and safety behaviors match the spec. If you ask for JSON, validate it against the schema with a linter or a small consumer script (see the sketch after this list).
    • Invite reviewers: Pair with the subject matter expert for accuracy checks and with an operator or engineer to verify tool usage and logging.
    • Launch in a safe channel: Use a shadow environment, limited pilot, or sandbox automation queue. Track examples where the prompt shines and where it falters.
    • Capture telemetry: Log success rate, escalation count, and any manual overrides. For tool-based agents, confirm traces include correlation IDs and audit data (a structured-logging sketch also follows this list).
    • Refine incrementally: Adjust individual fields rather than rewriting the prompt; this keeps diffs easy to audit.
    • Version and store: Check the prompt into a shared repository or PromptHub with metadata (owner, use case, review date).
    • Schedule reviews: Set quarterly or release-based checkpoints to confirm the references, policies, and tool access are still valid.
    • Monitor drift: Add guardrail tests to your CI where possible, and keep a feedback channel open (form, Teams workflow) so frontline users can report issues.
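
The format check from "Inspect against criteria" can double as the guardrail test from "Monitor drift." Here is a minimal sketch using only the Python standard library; the required fields and the sample response are assumptions standing in for whatever your desired_output_format actually specifies.

import json

REQUIRED_FIELDS = {"summary": str, "citations": list}  # assumed output contract

def check_response(raw: str) -> list[str]:
    """Return a list of problems; an empty list means the response matches the contract."""
    problems = []
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"Response is not valid JSON: {exc}"]
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            problems.append(f"Missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            problems.append(f"{field} should be a {expected_type.__name__}")
    return problems

# The same check can run in CI as a guardrail test:
def test_sample_response_matches_contract():
    sample = '{"summary": "Parental leave overview", "citations": ["HR-PL-2024 Rev B"]}'
    assert check_response(sample) == []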

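For "Capture telemetry," one structured log record per prompt run is usually enough to compute success rate, escalations, and overrides later. A minimal sketch follows; the field names and logger setup are assumptions, not a prescribed schema.

import json
import logging
import uuid

logger = logging.getLogger("prompt_telemetry")

def log_prompt_run(prompt_id: str, success: bool, escalated: bool, overridden: bool) -> str:
    """Emit one structured telemetry record per prompt run and return its correlation ID."""
    correlation_id = str(uuid.uuid4())
    logger.info(json.dumps({
        "correlation_id": correlation_id,
        "prompt_id": prompt_id,          # which scaffolded prompt produced the output
        "success": success,              # did the output pass the inspection criteria?
        "escalated": escalated,          # was the request routed to a human?
        "manual_override": overridden,   # did an operator edit the output before use?
    }))
    return correlation_id
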
Below are three prompts that grow in complexity. Skim them to see how the same scaffold flexes from a single-answer FAQ to a multi-step automation. While you read, pay attention to:

  • How the core_objective anchors the outcome.
  • The balance between task_instructions (what to do) and task_constraints (what to avoid).
  • Whether the desired_output_format sets up the next team or system for success.

Scenario. HR keeps getting repeat questions in Teams about parental leave. We want the bot to answer the same way every time and cite the policy accurately.

Inputs gathered.

  • Latest HR policy PDF excerpt (sections 4.1-4.3).
  • FAQ spreadsheet showing the top five employee questions.
  • Tone guidance: empathetic, direct, no legal advice.
<full_prompt>
    <system_prompt>
        You are an Ascend HR policy assistant. Always cite the official policy ID and decline requests outside HR scope.
    </system_prompt>
    <user_prompt>
        <agent_role>You are a concise HR knowledge concierge.</agent_role>
        <agent_tool_capabilities>No tool calls; answer only from the supplied policy excerpt.</agent_tool_capabilities>
        <reasoning_process>Skim all provided context, extract the applicable clauses, then draft a summary.</reasoning_process>
        <core_objective>Deliver a 3-paragraph overview of parental leave that includes eligibility, duration, and how to initiate a claim.</core_objective>
        <task_instructions>
            1. Confirm the employee persona (US-based FTE).  
            2. Surface the most recent policy language only.  
            3. Highlight any actions the employee must take.
        </task_instructions>
        <task_constraints>Keep the tone empathetic but neutral; do not promise benefits beyond the policy.</task_constraints>
        <additional_context>See excerpt: Policy HR-PL-2024 Rev B.</additional_context>
        <positive_example_output>Eligibility section, leave duration table, initiation steps.</positive_example_output>
        <negative_example_output>Past policy versions or country-specific clauses.</negative_example_output>
        <desired_output_format>Markdown with H3 headings and bullet lists.</desired_output_format>
    </user_prompt>
</full_prompt>

What success looked like.

  • Pass: Covers eligibility, duration, and initiation steps in separate paragraphs as requested.
  • Pass: Cites policy HR-PL-2024 Rev B and links to the intranet copy.
  • Warning: Politely refuses questions about unrelated benefits and offers to route the employee to the general benefits portal.
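
The first two checks lend themselves to a quick script before a human reviews tone and refusal behavior. Below is a minimal sketch, assuming the bot's reply arrives as a Markdown string; the helper name and sample reply are illustrative only.

def review_hr_answer(markdown_reply: str) -> dict:
    """Lightweight checks for the parental-leave answer; tone still needs human review."""
    return {
        "has_h3_headings": "### " in markdown_reply,           # desired_output_format asks for H3s
        "uses_bullets": "\n- " in markdown_reply or "\n* " in markdown_reply,
        "cites_policy": "HR-PL-2024 Rev B" in markdown_reply,  # citation required by system_prompt
    }

# Illustrative reply, not real model output:
sample = "### Eligibility\n- US-based FTEs...\n\n### Duration\n- ...\n\n### How to initiate\n- File via the HR portal (HR-PL-2024 Rev B)."
assert all(review_hr_answer(sample).values())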

Common pitfalls to watch.

  • Forgetting to lock tone: without the constraint, the agent occasionally sounds robotic.
  • Mixing statistics from older policy versions if the excerpt is not clearly labeled in additional_context.
  • Responding in long paragraphs when desired_output_format is not explicit about headings.

Teams that adopt the scaffold will realize a couple of repeatable wins. First, drafting time drops because everyone is filling blanks instead of inventing structure. Second, handoffs become painless: teammates can read the scaffold and immediately see what stays fixed versus what changes dynamically based on the task.

There is also a reliability angle. When every prompt shares the same output contract, downstream bots, dashboards, and automations stay stable even as the narrative evolves. Engineers can compare prompts side-by-side when troubleshooting, and operations teams can trace regressions to a specific field instead of diffing entire prompt essays.

Treat the scaffold like any other reusable pattern: keep the core, tailor the edges. During early pilots you can omit optional sections, but as soon as the workflow touches a customer or feeds an automation, invest in positive_example_output and negative_example_output so everyone agrees on what good and bad look like. Teams in regulated domains should expand the system_prompt with references to the exact policy clauses or regulatory requirements that matter; the more precise those guardrails are, the easier it is to defend the workflow during audits.

Whenever you introduce new tooling, update agent_tool_capabilities with the API name, allowed verbs, rate limits, and logging expectations. If the agent gains write access, double down on task_constraints and make sure escalation steps are explicit. Version each change, link it to a ticket, and capture who signed off; this habit turns the prompt into a fully traceable asset.
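
For example, a filled-in agent_tool_capabilities block might read like the snippet below; the API name, verbs, rate limits, and logging rules are invented purely for illustration.

<agent_tool_capabilities>
    Allowed tool: LeaveRequestAPI (hypothetical).
    Allowed verbs: GET /balances, POST /requests; no DELETE access.
    Rate limit: 30 calls per minute; back off and retry after 429 responses.
    Logging: every call must emit a correlation ID to the audit log; write operations require an approval ticket reference.
</agent_tool_capabilities>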