Copilot Prompt Anatomy

Updated 01/03/2026

A somewhat structured dig into the anatomy of a prompt when using VS Code and Copilot, to understand what makes up the context window: which files it pulls in and in what order, things like prompt files, custom instructions and custom agents. Knowing this helps me use it better.

This updated post is mostly based on the video from Burke Holland - Level Up Your VS Code Productivity (Mastering AI Workflows). This is the dude that gave us Beast Mode, so when I saw the video I was pretty excited because he knows his sh!t :)

GitHub Copilot is closed source (proprietary): you don't get access to its source code, model weights, or internal workings. So this is not the actual text that Copilot follows; it's a best guess based on research and the linked references.

Context Window

We need to use the context window in the most efficient manner possible because of context rot.

Context rot is the gradual degradation of an AI assistant's usefulness as a conversation grows longer. As the context window fills up with back-and-forth messages, the model:

  • Gets “distracted” by earlier, now-irrelevant content
  • Loses focus on the current task
  • May contradict earlier decisions
  • Becomes slower and more expensive

Agent System Prompt

The agent bootstraps itself

# Core Identity and Global Rules
"You are an intelligent AI coding assistant"

# General Instructions (Model Specific)
# Models can have various quirks, like being aggressive about writing the code out to the chat instead of writing to the file
# So these are guard rails to give the user a better experience
"Never print out a codeblock with file changes..."

# Tool Use Instructions for tools that are included in Copilot, examples: edit, terminal and to-do list tools
"Don't call the run in terminal tool multiple times in parallel..."

# Output Format Instructions
"How to format the output in chat for tokenization of things like file links..."

# Custom Instructions: Example: nextjs.instructions.md
These will always come before the copilot instructions

# Custom Instructions: copilot-instructions.md

# Custom Agents
- Always added below Custom Instructions

Custom Agents

These used to be called Chat Modes.

Agents give the flow an identity which is very much like an Agent System Prompt.

These can be used to pass instructions that override or augment the default agent behaviour. Copilot in VS Code comes with some defaults like Agent, Ask, Edit and Plan.

You can store these globally (User Data) or locally in your codebase at .github/agents/foo.agent.md, which can be checked into source control if you like.
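As a rough sketch, a minimal agent file might look like the following. The frontmatter fields mirror the older chat-mode format, so treat the exact field names, and the agent itself, as assumptions rather than gospel:

```markdown
---
description: 'Plans work before any code is written'
tools: ['codebase', 'search']
---
You are a planning assistant. Break the user's request into small,
reviewable steps and do not edit any files until the plan is approved.
```

The body below the frontmatter is what gets layered into the system prompt as the agent's identity, which is why it reads like a mini system prompt of its own.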

Handoffs

Custom Instructions: Example: nextjs.instructions.md

These can be anything you like; it's dependent on your codebase. Many examples exist at awesome-copilot.

You can store these globally (User Data) or locally in your codebase at .github/instructions/foo.instructions.md, which can be checked into source control if you like.
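For illustration, a hypothetical nextjs.instructions.md might look like the sketch below. The applyTo frontmatter glob scopes the instructions to matching files; the rules themselves are made up for this example:

```markdown
---
applyTo: '**/*.tsx'
---
- Use functional React components with hooks; no class components.
- Prefer Server Components; only add 'use client' when state or
  browser APIs are required.
```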

Custom Instructions: copilot-instructions.md

This contains high-level information about your project that might help the LLM, and should be roughly 20 to 50 lines long. It covers things like the architecture and patterns, focused on the project's specific approaches.

You can create this manually and save it to .github/copilot-instructions.md, or have Copilot generate it for you.
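To make that concrete, here is a hedged sketch of what a short copilot-instructions.md could contain. The project details are entirely invented:

```markdown
# Project overview
A Next.js app using the App Router, with Prisma for data access
and Tailwind for styling.

# Architecture
- All database access goes through the repository layer in src/db/.
- API routes are thin; business logic lives in src/services/.

# Conventions
- TypeScript strict mode; avoid `any`.
- Tests sit next to the code they cover as *.test.ts files.
```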

User Prompt

Information about the user's environment and workspace

# Environment Info
- Information about the OS, etc.

# Workspace Info in text format
Project
  Folder1
    file1.txt
    file2.txt
  Folder2
  Folder3

User Prompt

Context information

# Prompt Files
- The content of the prompt file that was used, only included if one was used
- This changes the value of the user request below to something like "Follow the instructions in [LINK TO FILE] and then [INCLUDES THE TEXT YOU PASSED TO THE PROMPT, IF ANY]"

# Context Info
- Current date/time, list of open terminals, etc.

# Editor Context
- Any files that you have added to the chat

# User Request, which is your prompt input
- "Hello World"

Prompt Files

These are re-usable prompts that you can define and access as slash commands, for example /remember Something related to this code base.

You can store these globally (User Data) or locally in your codebase at .github/prompts/foo.prompt.md, which can be checked into source control if you like.
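Sticking with the /remember example above, here is a sketch of what that prompt file could look like. The file name matches the slash command; the body and the target file are assumptions for illustration:

```markdown
---
description: 'Record a fact about this codebase'
---
Append the note the user provided after the slash command to
docs/NOTES.md, keeping the existing bullet format, then confirm
what was saved.
```

Anything typed after the slash command gets appended to the user request, which is how the "text you passed" ends up in the prompt shown earlier.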

Assistant Message

This is the response from the LLM.

- "Hello! How can I help you today?"

References