Updated 01/03/2026
A somewhat structured dive into the anatomy of a prompt when using VS Code and Copilot, to understand what makes up the context window. This covers which files it uses and in what order, things like its prompt files, custom instructions and custom agents. Knowing this helps me use it better.
This updated post is mostly based on the video from Burke Holland, Level Up Your VS Code Productivity (Mastering AI Workflows). This is the dude that gave us Beast Mode, so when I saw the video I was pretty excited because he knows his sh!t :)
GitHub Copilot is closed source (proprietary): you don’t get access to its source code, model weights, or internal workings. So this is not the actual text that Copilot is following; it’s a best guess based on research and the linked references.
Context Window
We need to use the context window in the most efficient manner possible because of context rot.
Context rot is the gradual degradation of an AI assistant’s usefulness as a conversation grows longer. As the context window fills up with back-and-forth messages, the model:
- Gets “distracted” by earlier, now-irrelevant content
- Loses focus on the current task
- May contradict earlier decisions
- Becomes slower and more expensive
Agent System Prompt
The agent bootstraps itself
```markdown
# Core Identity and Global Rules
```
Custom Agents
This used to be called Modes.
Agents give the flow an identity, acting very much like an additional Agent System Prompt.
These can be used to pass instructions that override or augment the default agent behavior. Copilot in VS Code comes with some defaults like Agent, Ask, Edit and Plan.
You can store these globally (User Data) or locally in your code base at .github/agents/foo.agent.md, which can be checked into source control if you like.
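As a sketch, a minimal custom agent file might look like the following. The frontmatter fields (`description`, `tools`) follow the documented custom chat mode format, and the body text is invented for illustration; this is not verified Copilot internals:

```markdown
---
description: 'Plans work before any code is written'
tools: ['codebase', 'search']
---
You are a planning agent. Break the user's request into small,
reviewable steps and get confirmation before suggesting code changes.
```

The body below the frontmatter is what gives the flow its identity, effectively layering on top of the default agent behavior.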
Handoffs
Custom Instructions: Example: nextjs.instructions.md
These can be anything you like; it’s dependent on your code base. Many ready-made examples exist at awesome-copilot.
You can store these globally (User Data) or locally in your code base at .github/instructions/foo.instructions.md, which can be checked into source control if you like.
- https://github.com/github/awesome-copilot/blob/main/docs/README.instructions.md
- Example: https://github.com/github/awesome-copilot/blob/main/instructions/nextjs.instructions.md
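As a sketch, a hypothetical `.github/instructions/nextjs.instructions.md` could look like this. The `applyTo` glob in the frontmatter follows the documented instructions-file format; the rules themselves are made up for illustration (see the awesome-copilot example above for a real one):

```markdown
---
applyTo: "app/**/*.tsx"
---
- Use the Next.js App Router, not the Pages Router.
- Prefer server components; add "use client" only when required.
- Co-locate route-specific components with their route segment.
```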
Custom Instructions: copilot-instructions.md
This contains high-level information about your project that might help the LLM, and should be ~20 to 50 lines long. It’s things like the architecture and patterns, focused on the project’s specific approaches.
You can create this manually and save it to .github/copilot-instructions.md, or use Copilot to generate it for you.
- https://docs.github.com/en/copilot/how-tos/configure-custom-instructions/add-repository-instructions
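For a hypothetical project, a short `.github/copilot-instructions.md` might look something like this; all the names and details below are invented purely for illustration:

```markdown
# Project Overview
A Next.js e-commerce app using the App Router and Prisma.

## Architecture
- `app/` - routes and server components
- `lib/` - shared data-access helpers
- `components/` - reusable client components

## Conventions
- Validate all API input with Zod
- Keep tests next to the code in `*.test.ts` files
- Never call the database directly from components; go through `lib/`
```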
User Prompt
Information about the user’s Environment & Workspace
```markdown
# Environment Info
```
User Prompt
Context information
```markdown
# Prompt Files
```
Prompt Files
These are re-usable prompts that you can define and access as slash commands, for example /remember Something related to this code base.
You can store these globally (User Data) or locally in your code base at .github/prompts/foo.prompt.md, which can be checked into source control if you like.
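As a sketch, a hypothetical `.github/prompts/remember.prompt.md` backing the /remember command above could look like this. The `description` frontmatter field follows the documented prompt-file format; the body is invented for illustration:

```markdown
---
description: 'Save a fact about this code base for later sessions'
---
Take the note the user typed after the slash command and append it to
.github/copilot-instructions.md under a "Notes" heading, keeping
existing entries intact.
```

The file name (minus the `.prompt.md` suffix) becomes the slash command, so this one would be invoked as /remember.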
Assistant Message
This is the response from the LLM.
1 | - "Hello! How can I help you today?" |