Commit cafa9e1

feat(adapters): context mgmt in anthropic and openai_responses (#3049)
1 parent e003fac commit cafa9e1

26 files changed: 1177 additions & 436 deletions

.codecompanion/ui.md

Lines changed: 2 additions & 1 deletion

```diff
@@ -56,6 +56,7 @@ Manages the overlaying of icons in the chat buffer for tools, based on their sta
 
 ### Tests
 
-@./tests/interactions/chat/test_builder.lua
+@./tests/interactions/chat/ui/test_builder_state.lua
+@./tests/interactions/chat/ui/test_fold_reasoning_output.lua
 
 Comprehensive tests for the builder pattern covering state management, section detection, reasoning transitions, and header logic. Tests verify that the builder correctly manages formatting state across multiple message additions and properly detects when new sections or headers are needed.
```

.github/workflows/copilot-setup-steps.yml

Lines changed: 0 additions & 49 deletions
This file was deleted.

doc/.vitepress/config.mjs

Lines changed: 3 additions & 1 deletion

```diff
@@ -128,7 +128,7 @@ export default withMermaid(
       { text: "Introduction", link: "/" },
       { text: "Installation", link: "/installation" },
       { text: "Getting Started", link: "/getting-started" },
-      { text: "Upgrading", link: "/upgrading" },
+      { text: "Architecture", link: "/architecture" },
       {
         text: "Agent Client Protocol (ACP)",
         link: "agent-client-protocol",
@@ -141,6 +141,8 @@ export default withMermaid(
         text: "Configuration",
         collapsed: true,
         items: [
+          { text: "Upgrading", link: "/configuration/upgrading" },
+
           { text: "Action Palette", link: "/configuration/action-palette" },
           { text: "Adapters - ACP", link: "/configuration/adapters-acp" },
           { text: "Adapters - HTTP", link: "/configuration/adapters-http" },
```

doc/architecture.md (new file)

Lines changed: 26 additions & 0 deletions

---
description: Architectural concepts and design principles behind CodeCompanion.
---

# Architecture

This section of the documentation covers the architectural concepts and design principles that underpin CodeCompanion's functionality.

This is not mandatory reading for users of CodeCompanion. It may be of interest to those who want to understand some of the technical details of how CodeCompanion works, or who are looking to contribute to the project.

## How Context Is Managed

One of the limitations of working with LLMs is context: they have a finite window within which they can respond to a user's request. That is, there's only a certain amount of data an LLM can reference in order to generate a response. In human terms, it can be thought of as [working memory](https://en.wikipedia.org/wiki/Working_memory), and it varies greatly depending on which model you're using. The context window is measured in [tokens](https://platform.claude.com/docs/en/about-claude/glossary#tokens).

When a user breaches the context window, the conversation **ends** and it **cannot** continue. This can be hugely inconvenient in the middle of a coding session and potentially time consuming to recover from. CodeCompanion is context aware, which means it can take **preventative** action to stop this from happening.

### In the Chat Buffer

> [!NOTE]
> CodeCompanion enables context management by default

If you're using the `openai_responses` or `anthropic` adapters, CodeCompanion will use their native server-side compaction capabilities. Please see the respective documentation [here](https://developers.openai.com/api/docs/guides/compaction) and [here](https://platform.claude.com/docs/en/build-with-claude/compaction) for more information.

To be updated...

Firstly, CodeCompanion manages context by paying close attention to the number of tokens in the [chat buffer](/usage/chat-buffer/index), matching them against a defined trigger threshold in your config, which can be [customised](/configuration/chat-buffer#context-management).
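A threshold-based setup like the one described above might look roughly like the following in a user's config. This is a minimal sketch only: the option names (`context_management`, `enabled`, `trigger_tokens`) are illustrative assumptions, not the confirmed schema, so consult the configuration docs linked above for the real keys.

```lua
-- Hypothetical sketch of customising context management.
-- The option names below are assumptions for illustration;
-- see /configuration/chat-buffer#context-management for the actual schema.
require("codecompanion").setup({
  strategies = {
    chat = {
      context_management = {
        enabled = true,          -- context management is on by default
        trigger_tokens = 150000, -- act before the model's window is breached
      },
    },
  },
})
```

The idea is that CodeCompanion compares the chat buffer's running token count against `trigger_tokens` and takes preventative action (such as compaction) before the model's context window is actually exceeded.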
