@@ -106,39 +106,39 @@ CodinIT can automatically shorten long conversations:
 - Search instead of reading many files
 - Break work into smaller pieces
 
-## Advanced context management
+## Advanced Tips
 
-### Plan/Act mode optimization
+### Use Plan/Act Mode Smartly
 
-Leverage Plan/Act mode for better context usage:
+Use different AI models for different tasks:
 
-- **Plan Mode**: Use smaller context for discussion and planning
-- **Act Mode**: Include necessary files when you're ready to write code
+- **Plan Mode**: Use cheaper AI for talking and planning
+- **Act Mode**: Use better AI when writing code
 
-Configuration example:
+Example:
 
 ```
-Plan Mode: DeepSeek V3 (128K) - Lower cost planning
-Act Mode: Claude Sonnet (1M) - Maximum context for coding
+Plan Mode: DeepSeek V3 (128K) - Cheap for planning
+Act Mode: Claude Sonnet (1M) - Better for coding
 ```
 
-### Context pruning strategies
+### How CodinIT Saves Space
 
-These are ways CodinIT can reduce the amount of text in your context window:
+CodinIT can automatically remove less important text:
 
-1. **Temporal pruning**: Remove older parts of your conversation that are no longer relevant
-2. **Semantic pruning**: Keep only the code sections related to your current task
-3. **Hierarchical pruning**: Keep the big picture but remove fine details
+1. **Remove old messages**: Delete old parts of the conversation
+2. **Keep relevant code**: Only keep code related to what you're doing now
+3. **Keep summaries**: Keep the main ideas but remove details
 
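The first strategy above can be sketched in a few lines: drop the oldest messages until the conversation fits a token budget. This is a hypothetical illustration (the function and message format are made up, not CodinIT's actual pruning code), using the ~4 characters per token rule of thumb from the section below.

```python
def prune_oldest(messages: list[str], budget: int) -> list[str]:
    """Drop the oldest messages until the estimated token count fits the budget.

    Hypothetical sketch, not CodinIT's real implementation.
    """
    def tokens(msg: str) -> int:
        # Rough estimate: ~4 characters per token
        return max(1, len(msg) // 4)

    kept = list(messages)
    # Always keep at least the newest message, even if it exceeds the budget
    while len(kept) > 1 and sum(tokens(m) for m in kept) > budget:
        kept.pop(0)  # remove the oldest message first
    return kept

history = ["old question " * 50, "old answer " * 50, "current task"]
print(prune_oldest(history, budget=100))  # → ['current task']
```

Real pruning also has to respect message structure (system prompts, tool results), which is why CodinIT handles it automatically rather than simply truncating text.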
-### Token counting tips
+### Counting Tokens
 
-#### Rough estimates
+#### Quick math
 
-- **1 token ≈ 0.75 words** (so 1,000 tokens is about 750 words)
-- **1 token ≈ 4 characters**
+- **1 token ≈ 3/4 of a word** (1,000 tokens = about 750 words)
+- **1 token ≈ 4 letters**
 - **100 lines of code ≈ 500-1000 tokens**
 
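The rules of thumb above are easy to turn into a quick estimator. This is a hypothetical helper (not part of CodinIT, and only an approximation; exact counts depend on the model's tokenizer):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters per token rule of thumb."""
    return max(1, len(text) // 4)

# ~750 five-character words is 3,750 characters,
# which lands near the expected ~1,000 tokens
sample = "word " * 750
print(estimate_tokens(sample))  # → 937
```

For precise numbers, rely on the token meter CodinIT shows in the interface rather than an estimate like this.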
-#### File size guidelines
+#### File sizes
 
 | File Type      | Tokens per KB |
 | -------------- | ------------- |
@@ -147,34 +147,34 @@ These are ways CodinIT can reduce the amount of text in your context window:
 | **Markdown**   | ~200-300      |
 | **Plain text** | ~200-250      |
 
-## Context window FAQ
+## Common Questions
 
-### Q: Why do responses get worse with very long conversations?
+### Q: Why does the AI get worse with long conversations?
 
-**A:** Models can lose focus with too much context. The "effective window" is typically 50-70% of the advertised limit.
+**A:** The AI loses focus when there's too much to remember. It works best when you stay under 50-70% of its maximum.
 
-### Q: Should I use the largest context window available?
+### Q: Should I always use the biggest context window?
 
-**A:** Not always. Larger contexts increase cost and can reduce response quality. Match the context to your task size.
+**A:** No. Bigger contexts cost more and can make the AI worse. Use what you need for your task.
 
-### Q: How can I tell how much context I'm using?
+### Q: How do I know how much context I'm using?
 
-**A:** CodinIT shows token usage in the interface. Watch for the context meter approaching limits.
+**A:** CodinIT shows you a meter. Watch it to see when you're getting close to the limit.
 
-### Q: What happens when I exceed the context limit?
+### Q: What happens if I go over the limit?
 
-**A:** CodinIT will either:
+**A:** CodinIT will do one of these:
 
-- Automatically compact the conversation (if enabled)
-- Show an error and suggest starting a new task
-- Truncate older messages (with warning)
+- Automatically shorten the conversation (if you turned that on)
+- Show an error and tell you to start a new chat
+- Remove old messages (with a warning)
 
-## Recommendations by use case
+## What to Use When
 
-| Use Case                | Recommended Context | Model Suggestion  |
+| What You're Doing       | Context Size        | Which AI          |
 | ----------------------- | ------------------- | ----------------- |
 | **Quick fixes**         | 32K-128K            | DeepSeek V3       |
-| **Feature development** | 128K-200K           | Qwen3 Coder       |
-| **Large refactoring**   | 400K+               | Claude Sonnet 4.5 |
-| **Code review**         | 200K-400K           | GPT-5             |
-| **Documentation**       | 128K                | Any budget model  |
+| **Building features**   | 128K-200K           | Qwen3 Coder       |
+| **Big changes**         | 400K+               | Claude Sonnet 4.5 |
+| **Reviewing code**      | 200K-400K           | GPT-5             |
+| **Writing docs**        | 128K                | Any cheap model   |