Commit d9c6e43

fix(ai): increase OpenRouter max_tokens to 16384
A 4096-token cap truncates extraction responses from verbose models via OpenRouter. Bump to 16384 as an interim fix; the proper solution is catalog-driven token limits per model.
1 parent 291ac94 commit d9c6e43

1 file changed

Lines changed: 1 addition & 1 deletion

api/app/lib/ai_providers.py

```diff
@@ -1028,7 +1028,7 @@ def extract_concepts(
             {"role": "system", "content": system_prompt},
             {"role": "user", "content": f"Text to analyze:\n\n{text}"},
         ],
-        max_tokens=4096,
+        max_tokens=16384,
         temperature=0.3,
         response_format={"type": "json_object"},
     )
```
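The commit message flags the hard-coded 16384 as an interim fix and names catalog-driven per-model token limits as the proper solution. A minimal sketch of what that could look like, assuming a simple dict-based catalog; the names (`MODEL_MAX_TOKENS`, `max_tokens_for`) and the specific model caps are hypothetical, not the project's actual API:

```python
# Interim fallback matching this commit's hard-coded value.
DEFAULT_MAX_TOKENS = 16384

# Hypothetical catalog mapping OpenRouter model IDs to output-token caps.
# Real caps would come from the provider's model metadata, not guesses.
MODEL_MAX_TOKENS = {
    "anthropic/claude-3.5-sonnet": 8192,
    "openai/gpt-4o": 16384,
}


def max_tokens_for(model: str) -> int:
    """Look up the per-model output-token limit, falling back to the default."""
    return MODEL_MAX_TOKENS.get(model, DEFAULT_MAX_TOKENS)
```

The call site would then pass `max_tokens=max_tokens_for(model)` instead of a constant, so verbose models get headroom without over-allocating for models with smaller output limits.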
