Summary
The Braintrust LiteLLM integration instruments completion, acompletion, responses, aresponses, image_generation, aimage_generation, embedding, aembedding, moderation, speech, aspeech, transcription, atranscription, rerank, and arerank. However, LiteLLM also exposes text_completion() and atext_completion() for legacy OpenAI-style text completions (the /completions endpoint), and these are completely uninstrumented.
text_completion() is a documented, actively supported LiteLLM function that provides access to text completion models like gpt-3.5-turbo-instruct and text-davinci-003. It accepts a prompt string (not chat messages) and returns a TextCompletionResponse. LiteLLM translates this across all supported providers. The async variant atext_completion() also exists (confirmed via LiteLLM Router and GitHub issue BerriAI/litellm#7105).
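To make the API difference concrete, here is a hedged sketch of the two call shapes. The payloads below follow LiteLLM's documented signatures (messages list vs. prompt string); the model names are examples, and the litellm call itself is shown but not executed here.

```python
# Illustrative request payloads for LiteLLM's two completion APIs.

# Chat completion: completion()/acompletion() take a list of chat messages.
chat_kwargs = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Say hello"}],
}

# Legacy text completion: text_completion()/atext_completion() take a prompt
# string and target the OpenAI-style /completions endpoint.
text_kwargs = {
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "Say hello",
}

# Usage (requires litellm and provider credentials; not run here):
# import litellm
# resp = litellm.text_completion(**text_kwargs)
# print(resp.choices[0].text)
```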
The Braintrust LiteLLM docs page states the integration traces "Chat and text completion / acompletion calls", but the code only instruments completion()/acompletion() (chat completions), not the separate text_completion()/atext_completion() functions.
What is missing
No tracing spans are created when users call litellm.text_completion() or litellm.atext_completion() through either wrap_litellm() or patch_litellm().
There are:
- No text_completion or atext_completion patchers in py/src/braintrust/integrations/litellm/patchers.py
- No _text_completion_wrapper or _atext_completion_wrapper_async in py/src/braintrust/integrations/litellm/tracing.py
- No text_completion test coverage in py/src/braintrust/integrations/litellm/test_litellm.py
- Zero matches for text_completion anywhere in py/src/braintrust/integrations/litellm/
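For illustration, the wrapper the integration would need follows the same record-inputs, call-through, record-outputs pattern as the existing instrumentation. The sketch below uses a plain dict as a stand-in for a Braintrust span and a toy function in place of litellm.text_completion; none of these names are the SDK's real API.

```python
import functools

# Minimal sketch of the tracing-wrapper pattern. The "span" here is a plain
# dict stand-in, not Braintrust's actual span object.
def trace_text_completion(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        span = {"name": "litellm.text_completion", "input": kwargs.get("prompt")}
        result = fn(*args, **kwargs)
        span["output"] = result
        wrapper.last_span = span  # stand-in for exporting the span
        return result
    return wrapper

# Toy stand-in for litellm.text_completion (no network, no real model).
def fake_text_completion(model, prompt):
    return {"choices": [{"text": prompt.upper()}]}

traced = trace_text_completion(fake_text_completion)
out = traced(model="gpt-3.5-turbo-instruct", prompt="hi")
```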
Current _ALL_LITELLM_PATCHERS in patchers.py (lines 125–141):

| Function | Patched? |
| --- | --- |
| litellm.completion / acompletion | Yes |
| litellm.responses / aresponses | Yes |
| litellm.image_generation / aimage_generation | Yes |
| litellm.embedding / aembedding | Yes |
| litellm.moderation | Yes (sync only) |
| litellm.speech / aspeech | Yes |
| litellm.transcription / atranscription | Yes |
| litellm.rerank / arerank | Yes |
| litellm.text_completion / atext_completion | No |
Implementation notes
text_completion() returns a TextCompletionResponse with a similar structure to chat completions but uses choices[].text instead of choices[].message. The existing _completion_wrapper logic in tracing.py could likely be adapted with minor changes to handle the text completion response shape. Token usage fields (prompt_tokens, completion_tokens, total_tokens) are the same.
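The adaptation described above can be sketched with plain dicts that mimic the two response shapes. The field names (choices[].text, choices[].message.content, and the shared usage block) follow the OpenAI-style schemas LiteLLM mirrors; the helper itself is illustrative, not code from tracing.py.

```python
# Extract span output and token metrics from either response shape,
# using dicts to mimic TextCompletionResponse and the chat ModelResponse.
def extract_output_and_metrics(response: dict) -> tuple:
    outputs = []
    for choice in response.get("choices", []):
        if "text" in choice:               # TextCompletionResponse shape
            outputs.append(choice["text"])
        elif "message" in choice:          # chat-completion shape
            outputs.append(choice["message"].get("content"))
    # Token usage fields are identical across both shapes.
    usage = response.get("usage", {})
    metrics = {
        "prompt_tokens": usage.get("prompt_tokens"),
        "completion_tokens": usage.get("completion_tokens"),
        "total_tokens": usage.get("total_tokens"),
    }
    return outputs, metrics

text_resp = {
    "choices": [{"text": "Hello!"}],
    "usage": {"prompt_tokens": 3, "completion_tokens": 2, "total_tokens": 5},
}
outputs, metrics = extract_output_and_metrics(text_resp)
```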
Braintrust docs status
The Braintrust LiteLLM integration page mentions "text completion" in its description, but the SDK does not actually instrument text_completion().
Upstream sources
- atext_completion confirmed via: [Bug]: atext_completion for Codestral streaming response (BerriAI/litellm#7105)
Local repo files inspected
py/src/braintrust/integrations/litellm/patchers.py — 15 patchers defined; none for text_completion or atext_completion
py/src/braintrust/integrations/litellm/tracing.py — no text_completion wrapper functions
py/src/braintrust/integrations/litellm/test_litellm.py — no text_completion tests
py/src/braintrust/integrations/litellm/__init__.py
Relationship to existing issues
- #115 (closed) "patch_litellm() does not patch embedding or moderation; aembedding missing entirely": covered patch_litellm() gaps for embedding/moderation but did not mention text_completion
- #165 (closed) "speech()/aspeech() not instrumented": covered speech/aspeech but did not mention text_completion
- #266 (closed) "rerank()/arerank() not instrumented": covered rerank/arerank but did not mention text_completion
- No existing issue mentions text_completion in this repo