
[bot] LiteLLM: text_completion() / atext_completion() not instrumented #401

@braintrust-bot


Summary

The Braintrust LiteLLM integration instruments completion, acompletion, responses, aresponses, image_generation, aimage_generation, embedding, aembedding, moderation, speech, aspeech, transcription, atranscription, rerank, and arerank. However, LiteLLM also exposes text_completion() and atext_completion() for legacy OpenAI-style text completions (the /completions endpoint), and these are completely uninstrumented.

text_completion() is a documented, actively supported LiteLLM function that provides access to text completion models like gpt-3.5-turbo-instruct and text-davinci-003. It accepts a prompt string (not chat messages) and returns a TextCompletionResponse. LiteLLM translates this across all supported providers. The async variant atext_completion() also exists (confirmed via LiteLLM Router and GitHub issue BerriAI/litellm#7105).
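For reference, the call shape differs from chat completions in its input (a prompt string) and in its response, which uses `choices[].text` rather than `choices[].message`. A minimal sketch (the commented-out call is hypothetical in its values and requires the `litellm` package plus provider credentials; the dict below only mimics the `TextCompletionResponse` shape):

```python
# The real call looks roughly like this (not executed here):
#
#   import litellm
#   resp = litellm.text_completion(
#       model="gpt-3.5-turbo-instruct",
#       prompt="Say hello",  # a prompt string, not a chat messages list
#   )

def extract_text(response: dict) -> str:
    """Pull the completion text out of a TextCompletionResponse-shaped dict.

    Chat completions expose choices[].message; text completions expose
    choices[].text instead.
    """
    return response["choices"][0]["text"]

# A plain dict mimicking the TextCompletionResponse shape:
fake_response = {
    "choices": [{"text": "Hello!", "index": 0, "finish_reason": "stop"}],
    "usage": {"prompt_tokens": 3, "completion_tokens": 2, "total_tokens": 5},
}
print(extract_text(fake_response))  # Hello!
```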

The Braintrust LiteLLM docs page states the integration traces "Chat and text completion / acompletion calls", but the code only instruments completion()/acompletion() (chat completions), not the separate text_completion()/atext_completion() functions.

What is missing

No tracing spans are created when users call litellm.text_completion() or litellm.atext_completion() through either wrap_litellm() or patch_litellm().

There are:

  • No text_completion or atext_completion patchers in py/src/braintrust/integrations/litellm/patchers.py
  • No _text_completion_wrapper or _atext_completion_wrapper_async in py/src/braintrust/integrations/litellm/tracing.py
  • No text_completion test coverage in py/src/braintrust/integrations/litellm/test_litellm.py
  • Zero matches for text_completion anywhere in py/src/braintrust/integrations/litellm/

Current _ALL_LITELLM_PATCHERS in patchers.py (lines 125–141):

| Function | Patched? |
| --- | --- |
| `litellm.completion` / `acompletion` | Yes |
| `litellm.responses` / `aresponses` | Yes |
| `litellm.image_generation` / `aimage_generation` | Yes |
| `litellm.embedding` / `aembedding` | Yes |
| `litellm.moderation` | Yes (sync only) |
| `litellm.speech` / `aspeech` | Yes |
| `litellm.transcription` / `atranscription` | Yes |
| `litellm.rerank` / `arerank` | Yes |
| `litellm.text_completion` / `atext_completion` | **No** |
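Closing the gap would presumably mean registering two more patcher entries mirroring the `completion` / `acompletion` pair. A hedged sketch of the general patch-and-restore pattern, using a stand-in namespace instead of the real `litellm` module (the name `_text_completion_wrapper` is hypothetical, and this is not Braintrust's actual patcher API):

```python
import functools
from types import SimpleNamespace

def patch_function(module, name, wrapper_factory):
    """Replace module.<name> with a wrapped version; returns an undo callable."""
    original = getattr(module, name)
    setattr(module, name, wrapper_factory(original))
    return lambda: setattr(module, name, original)

def _text_completion_wrapper(fn):
    """Hypothetical wrapper: records each call (stand-in for opening a span)."""
    @functools.wraps(fn)
    def inner(*args, **kwargs):
        inner.calls.append((args, kwargs))  # a real patcher would start a trace span here
        return fn(*args, **kwargs)
    inner.calls = []
    return inner

# Stand-in for the litellm module:
fake_litellm = SimpleNamespace(
    text_completion=lambda model, prompt: {"choices": [{"text": "ok"}]}
)

undo = patch_function(fake_litellm, "text_completion", _text_completion_wrapper)
result = fake_litellm.text_completion(model="gpt-3.5-turbo-instruct", prompt="hi")
```

An async `atext_completion` entry would follow the same shape with an `async def` wrapper.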

Implementation notes

text_completion() returns a TextCompletionResponse with a similar structure to chat completions but uses choices[].text instead of choices[].message. The existing _completion_wrapper logic in tracing.py could likely be adapted with minor changes to handle the text completion response shape. Token usage fields (prompt_tokens, completion_tokens, total_tokens) are the same.
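The adaptation described above might look like the following response-parsing helper. This is a sketch under stated assumptions: the field mapping follows the response shape described in this issue, and the output/metrics dict layout is illustrative, not Braintrust's actual span schema:

```python
def parse_text_completion_response(resp: dict) -> dict:
    """Map a TextCompletionResponse-shaped dict to hypothetical span fields.

    The only structural change from the chat-completion path is reading
    choices[].text instead of choices[].message; the usage fields
    (prompt_tokens, completion_tokens, total_tokens) are identical.
    """
    usage = resp.get("usage", {})
    return {
        "output": [c["text"] for c in resp.get("choices", [])],
        "metrics": {
            "prompt_tokens": usage.get("prompt_tokens"),
            "completion_tokens": usage.get("completion_tokens"),
            "tokens": usage.get("total_tokens"),
        },
    }

sample = {
    "choices": [{"text": "Hello!"}],
    "usage": {"prompt_tokens": 3, "completion_tokens": 2, "total_tokens": 5},
}
parsed = parse_text_completion_response(sample)
```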

Braintrust docs status

The Braintrust LiteLLM integration page mentions "text completion" in its description, but the SDK does not actually instrument text_completion().

Local repo files inspected

  • py/src/braintrust/integrations/litellm/patchers.py — 15 patchers defined; none for text_completion or atext_completion
  • py/src/braintrust/integrations/litellm/tracing.py — no text_completion wrapper functions
  • py/src/braintrust/integrations/litellm/test_litellm.py — no text_completion tests
  • py/src/braintrust/integrations/litellm/__init__.py
