
merge master

aac8a1d
Merged

test(langchain): Add text completion test #5740

@sentry-warden / warden: find-bugs completed Mar 24, 2026 in 2m 48s


find-bugs: Found 2 issues (2 high)

High

Test filters for wrong span operation type, will never match LLM spans - `tests/integrations/langchain/test_langchain.py:156`

The test filters for spans with `op == "gen_ai.pipeline"`, but the langchain integration's `on_llm_start` method creates spans with `op = OP.GEN_AI_GENERATE_TEXT`, which equals `"gen_ai.generate_text"`. This causes the filter to find zero spans, so the assertion `assert len(llm_spans) > 0` will fail, making the test ineffective.
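The mismatch can be reproduced in isolation. A minimal sketch, assuming `OP.GEN_AI_GENERATE_TEXT` resolves to the string `"gen_ai.generate_text"` as the report states; the `spans` list is a stand-in for the captured event data, not the actual test fixture:

```python
# Stand-in for sentry_sdk.consts.OP.GEN_AI_GENERATE_TEXT (assumed value).
GEN_AI_GENERATE_TEXT = "gen_ai.generate_text"

# Stand-in for spans captured from an instrumented LLM call.
spans = [{"op": GEN_AI_GENERATE_TEXT, "description": "generate_text gpt-3.5-turbo"}]

# The filter as written in the test: matches nothing, so the
# `assert len(llm_spans) > 0` that follows it always fails.
broken = [span for span in spans if span["op"] == "gen_ai.pipeline"]
print(len(broken))  # 0

# Filtering on the op the integration actually sets finds the span.
llm_spans = [span for span in spans if span["op"] == GEN_AI_GENERATE_TEXT]
print(len(llm_spans))  # 1
```

The fix, then, is to change the filter string in the test, not the integration.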

Test expects incorrect span description "Langchain LLM call" instead of "generate_text gpt-3.5-turbo" - `tests/integrations/langchain/test_langchain.py:161`

The test asserts that `llm_span["description"] == "Langchain LLM call"`, but the langchain integration's `on_llm_start` method sets the span name (which becomes the description) to `f"generate_text {model}".strip()`. For the model "gpt-3.5-turbo", this would be "generate_text gpt-3.5-turbo".
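The expected description follows directly from that format string. A quick check, using the model name from the report:

```python
# The span name the integration reportedly builds in on_llm_start.
model = "gpt-3.5-turbo"
description = f"generate_text {model}".strip()
print(description)  # generate_text gpt-3.5-turbo

# So the test's assertion should compare against this value,
# not the hardcoded "Langchain LLM call".
assert description == "generate_text gpt-3.5-turbo"
```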


Duration: 2m 40s · Tokens: 918.4k in / 6.2k out · Cost: $1.57 (+extraction: $0.01, +merge: $0.00, +fix_gate: $0.00)
