AWS Bedrock Model Cost Tracking Support #1202

@devin-ai-integration

Description

AWS Bedrock Model Cost Tracking Support

Problem Description

Users integrating AgentOps with AWS Bedrock models are encountering cost tracking failures with the warning:

"Unable to calculate cost - This might be because you're using an unrecognized model."

This prevents essential cost monitoring and budget management for production deployments using AWS Bedrock.

User Impact

Affected Users:

  • Developers using CrewAI with AWS Bedrock LLM integration
  • Teams making direct AWS Bedrock API calls
  • Any framework using Bedrock model identifiers

Business Impact:

  • ❌ No visibility into LLM spending for budget management
  • ❌ Blocks production deployments requiring cost monitoring
  • ❌ Affects AgentOps adoption in AWS-centric environments
  • ❌ Forces manual cost calculation workarounds

Current Behavior

Model Example: bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0

Framework: CrewAI with AWS Bedrock LLM integration

Issue: Cost metrics show $0.00 alongside the warning message instead of calculated costs

Expected Behavior

AgentOps should automatically recognize AWS Bedrock model identifiers and calculate costs based on current Bedrock pricing (Claude 3.5 Sonnet rates shown):

  • Input tokens: $3.00 per 1M tokens
  • Output tokens: $15.00 per 1M tokens

Technical Analysis

Current Cost Tracking Architecture

  1. Core Cost Calculation: app/api/agentops/api/models/span_metrics.py line 277

    completion_cost = costs.calculate_cost_by_tokens(tokens, self.model_for_cost, direction)
  2. Model Lookup System: Lines 20-24 in span_metrics.py

    MODEL_LOOKUP_ALIASES = {
        "sonar-pro": "perplexity/sonar-pro",
        "sonar": "perplexity/sonar",
    }
  3. TokenCost Integration: Uses tokencost library for model pricing data

    • app/api/agentops/api/event_handlers.py line 6: from tokencost import TOKEN_COSTS
    • app/opentelemetry-collector/builder/costs/__init__.py: Model cost management
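Tying these pieces together, the current resolution path can be sketched as follows. This is a hedged sketch based on the file references above, not the actual implementation; `model_for_cost` mirrors the property name mentioned, and the alias substitution happens before the tokencost pricing lookup.

```python
# Sketch of the current lookup path: alias substitution first, then a
# direct lookup against tokencost's pricing table (names assumed from
# the references above; illustrative only).
MODEL_LOOKUP_ALIASES = {
    "sonar-pro": "perplexity/sonar-pro",
    "sonar": "perplexity/sonar",
}

def model_for_cost(model: str) -> str:
    """Apply any known alias before the pricing lookup."""
    return MODEL_LOOKUP_ALIASES.get(model, model)
```

With this shape, `model_for_cost("sonar")` resolves to `"perplexity/sonar"`, while an unaliased Bedrock identifier passes through unchanged and then misses in tokencost's pricing table.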

Root Cause

The tokencost library doesn't include AWS Bedrock model naming patterns like:

  • bedrock/anthropic.claude-3-5-sonnet-*
  • bedrock/anthropic.claude-3-sonnet-*
  • bedrock/anthropic.claude-3-haiku-*
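A minimal illustration of the failure mode: the Bedrock-style identifier is not an exact key in the pricing table, so the lookup falls through. The table below is a stand-in for tokencost's data (keys follow the mapping used later in this issue; the values are not tokencost's real contents).

```python
# Stand-in pricing table; keys and rates are illustrative, not
# tokencost's actual data.
PRICING = {
    "anthropic/claude-3-5-sonnet-20240620": {"input_per_1m": 3.00, "output_per_1m": 15.00},
}

model = "bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0"
recognized = model in PRICING  # False -> the "unrecognized model" warning fires
```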

Proposed Solutions

1. Extend Model Lookup Aliases (Quick Fix)

Add Bedrock patterns to MODEL_LOOKUP_ALIASES in span_metrics.py:

MODEL_LOOKUP_ALIASES = {
    "sonar-pro": "perplexity/sonar-pro",
    "sonar": "perplexity/sonar",
    # AWS Bedrock mappings
    "bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0": "anthropic/claude-3-5-sonnet-20240620",
    "bedrock/anthropic.claude-3-sonnet-20240229-v1:0": "anthropic/claude-3-sonnet-20240229",
    "bedrock/anthropic.claude-3-haiku-20240307-v1:0": "anthropic/claude-3-haiku-20240307",
}

2. Pattern Matching Support (Recommended)

Implement regex/wildcard pattern matching in model_for_cost property to handle versioned identifiers automatically:

import re
from typing import Optional

def _resolve_bedrock_model(self, model_name: str) -> Optional[str]:
    """Convert Bedrock model names to their tokencost equivalents."""
    bedrock_patterns = {
        r"bedrock/anthropic\.claude-3-5-sonnet-.*": "anthropic/claude-3-5-sonnet-20240620",
        r"bedrock/anthropic\.claude-3-sonnet-.*": "anthropic/claude-3-sonnet-20240229",
        r"bedrock/anthropic\.claude-3-haiku-.*": "anthropic/claude-3-haiku-20240307",
    }

    for pattern, mapped_model in bedrock_patterns.items():
        if re.match(pattern, model_name):
            return mapped_model
    return None
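Assuming a `model_for_cost` property like the one referenced in the technical analysis, the resolver could slot in as a fallback after the alias lookup. The sketch below is standalone and illustrative; the alias dict is truncated and only one Bedrock pattern is shown.

```python
import re

# Illustrative subsets of the alias dict and pattern table above.
MODEL_LOOKUP_ALIASES = {"sonar": "perplexity/sonar"}
BEDROCK_PATTERNS = {
    r"bedrock/anthropic\.claude-3-5-sonnet-.*": "anthropic/claude-3-5-sonnet-20240620",
}

def model_for_cost(model: str) -> str:
    """Alias lookup first, then Bedrock pattern fallback, else pass through."""
    if model in MODEL_LOOKUP_ALIASES:
        return MODEL_LOOKUP_ALIASES[model]
    for pattern, mapped in BEDROCK_PATTERNS.items():
        if re.match(pattern, model):
            return mapped
    return model
```

Any versioned identifier matching the prefix (e.g. a future `-v2:0` release) then resolves without a new alias entry.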

3. Custom Model Configuration API (Future Enhancement)

Allow users to configure custom model pricing:

agentops.configure_model(
    model_name="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",
    input_cost_per_1k_tokens=0.003,
    output_cost_per_1k_tokens=0.015
)

4. Upstream TokenCost Contribution

Contribute Bedrock model definitions to the tokencost library's model_prices.json.

Implementation Priority

  1. High Priority: Pattern matching support (Solution 2) - Handles current and future Bedrock models
  2. Medium Priority: Model lookup aliases (Solution 1) - Quick fix for immediate relief
  3. Low Priority: Custom configuration API (Solution 3) - Advanced use cases
  4. Ongoing: Upstream contribution (Solution 4) - Long-term solution

Acceptance Criteria

  • Bedrock model identifiers are recognized and mapped to pricing data
  • Cost calculations work for all major Bedrock Anthropic models
  • Pattern matching handles versioned model identifiers automatically
  • No breaking changes to existing cost tracking functionality
  • Unit tests cover new Bedrock model mapping logic
  • Documentation updated with Bedrock cost tracking examples

Related Files

  • app/api/agentops/api/models/span_metrics.py - Core cost calculation logic
  • app/api/agentops/api/event_handlers.py - TokenCost integration
  • app/opentelemetry-collector/builder/costs/__init__.py - Model cost management
  • docs/v2/integrations/ - Integration documentation

Test Case

# Should calculate costs instead of returning $0.00
model = "bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0"
prompt_tokens = 1000
completion_tokens = 500

# At $3.00 per 1M input tokens and $15.00 per 1M output tokens:
expected_prompt_cost = prompt_tokens * 3.00 / 1_000_000               # $0.003
expected_completion_cost = completion_tokens * 15.00 / 1_000_000      # $0.0075
expected_total_cost = expected_prompt_cost + expected_completion_cost  # $0.0105

Labels: enhancement, cost-tracking, aws-bedrock, crewai-integration
Priority: High
Effort: Medium
