Contributing to TempoEval

Thank you for your interest in contributing to TempoEval! πŸŽ‰

πŸš€ Getting Started

Development Setup

  1. Fork and clone the repository

    git clone https://github.com/YOUR_USERNAME/tempoeval.git
    cd tempoeval
  2. Create a virtual environment

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install in development mode

    pip install -e ".[dev]"
  4. Install pre-commit hooks (optional but recommended)

    pip install pre-commit
    pre-commit install
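The hooks that run are defined by the repository's `.pre-commit-config.yaml`. As a rough sketch of what such a config typically looks like for the tools listed below (the actual file and pinned versions may differ), it wires Black, isort, and Ruff into pre-commit:

```yaml
# Illustrative only — check the repository's .pre-commit-config.yaml
# for the real hook list and pinned versions.
repos:
  - repo: https://github.com/psf/black
    rev: 24.8.0
    hooks:
      - id: black
        args: [--line-length=120]
  - repo: https://github.com/PyCQA/isort
    rev: 5.13.2
    hooks:
      - id: isort
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.0
    hooks:
      - id: ruff
```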

Running Tests

# Run all tests
pytest tests/ -v

# Run with coverage
pytest tests/ --cov=tempoeval --cov-report=html

# Run specific test file
pytest tests/test_core.py -v

πŸ“ How to Contribute

Reporting Bugs

  1. Check existing issues to avoid duplicates
  2. Create a new issue with:
    • Clear title and description
    • Steps to reproduce
    • Expected vs actual behavior
    • Python version and OS
    • Minimal code example if possible

Suggesting Features

  1. Open a discussion or issue describing:
    • The problem you're trying to solve
    • Your proposed solution
    • Alternative approaches considered

Submitting Pull Requests

  1. Create a feature branch

    git checkout -b feature/amazing-feature
  2. Make your changes

    • Follow the code style (we use Black for formatting)
    • Add tests for new functionality
    • Update documentation if needed
  3. Run tests locally

    pytest tests/ -v
  4. Commit your changes

    git commit -m "feat: add amazing feature"

    We follow Conventional Commits:

    • feat: - New feature
    • fix: - Bug fix
    • docs: - Documentation only
    • test: - Adding tests
    • refactor: - Code refactoring
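    Putting the prefixes together, complete commit messages in this style might look like the following (the feature names are made up for illustration):

    ```
    feat: add recency-weighted retrieval metric
    fix: handle empty document lists in the evaluator
    docs: clarify FocusTime usage examples
    test: cover TempoScore edge cases
    ```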
  5. Push and create PR

    git push origin feature/amazing-feature

🎨 Code Style

  • Formatting: We use Black with line length 120
  • Imports: Sorted with isort
  • Linting: Ruff for fast linting
  • Type hints: Encouraged for public APIs
# Format code
black tempoeval/ --line-length=120
isort tempoeval/

# Check linting
ruff check tempoeval/

πŸ“ Project Structure

tempoeval/
├── core/           # Core classes (FocusTime, Evaluator, etc.)
├── metrics/        # All temporal metrics
│   ├── retrieval/  # Layer 1: Retrieval metrics
│   ├── generation/ # Layer 2: Generation metrics
│   ├── reasoning/  # Layer 3: Reasoning metrics
│   └── composite/  # TempoScore
├── llm/            # LLM provider integrations
├── datasets/       # Dataset loaders
├── guidance/       # Temporal guidance generation
├── efficiency/     # Cost & latency tracking
└── utils/          # Utility functions

πŸ§ͺ Adding New Metrics

  1. Create metric file in appropriate directory (metrics/retrieval/, etc.)
  2. Inherit from base class (BaseRetrievalMetric, BaseGenerationMetric, etc.)
  3. Implement required methods:
    • compute() - Synchronous computation
    • acompute() - Async computation (if LLM-based)
  4. Add to exports in __init__.py
  5. Write tests in tests/
  6. Add documentation

Example:

from tempoeval.core.base import BaseRetrievalMetric

class MyNewMetric(BaseRetrievalMetric):
    name = "my_new_metric"
    requires_llm = False

    def compute(self, **kwargs) -> float:
        # Compute and return the metric score
        score = 0.0  # replace with your actual computation
        return score
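Since compute() takes keyword arguments and returns a float, a pytest-style test for a new metric is straightforward. The sketch below is self-contained so it runs without tempoeval installed: it uses a stand-in base class and a toy precision-style scoring rule, both of which are illustrative assumptions — in a real test you would import BaseRetrievalMetric and your metric from the package.

```python
# Stand-in for tempoeval.core.base.BaseRetrievalMetric (assumption:
# the real base class carries at least `name` and `requires_llm`).
class BaseRetrievalMetric:
    name: str = ""
    requires_llm: bool = False

class MyNewMetric(BaseRetrievalMetric):
    name = "my_new_metric"
    requires_llm = False

    def compute(self, **kwargs) -> float:
        # Toy scoring rule: fraction of retrieved items that are relevant.
        retrieved = kwargs.get("retrieved", [])
        relevant = set(kwargs.get("relevant", []))
        if not retrieved:
            return 0.0
        hits = sum(1 for doc in retrieved if doc in relevant)
        return hits / len(retrieved)

def test_my_new_metric():
    metric = MyNewMetric()
    assert metric.name == "my_new_metric"
    assert metric.compute(retrieved=["a", "b"], relevant=["a"]) == 0.5
    assert metric.compute(retrieved=[], relevant=["a"]) == 0.0
```

Run it with `pytest tests/ -v` like any other test; pytest collects any function whose name starts with `test_`.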

πŸ“– Documentation

  • Documentation is built with MkDocs
  • API docs are auto-generated from docstrings
  • Update docs/ for new features
# Preview documentation locally
mkdocs serve

πŸ™ Thank You!

Every contribution, no matter how small, helps make TempoEval better for everyone. We appreciate your time and effort!


Questions? Open an issue or reach out to the maintainers.