This file provides guidance to GitHub Copilot when working with code in this repository.
- No Co-authored-by in commit messages: Never add `Co-authored-by:` lines to commit messages. Keep commit messages clean without AI attribution footers.
- Always write in English: All output text (code comments, commit messages, PR descriptions, issue comments, documentation) must be in English, even if the user writes in another language.
Good tasks for Copilot:
- Bug fixes in existing plot implementations
- Adding new plot types following existing patterns
- Updating documentation
- Writing or improving unit tests
- Code refactoring within established patterns
- Fixing linting/formatting issues
- Updating dependencies (after checking security advisories)
Tasks requiring human review:
- Changes to core architecture or database schema
- Security-sensitive code (authentication, API keys, credentials)
- Complex algorithmic changes requiring domain expertise
- Breaking changes to public APIs
- Infrastructure or deployment configuration changes
How to iterate with Copilot:
- Use `@copilot` in PR comments to request changes or corrections
- Provide specific, actionable feedback referencing line numbers
- Link to relevant documentation or examples in the codebase
- Request explanations if the approach is unclear
pyplots is an AI-powered platform for Python data visualization that automatically discovers, generates, tests, and maintains plotting examples. The platform is specification-driven: every plot starts as a library-agnostic Markdown spec, then AI generates implementations for all supported libraries.
Supported Libraries (9 total):
- matplotlib, seaborn, plotly, bokeh, altair, plotnine, pygal, highcharts, lets-plot
Core Principle: Community proposes plot ideas via GitHub Issues → AI generates code → AI quality review → Deployed.
```bash
# Install dependencies (uses uv - fast Python package manager)
uv sync --all-extras

# Run all tests (unit + integration + e2e)
uv run pytest

# Run only unit tests
uv run pytest tests/unit

# Run only integration tests (uses SQLite in-memory)
uv run pytest tests/integration

# Run only E2E tests (requires DATABASE_URL)
uv run pytest tests/e2e

# Linting (required for CI)
uv run ruff check .

# Auto-fix linting issues
uv run ruff check . --fix

# Formatting (required for CI)
uv run ruff format .
```

Frontend commands:

```bash
cd app
yarn install
yarn dev    # Development server
yarn build  # Production build
```

Everything for one plot type lives in a single directory:
plots/{spec-id}/
├── specification.md # Library-agnostic description
├── specification.yaml # Tags, created, issue, suggested
├── metadata/ # Per-library metadata
│ ├── matplotlib.yaml
│ └── ...
└── implementations/ # Library implementations
├── matplotlib.py
├── seaborn.py
└── ...
Example: plots/scatter-basic/ contains everything for the basic scatter plot.
Format: {plot-type}-{variant}-{modifier} (all lowercase, hyphens only)
Examples: scatter-basic, scatter-color-mapped, bar-grouped-horizontal, heatmap-correlation
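The naming rule above can be checked with a small regex. This validator is a hypothetical helper for illustration, not part of the codebase:

```python
import re

# Hypothetical validator for the {plot-type}-{variant}-{modifier} convention:
# two or more lowercase alphanumeric segments joined by single hyphens.
SPEC_ID_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)+$")


def is_valid_spec_id(spec_id: str) -> bool:
    """Return True if spec_id is all lowercase with hyphen-separated segments."""
    return SPEC_ID_RE.fullmatch(spec_id) is not None
```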
- `plots/{spec-id}/`: Plot-centric directories (spec + metadata + implementations)
- `prompts/`: AI agent prompts for code generation and quality evaluation
- `core/`: Shared business logic (database, repositories, config)
- `core/database/types.py`: Custom SQLAlchemy types (PostgreSQL + SQLite compatibility)
- `api/`: FastAPI backend (routers, schemas, dependencies)
- `app/`: React frontend (React 19 + TypeScript + Vite 7 + MUI 7)
- `tests/unit/`: Unit tests with mocked dependencies
- `tests/integration/`: Integration tests with SQLite in-memory database
- `tests/e2e/`: End-to-end tests with full FastAPI stack
- `docs/`: Architecture and workflow documentation
- `spec-request` - New specification request
- `spec-update` - Update existing specification
- `spec-ready` - Specification merged to main, ready for implementations
- `generate:{library}` - Trigger single library generation (e.g., `generate:matplotlib`)
- `generate:all` - Trigger all 9 libraries via bulk-generate
- `impl:{library}:done` - Implementation merged to main
- `impl:{library}:failed` - Max retries exhausted (3 attempts)
- `approved` - Human approved specification for merge
- `ai-approved` - AI quality check passed (score >= 90, or >= 50 after 3 attempts)
- `ai-rejected` - AI quality check failed (score < 90), triggers repair loop
- `quality:XX` - Quality score (e.g., `quality:92`)
Specification Lifecycle:
[open] spec-request → approved → spec-ready [implementations can start]
Implementation PR Lifecycle:
[open] → impl-review → ai-approved (≥90) → impl-merge → impl:{library}:done
→ ai-rejected (<90) → impl-repair (×3) → ai-approved (≥50) or failed (<50)
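The score thresholds in the implementation lifecycle can be summarized as a small decision function. This is a sketch of the assumed labeling logic (the function name and return values are illustrative, not the project's actual workflow code):

```python
def review_label(score: int, attempt: int, max_attempts: int = 3) -> str:
    """Map an AI quality score to a PR outcome, per the lifecycle above.

    Assumed logic: a score >= 90 approves outright; on the final attempt
    a score >= 50 still approves; otherwise the PR enters the repair loop
    or, after max_attempts, fails.
    """
    if score >= 90:
        return "ai-approved"
    if attempt >= max_attempts:
        return "ai-approved" if score >= 50 else "failed"
    return "ai-rejected"
```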
- Linter/Formatter: Ruff (enforces PEP 8)
- Line Length: 120 characters
- Type Hints: Required for all functions
- Docstrings: Google style for all public functions
- Import Order: Standard library → Third-party → Local
- Coverage Target: 90%+
- Test Structure: Mirror source structure in `tests/unit/` and `tests/integration/`
- Naming: `test_{what_it_does}`
- Fixtures: Use pytest fixtures in `tests/conftest.py`
- Test Types:
  - Unit tests (`tests/unit/`): Fast, isolated, mocked dependencies
  - Integration tests (`tests/integration/`): SQLite in-memory for API tests
  - E2E tests (`tests/e2e/`): Real PostgreSQL with separate `test` database (skipped if DATABASE_URL not set)
- Markers: Use `@pytest.mark.unit`, `@pytest.mark.integration`, `@pytest.mark.e2e`
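A hypothetical unit test illustrating these conventions (the file path and test body are invented for the example; a real test would mirror the source layout, e.g. `tests/unit/test_spec_id.py`):

```python
import pytest


@pytest.mark.unit
def test_spec_id_is_lowercase() -> None:
    """Spec IDs must be lowercase with hyphens only."""
    spec_id = "scatter-basic"
    assert spec_id == spec_id.lower()
    assert " " not in spec_id
```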
Plot implementations are simple, readable scripts - like matplotlib gallery examples:

```python
""" pyplots.ai
scatter-basic: Basic Scatter Plot
Library: matplotlib 3.10.0 | Python 3.13
Quality: 92/100 | Created: 2025-01-10
"""

import matplotlib.pyplot as plt
import numpy as np

# Data
np.random.seed(42)
x = np.random.randn(100)
y = x * 0.8 + np.random.randn(100) * 0.5

# Plot
fig, ax = plt.subplots(figsize=(16, 9))
ax.scatter(x, y, alpha=0.7, s=50, color='#306998')
ax.set_title('Basic Scatter Plot')
plt.tight_layout()
plt.savefig('plot.png', dpi=300, bbox_inches='tight')
```

Header format: 4-line docstring with pyplots.ai branding, spec-id, library version, quality score.
Rules: No functions, no classes, no `if __name__ == '__main__'`. Just: imports → data → plot → save.
- No `preview.png` files in repository (stored in GCS)
- No hardcoded API keys (use environment variables)
- Backend: FastAPI, SQLAlchemy (async), PostgreSQL, Python 3.10+
- Frontend: React 19, TypeScript, Vite 7, MUI 7
- Package Manager: uv (Python), yarn (Node.js)
- Linting: Ruff (Python)
The project runs on Google Cloud Platform (europe-west4 region):
| Service | Component | Purpose |
|---|---|---|
| Cloud Run | pyplots-backend | FastAPI API (auto-scaling, serverless) |
| Cloud Run | pyplots-frontend | React SPA served via nginx |
| Cloud SQL | PostgreSQL 18 | Database (Unix socket in production) |
| Cloud Storage | pyplots-images | Preview images (GCS bucket) |
| Secret Manager | DATABASE_URL | Secure credential storage |
| Cloud Build | Triggers | Auto-deploy on push to main |
Automatic deployment on push to main:
- `api/**`, `core/**`, `pyproject.toml` changes → Backend redeploy
- `app/**` changes → Frontend redeploy
Before completing any task:
- All tests pass: `uv run pytest tests/unit tests/integration`
- Code passes linting: `uv run ruff check .`
- Code is properly formatted: `uv run ruff format --check .`
- Type hints are included for all new functions
- Docstrings follow Google style for public functions
- Add integration tests for database-related changes (repositories, models)
Connection: PostgreSQL via SQLAlchemy async + asyncpg
Connection Modes (priority order):
- `DATABASE_URL` - Direct connection (local development)
- `INSTANCE_CONNECTION_NAME` - Cloud SQL Connector (Cloud Run)
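The priority order can be sketched as a small resolver. This is illustrative only; the function name and mode strings are assumptions, not the project's actual startup code:

```python
def resolve_connection_mode(env: dict[str, str]) -> str:
    """Pick the database connection mode by priority.

    DATABASE_URL (direct connection) wins over INSTANCE_CONNECTION_NAME
    (Cloud SQL Connector); with neither set, the API falls back to its
    limited filesystem mode.
    """
    if env.get("DATABASE_URL"):
        return "direct"
    if env.get("INSTANCE_CONNECTION_NAME"):
        return "cloud-sql-connector"
    return "filesystem-fallback"
```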
Database Types (core/database/types.py):
- Custom SQLAlchemy types support both PostgreSQL (production) and SQLite (tests)
- Production: Uses optimized PostgreSQL native types (ARRAY, JSONB, UUID)
- Tests: Uses compatible SQLite types (JSON, String) for in-memory testing
- StringArray: ARRAY(String) in PostgreSQL, JSON text in SQLite
- UniversalJSON: JSONB in PostgreSQL, JSON in SQLite
- UniversalUUID: UUID in PostgreSQL, String(36) in SQLite
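A dialect-aware type like `UniversalUUID` can be sketched with SQLAlchemy's `TypeDecorator` and `load_dialect_impl` hooks. This follows the standard backend-agnostic GUID recipe and is an approximation, not the exact code in `core/database/types.py`:

```python
import uuid

from sqlalchemy.dialects.postgresql import UUID as PG_UUID
from sqlalchemy.types import CHAR, TypeDecorator


class UniversalUUID(TypeDecorator):
    """Native UUID in PostgreSQL, 36-char string in SQLite."""

    impl = CHAR
    cache_ok = True

    def load_dialect_impl(self, dialect):
        # Pick the concrete column type per backend.
        if dialect.name == "postgresql":
            return dialect.type_descriptor(PG_UUID(as_uuid=True))
        return dialect.type_descriptor(CHAR(36))

    def process_bind_param(self, value, dialect):
        # PostgreSQL accepts UUID objects directly; SQLite stores strings.
        if value is None or dialect.name == "postgresql":
            return value
        return str(value)

    def process_result_value(self, value, dialect):
        # Normalize SQLite's string storage back to uuid.UUID.
        if value is None or isinstance(value, uuid.UUID):
            return value
        return uuid.UUID(value)
```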
Benefits:
- Integration tests run with SQLite in CI (no PostgreSQL needed)
- Production still uses optimized PostgreSQL types
- Same models/code work in both environments
Local Setup:

```bash
cp .env.example .env
# Edit .env with DATABASE_URL
uv run alembic upgrade head
```

Note: The API works without a database in limited mode (filesystem fallback for specs).
- Package Manager: `uv` may not be available in all environments. Fall back to `pip` if needed.
- External Dependencies: Some domains may be blocked. If encountering network issues, document them in the PR.
- Database: Cloud SQL requires proper credentials. Use `.env.example` as a template for local development.
- Documentation: Check the `docs/` directory for architecture, workflow, and development guides
- Examples: Look at existing plot implementations in `plots/` for patterns
- CLAUDE.md: Additional guidance specific to Claude AI agent workflows
- Issue Templates: Use GitHub issue templates for consistent problem reporting
- Import errors: Run `uv sync --all-extras` or `pip install -e ".[dev]"` to install dependencies
- Test failures: Ensure you're testing only changes related to your task, not pre-existing issues
- Linting failures: Run `uv run ruff check . --fix` to auto-fix common issues
- Type checking: Add type hints using standard Python typing (e.g., `list[str]`, `dict[str, int]`)