---
sidebar_position: 10
---
The framework generates self-contained HTML dashboard reports for analyzing backtest results. Reports work for both single and multi-strategy backtests — no external dependencies required.
:::tip Working with hundreds or thousands of backtests?
A BacktestReport inlines every backtest into a single HTML file, which becomes too heavy for a browser past a few dozen backtests. Use the Backtest Storage Layer to filter your collection down (in SQLite, sub-100 ms) and render reports only over the winners.
:::
```python
from investing_algorithm_framework import BacktestReport

# Single strategy report
report = BacktestReport(backtest)
report.show()  # Opens in browser (or renders inline in Jupyter)
```

After running a backtest, pass the result directly:
```python
backtest = app.run_backtest(
    backtest_date_range=backtest_range,
    initial_amount=1000
)

report = BacktestReport(backtest)
report.show(browser=True)
```

Compare strategies side by side in a single dashboard:
```python
backtest_a = app.run_backtest(...)
backtest_b = app.run_backtest(...)

report = BacktestReport(backtests=[backtest_a, backtest_b])
report.show()
```

This generates a multi-strategy comparison dashboard with:
- Strategy ranking tables (Key Metrics, Trading Activity)
- Return Scenarios projections (Good/Average/Bad/Very Bad Year)
- Normalized equity curves overlay
- Per-strategy detail pages with Summary, Runs, and Performance tabs
- Compare mode with monthly return distribution (Rows/Heatmap × Returns/Growth toggles)
Load previously saved backtests from a directory:
```python
report = BacktestReport.open(directory_path="./my_backtests")
report.show()
```

The `open()` method recursively finds all valid backtest directories (those containing an `algorithm_id.json` file and a `runs/` folder) and any `.iafbt` bundle files, and loads them all into a single report.
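Under the hood this is a recursive filesystem walk. The sketch below is an illustrative reconstruction of that discovery logic using only `pathlib`; it is not the framework's actual implementation:

```python
from pathlib import Path

def discover_backtests(root: str) -> list[Path]:
    """Find legacy backtest directories and .iafbt bundles under root."""
    root_path = Path(root)
    found = []
    # A legacy backtest directory contains algorithm_id.json and a runs/ folder
    for candidate in root_path.rglob("algorithm_id.json"):
        if (candidate.parent / "runs").is_dir():
            found.append(candidate.parent)
    # Optimized bundles are single .iafbt files
    found.extend(root_path.rglob("*.iafbt"))
    return sorted(found)
```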
:::tip Optimized .iafbt bundle format
Backtests are saved by default in the framework's custom .iafbt bundle format — a single binary file per backtest combining zstd compression and MessagePack encoding. It is purpose-built for backtest reports: ~21× smaller and ~27× fewer files than the legacy directory format, and BacktestReport.open() loads it ~3× faster. The legacy directory format is still fully supported for backwards compatibility, and you can mix both in the same folder.
For very large batches, opt into parallel loading:
```python
report = BacktestReport.open(directory_path="./my_backtests", workers=4)
```
:::
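The size win comes from packing everything into one compressed binary stream instead of many small text files. Since neither zstd nor MessagePack ships with the Python standard library, the sketch below uses `zlib` and JSON as stand-ins purely to illustrate the single-file, compressed-binary idea; the real `.iafbt` layout is framework-internal:

```python
import json
import zlib

# A toy "backtest" payload: many runs of repetitive numeric series
payload = {
    "algorithm_id": "strat_a",
    "runs": [
        {"run": i, "equity": [1000 + j for j in range(100)]}
        for i in range(20)
    ],
}

raw = json.dumps(payload).encode()    # roughly what a directory of JSON files holds
bundle = zlib.compress(raw, level=9)  # one compressed binary blob

print(len(raw), len(bundle))          # the bundle is many times smaller
restored = json.loads(zlib.decompress(bundle))  # lossless round-trip
```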
You can also combine disk and in-memory backtests:

```python
report = BacktestReport.open(
    backtests=[my_new_backtest],
    directory_path="./saved_backtests"
)
report.show()
```

When metric calculations are updated in a newer framework version, previously saved backtests may carry stale metrics. Use `recalculate_backtests_in_directory` to recompute all per-run and summary metrics from the raw portfolio snapshots and trades — directly on disk, without ever loading the full set of backtests into memory:
```python
from investing_algorithm_framework import recalculate_backtests_in_directory

# Rewrites every bundle in ./my_backtests in place
recalculate_backtests_in_directory("./my_backtests")
```

Each backtest is loaded, recalculated, and written back inside a worker process, so the parent process's memory footprint stays flat regardless of how many backtests are processed. This is the recommended approach for any non-trivial batch (hundreds to thousands of backtests with portfolio snapshots and trades can otherwise consume tens of GB).
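The memory-safety claim rests on a standard multiprocessing pattern: ship only a path to each worker, do the heavy load/recompute/write entirely inside the worker, and recycle workers after a fixed number of tasks. A generic sketch of that pattern (illustrative only, with a hypothetical `recalculate_one`; this is not the framework's code):

```python
from multiprocessing import Pool

def recalculate_one(bundle_path: str) -> str:
    # Runs inside a worker: load the bundle, recompute metrics, write it back.
    # Only the small path string (and this return value) cross the process
    # boundary; the heavy backtest object lives and dies in the worker.
    ...  # load -> recompute -> save
    return bundle_path

def recalculate_all(paths: list[str], workers: int = 4) -> int:
    # maxtasksperchild recycles each worker after N tasks so its resident
    # memory stays bounded even across thousands of bundles
    with Pool(processes=workers, maxtasksperchild=16) as pool:
        done = list(pool.imap_unordered(recalculate_one, paths))
    return len(done)
```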
Write to a different directory instead of in place:
```python
recalculate_backtests_in_directory(
    src_dir="./my_backtests",
    dst_dir="./my_backtests_v2",
)
```

Use a custom risk-free rate (otherwise each backtest's stored rate is used):

```python
recalculate_backtests_in_directory("./my_backtests", risk_free_rate=0.04)
```

Limit which metrics are recomputed, or tune parallelism:
```python
recalculate_backtests_in_directory(
    "./my_backtests",
    metrics=["cagr", "sharpe_ratio", "max_drawdown", "win_rate"],
    workers=4,
    show_progress=True,
)
```

For each backtest, the function:
- Recomputes per-run `BacktestMetrics` from raw `portfolio_snapshots` and `trades`
- Regenerates `BacktestSummaryMetrics` by aggregating the updated per-run metrics
- Writes the updated bundle back to disk and (by default) refreshes `index.parquet`
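The aggregation step in the middle is a plain reduction over per-run values. A hypothetical sketch of what "summary from per-run metrics" means in practice (the field names and aggregation choices here are illustrative, not the framework's actual dataclasses):

```python
from statistics import mean

# Hypothetical per-run metrics, one dict per backtest run
per_run = [
    {"cagr": 0.12, "sharpe_ratio": 1.1, "max_drawdown": -0.18},
    {"cagr": 0.07, "sharpe_ratio": 0.8, "max_drawdown": -0.25},
    {"cagr": 0.15, "sharpe_ratio": 1.4, "max_drawdown": -0.10},
]

# Summary metrics aggregate across runs: averages for return/risk ratios,
# worst case for drawdown
summary = {
    "mean_cagr": mean(r["cagr"] for r in per_run),
    "mean_sharpe_ratio": mean(r["sharpe_ratio"] for r in per_run),
    "worst_max_drawdown": min(r["max_drawdown"] for r in per_run),
}
```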
:::warning Deprecated: recalculate_backtests(List[Backtest])
The in-memory variant `recalculate_backtests(backtests)` has been deprecated since 8.7.2 and will be removed in a future major release. Holding many backtests in the parent process is memory-unsafe: each `Backtest` carries portfolio snapshots, trades, and timeseries, so a list of a few thousand backtests can easily consume tens of GB before any work starts. Use `recalculate_backtests_in_directory(src_dir, ...)` instead.
:::
Save the report as a standalone HTML file you can share or open later:
```python
report = BacktestReport(backtests=[backtest_a, backtest_b])
report.save("strategy_comparison.html")
```

The output is a single `.html` file with all CSS, JavaScript, and data embedded — no server or internet connection is needed to view it.
`show()` automatically detects Jupyter notebooks and renders the dashboard inline:

```python
# In a Jupyter notebook cell:
report = BacktestReport.open(directory_path="./backtests")
report.show()              # Renders inline in the notebook
report.show(browser=True)  # Also opens in the browser
```

- KPI cards: Best CAGR, best Sharpe, lowest max drawdown (with dual values when a window is selected)
- Window Coverage: Strategy × window matrix showing data coverage
- Key Metrics table: Sortable ranking with CAGR, Sharpe, Sortino, Calmar, Max DD, Volatility, Recovery Factor, Net Gain %
- Trading Activity table: Profit Factor, Win Rate, Trades/yr, Trades/mo, Trades/wk, # Trades, Avg Return, Avg Duration
- Return Scenarios: Good/Average/Bad/Very Bad Year projections based on CAGR ± volatility
- Equity curves: Normalized percentage growth overlay
- Collapsible cards: All chart sections can be collapsed/expanded
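The Return Scenarios card boils down to spreading expected volatility around the compound growth rate. As a rough illustration — the multipliers below are an assumption made for this sketch, not the framework's documented formula — treating a Good Year as CAGR plus one unit of annual volatility and a Very Bad Year as CAGR minus two:

```python
def return_scenarios(cagr: float, volatility: float) -> dict[str, float]:
    """Project yearly return scenarios as CAGR +/- multiples of volatility.

    The exact multipliers here are an illustrative assumption.
    """
    return {
        "Good Year": cagr + volatility,
        "Average Year": cagr,
        "Bad Year": cagr - volatility,
        "Very Bad Year": cagr - 2 * volatility,
    }

scenarios = return_scenarios(cagr=0.10, volatility=0.15)
```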
Each strategy gets a dedicated page with three tabs:
| Tab | Contents |
|---|---|
| Summary | Full KPI grid (CAGR, Sharpe, Sortino, Calmar, Max DD, Profit Factor, Win Rate, Volatility, Recovery Factor, etc.) |
| Runs | Backtest run comparison table, equity overlay across runs |
| Performance | Monthly returns heatmap, yearly returns bar chart, return distribution |
Use the run selector pills to switch between summary view and individual backtest runs.
Open the strategy selection modal to pick strategies for comparison. You can set a challenger strategy for highlighting. The compare page includes:
- Key Metrics and Trading Activity ranking tables
- Return Scenarios projections
- Monthly Returns with four view modes (Returns/Growth × Rows/Heatmap), plus a year filter
- Side-by-side equity curves and drawdown overlays
- Metric bar charts (CAGR, Sharpe, Sortino, Calmar, Max DD, Win Rate, Profit Factor)
- Return distribution histograms and correlation matrix
- Rolling Sharpe ratio chart
- Yearly returns bar charts
The page title bar with the window selector stays visible as you scroll.
Toggle between dark and light mode using the sun icon in the top-right corner.
```python
from datetime import datetime, timezone

from investing_algorithm_framework import (
    create_app, BacktestDateRange, BacktestReport,
    recalculate_backtests_in_directory,
)

app = create_app()
# ... configure strategies, market, portfolio ...

# Run backtests across multiple time periods
date_ranges = [
    BacktestDateRange(
        start_date=datetime(2022, 1, 1, tzinfo=timezone.utc),
        end_date=datetime(2022, 12, 31, tzinfo=timezone.utc),
        name="2022"
    ),
    BacktestDateRange(
        start_date=datetime(2023, 1, 1, tzinfo=timezone.utc),
        end_date=datetime(2023, 12, 31, tzinfo=timezone.utc),
        name="2023"
    ),
]

app.run_vector_backtests(
    strategies=my_strategies,
    backtest_date_ranges=date_ranges,
    initial_amount=1000,
    backtest_storage_directory="./backtests"
)

# Optional: recalculate metrics with updated calculations (memory-safe, on disk)
recalculate_backtests_in_directory("./backtests", risk_free_rate=0.04)

# Generate and save the comparison report
report = BacktestReport.open(directory_path="./backtests")
report.save("comparison_report.html")
report.show(browser=True)
```

| Method | Description |
|---|---|
| `BacktestReport(backtests=[...])` | Create a report from one or more `Backtest` objects |
| `BacktestReport(backtest)` | Create a report from a single `Backtest` (backward compatible) |
| `BacktestReport.open(directory_path=..., backtests=[...])` | Load backtests from disk and/or combine with in-memory backtests |
| `report.show(browser=False)` | Display the report. In Jupyter: renders inline; otherwise opens the browser. Set `browser=True` to force the browser. |
| `report.save(path)` | Save the report as a self-contained HTML file |
Stream-recalculates every backtest bundle on disk inside worker processes. The full Backtest never crosses the process boundary, so parent memory stays flat.
| Parameter | Type | Description |
|---|---|---|
| `src_dir` | `str \| Path` | Directory containing `.iafbt` bundles (and/or legacy backtest directories) |
| `dst_dir` | `str \| Path`, optional | Output directory. If `None`, bundles are rewritten in place inside `src_dir` |
| `risk_free_rate` | `float`, optional | Override risk-free rate. If `None`, uses each backtest's stored rate |
| `metrics` | `List[str]`, optional | Specific metrics to compute. If `None`, computes all default metrics |
| `workers` | `int`, optional | Number of parallel worker processes. Defaults to `min(8, cpu_count)`. Pass `1` for serial |
| `show_progress` | `bool` | Display a tqdm progress bar (default `False`) |
| `include_ohlcv` | `bool` | Re-emit attached OHLCV data with the bundle (default `False`) |
| `max_tasks_per_child` | `int`, optional | Recycle each worker after this many tasks so RSS stays bounded (default `16`) |
| `update_index` | `bool` | Rewrite `index.parquet` in the destination directory (default `True`) |
Returns: `int` — the number of backtests recalculated.
:::warning Deprecated since 8.7.2
Use recalculate_backtests_in_directory instead. This in-memory variant will be removed in a future major release.
:::
| Parameter | Type | Description |
|---|---|---|
| `backtests` | `List[Backtest]` | The backtests to recalculate (mutated in place and returned) |
| `risk_free_rate` | `float`, optional | Override risk-free rate. If `None`, uses each backtest's stored rate (falls back to `0.0`) |
| `metrics` | `List[str]`, optional | Specific metrics to compute. If `None`, computes all default metrics |

Returns: `List[Backtest]` — the same backtest objects with updated metrics.