212 changes: 212 additions & 0 deletions .github/workflows/perf-gate.yml
@@ -0,0 +1,212 @@
name: Scrolling Performance Gate

on:
  push:
    branches: [ main, develop ]
    paths-ignore:
      - '**.md'
  pull_request:
    branches: [ main, develop ]
    paths-ignore:
      - '**.md'

# Only run on Linux to keep results comparable across runs.
# Windows/macOS times vary too much to use as a performance baseline.
permissions:
  contents: read

jobs:
  perf-smoke-tests:
    name: Performance Smoke Tests (Linux)
    runs-on: ubuntu-latest
    timeout-minutes: 20
    env:
      DisableRealDriverIO: "1"

    steps:
      - name: Checkout code
        uses: actions/checkout@v6
        with:
          fetch-depth: 0 # GitVersion needs full history

      - name: Setup .NET
        uses: actions/setup-dotnet@v5
        with:
          dotnet-version: 10.x
          dotnet-quality: 'ga'

      - name: Restore dependencies
        run: dotnet restore

      - name: Build (Release)
        run: dotnet build --configuration Release --no-restore -property:NoWarn=0618%3B0612

      - name: Build Tests (Debug — smoke tests run in Debug to match CI unit tests)
        run: dotnet build Tests/PerformanceTests --no-restore -property:NoWarn=0618%3B0612

      - name: Run performance smoke tests (Layer 1 gate)
        id: smoke_tests
        run: |
          dotnet test \
            --project Tests/PerformanceTests \
            --no-build \
            --verbosity normal

      - name: Upload smoke test logs
        if: always()
        uses: actions/upload-artifact@v7
        with:
          name: perf-smoke-test-logs
          path: |
            TestResults/
          if-no-files-found: ignore
          retention-days: 7

  perf-benchmarks:
    name: Scrolling Benchmarks (Linux, ShortRun)
    runs-on: ubuntu-latest
    # Only run on pushes to develop/main, not on every PR (slow and not blocking).
    if: github.event_name == 'push'
    timeout-minutes: 30
    env:
      DisableRealDriverIO: "1"

    steps:
      - name: Checkout code
        uses: actions/checkout@v6
        with:
          fetch-depth: 0

      - name: Setup .NET
        uses: actions/setup-dotnet@v5
        with:
          dotnet-version: 10.x
          dotnet-quality: 'ga'

      - name: Restore dependencies
        run: dotnet restore

      - name: Build Release
        run: dotnet build --configuration Release --no-restore -property:NoWarn=0618%3B0612

      - name: Run scrolling benchmarks (ShortRun ≈ 30–60 s)
        id: run_benchmarks
        run: |
          dotnet run \
            --project Tests/Benchmarks \
            --configuration Release \
            --no-build \
            -- \
            --filter '*Scroll*' \
            --job short \
            --exporters json \
            --artifacts ./BenchmarkResults
        continue-on-error: true # Don't block the workflow; comparison step decides outcome

      - name: Compare results to baseline
        id: compare
        run: |
          python3 - << 'PYEOF'
          import json, os, sys, glob

          REGRESSION_FACTOR = 3.0   # Fail if any benchmark is >3× baseline
          IMPROVEMENT_FACTOR = 0.8  # Celebrate 🎉 if any benchmark drops below 0.8× baseline

          baseline_path = "Tests/Benchmarks/baseline.json"
          results_dir = "BenchmarkResults"

          # --- Load baseline ---
          try:
              with open(baseline_path) as f:
                  baseline_data = json.load(f)
              baseline = {
                  f"{b['type']}/{b['method']}/{b['params']}": b["meanNs"]
                  for b in baseline_data["benchmarks"]
              }
          except FileNotFoundError:
              print("::warning::baseline.json not found — skipping comparison")
              sys.exit(0)

          # --- Find BenchmarkDotNet JSON results ---
          result_files = glob.glob(f"{results_dir}/**/*.json", recursive=True)
          result_files = [f for f in result_files if "results" in f.lower() or "report" in f.lower()]
          if not result_files:
              print("::warning::No BenchmarkDotNet result files found — skipping comparison")
              sys.exit(0)

          # --- Parse results ---
          results = {}
          for fpath in result_files:
              try:
                  with open(fpath) as f:
                      data = json.load(f)
                  for bm in data.get("Benchmarks", []):
                      key = f"{bm['Type']}/{bm['Method']}/{bm.get('Parameters', '')}"
                      results[key] = bm.get("Statistics", {}).get("Mean", None)
              except Exception as e:
                  print(f"::warning::Could not parse {fpath}: {e}")

          # --- Build comparison table ---
          rows = []
          regressions = []
          improvements = []

          for key, base_ns in baseline.items():
              if base_ns <= 0:
                  continue
              cur_ns = results.get(key)
              if cur_ns is None:
                  rows.append(f"| {key} | {base_ns/1000:.1f} µs | — (not measured) | — |")
                  continue

              ratio = cur_ns / base_ns
              emoji = "✅"
              if ratio >= REGRESSION_FACTOR:
                  emoji = "❌"
                  regressions.append((key, base_ns, cur_ns, ratio))
              elif ratio <= IMPROVEMENT_FACTOR:
                  emoji = "🎉"
                  improvements.append((key, base_ns, cur_ns, ratio))
              rows.append(
                  f"| {key} | {base_ns/1000:.1f} µs | {cur_ns/1000:.1f} µs | {ratio:.2f}× {emoji} |"
              )

          # --- Write step summary ---
          summary = "## 📊 Scrolling Benchmark Comparison\n\n"
          summary += "| Benchmark | Baseline | Current | Ratio |\n"
          summary += "|-----------|----------|---------|-------|\n"
          summary += "\n".join(rows) + "\n\n"

          if improvements:
              summary += "### 🎉 Performance Improvements\n"
              for k, b, c, r in improvements:
                  summary += f"- **{k}**: {b/1000:.1f} µs → {c/1000:.1f} µs ({r:.2f}×)\n"
              summary += "\n"

          if regressions:
              summary += "### ❌ Regressions Detected\n"
              for k, b, c, r in regressions:
                  summary += f"- **{k}**: {b/1000:.1f} µs → {c/1000:.1f} µs ({r:.2f}×) — exceeds {REGRESSION_FACTOR}× threshold\n"
              summary += "\n"

          with open(os.environ.get("GITHUB_STEP_SUMMARY", "/dev/null"), "a") as f:
              f.write(summary)

          print(summary)

          if regressions:
              print(f"::error::Performance regressions detected: {len(regressions)} benchmark(s) exceeded {REGRESSION_FACTOR}× baseline")
              sys.exit(1)

          if improvements:
              print(f"Performance improvements detected: {len(improvements)} benchmark(s) improved!")
          PYEOF

      - name: Upload benchmark results
        if: always()
        uses: actions/upload-artifact@v7
        with:
          name: benchmark-results-${{ github.sha }}
          path: BenchmarkResults/
          if-no-files-found: ignore
          retention-days: 30
1 change: 1 addition & 0 deletions Terminal.Gui/Terminal.Gui.csproj
@@ -93,6 +93,7 @@
<InternalsVisibleTo Include="UnitTests.Legacy" />
<InternalsVisibleTo Include="UnitTests.NonParallelizable" />
<InternalsVisibleTo Include="UnitTests.Parallelizable" />
<InternalsVisibleTo Include="PerformanceTests" />
<InternalsVisibleTo Include="StressTests" />
<InternalsVisibleTo Include="IntegrationTests" />
<InternalsVisibleTo Include="TerminalGuiDesigner" />
15 changes: 15 additions & 0 deletions Terminal.sln
@@ -156,6 +156,8 @@ Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "InlineSelect", "Examples\In
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Terminal.Gui.Analyzers.Internal", "Terminal.Gui.Analyzers.Internal\Terminal.Gui.Analyzers.Internal.csproj", "{927CCC07-F00C-409C-BE42-458EB03DD4E8}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "PerformanceTests", "Tests\PerformanceTests\PerformanceTests.csproj", "{6E98BACA-E6B6-47A0-B45C-B624F8E74EC2}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
@@ -454,6 +456,18 @@ Global
{927CCC07-F00C-409C-BE42-458EB03DD4E8}.Release|x64.Build.0 = Release|Any CPU
{927CCC07-F00C-409C-BE42-458EB03DD4E8}.Release|x86.ActiveCfg = Release|Any CPU
{927CCC07-F00C-409C-BE42-458EB03DD4E8}.Release|x86.Build.0 = Release|Any CPU
{6E98BACA-E6B6-47A0-B45C-B624F8E74EC2}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{6E98BACA-E6B6-47A0-B45C-B624F8E74EC2}.Debug|Any CPU.Build.0 = Debug|Any CPU
{6E98BACA-E6B6-47A0-B45C-B624F8E74EC2}.Debug|x64.ActiveCfg = Debug|Any CPU
{6E98BACA-E6B6-47A0-B45C-B624F8E74EC2}.Debug|x64.Build.0 = Debug|Any CPU
{6E98BACA-E6B6-47A0-B45C-B624F8E74EC2}.Debug|x86.ActiveCfg = Debug|Any CPU
{6E98BACA-E6B6-47A0-B45C-B624F8E74EC2}.Debug|x86.Build.0 = Debug|Any CPU
{6E98BACA-E6B6-47A0-B45C-B624F8E74EC2}.Release|Any CPU.ActiveCfg = Release|Any CPU
{6E98BACA-E6B6-47A0-B45C-B624F8E74EC2}.Release|Any CPU.Build.0 = Release|Any CPU
{6E98BACA-E6B6-47A0-B45C-B624F8E74EC2}.Release|x64.ActiveCfg = Release|Any CPU
{6E98BACA-E6B6-47A0-B45C-B624F8E74EC2}.Release|x64.Build.0 = Release|Any CPU
{6E98BACA-E6B6-47A0-B45C-B624F8E74EC2}.Release|x86.ActiveCfg = Release|Any CPU
{6E98BACA-E6B6-47A0-B45C-B624F8E74EC2}.Release|x86.Build.0 = Release|Any CPU
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
@@ -484,6 +498,7 @@ Global
{90A42AE4-301D-4B05-8892-60BE5209C1B5} = {3DD033C0-E023-47BF-A808-9CCE30873C3E}
{70802F77-F259-44C6-9522-46FCE2FD754E} = {3DD033C0-E023-47BF-A808-9CCE30873C3E}
{3116547F-A8F2-4189-BC22-0B47C757164C} = {3DD033C0-E023-47BF-A808-9CCE30873C3E}
{6E98BACA-E6B6-47A0-B45C-B624F8E74EC2} = {A589126F-C71A-4FEE-B7EA-2DCA1ADF6A46}
EndGlobalSection
GlobalSection(ExtensibilityGlobals) = postSolution
SolutionGuid = {9F8F8A4D-7B8D-4C2A-AC5E-CD7117F74C03}
87 changes: 80 additions & 7 deletions Tests/Benchmarks/README.md
@@ -47,6 +47,9 @@ dotnet run -c Release
# Run only DimAuto benchmarks
dotnet run -c Release -- --filter '*DimAuto*'

# Run only Scrolling benchmarks
dotnet run -c Release -- --filter '*Scroll*'

# Run only TextFormatter benchmarks
dotnet run -c Release -- --filter '*TextFormatter*'
```
@@ -56,14 +59,17 @@ dotnet run -c Release -- --filter '*TextFormatter*'
```bash
# Run only the ComplexLayout benchmark
dotnet run -c Release -- --filter '*DimAutoBenchmark.ComplexLayout*'

# Run only TextView scrolling benchmarks
dotnet run -c Release -- --filter '*TextViewScroll*'
```

### Quick Run (Shorter but Less Accurate)

For faster iteration during development:

```bash
dotnet run -c Release -- --filter '*Scroll*' -j short
```

### List Available Benchmarks
@@ -80,12 +86,52 @@ The `DimAutoBenchmark` class tests layout performance with `Dim.Auto()` in vario
- **ComplexLayout**: 20 subviews with mixed Pos/Dim types (tests iteration overhead)
- **DeeplyNestedLayout**: 5 levels of nested views with DimAuto (tests recursive performance)

## Scrolling Benchmarks

The `Scrolling/` directory contains end-to-end scrolling benchmarks that cover the full input → layout → draw pipeline.

### BaselineScrollBenchmark

Minimal `View` subclass with a large `ContentSize` and no rendering logic. Isolates framework scrolling overhead from any view-specific work.

- **ViewportScroll_Down / Up**: Direct viewport manipulation (no key injection). Measures pure framework overhead.
- **ViewportScroll_PageDown**: Viewport-sized jump.
- Parameterized by `ContentHeight` = [1 000, 10 000]

### TextViewScrollBenchmark

`TextView` with read-only content of 1 000 / 5 000 lines of ~80-char text.

- **ScrollDown_OneStep / ScrollUp_OneStep**: Single `Key.CursorDown` / `Key.CursorUp` injection. With the caret at the viewport boundary, every keystroke triggers a viewport scroll.
- **PageDown_OneStep**: Single `Key.PageDown` injection.
- Parameterized by `Lines` = [1 000, 5 000]

### ListViewScrollBenchmark

`ListView` with 1 000 / 10 000 string items.

- **ScrollDown_OneStep / ScrollUp_OneStep / PageDown_OneStep**
- Parameterized by `Items` = [1 000, 10 000]

### TableViewScrollBenchmark

`TableView` with 100 / 1 000 rows × 10 columns.

- **ScrollDown_OneStep / ScrollUp_OneStep / PageDown_OneStep / ScrollRight_OneStep**
- Parameterized by `Rows` = [100, 1 000]

### Run all scrolling benchmarks

```bash
dotnet run --project Tests/Benchmarks -c Release -- --filter '*Scroll*'
```

## Adding New Benchmarks

1. Create a new class in an appropriate subdirectory (e.g., `Layout/`, `Text/`, `ViewBase/`, `Scrolling/`)
2. For BenchmarkDotNet: add `[MemoryDiagnoser]`, `[BenchmarkCategory]`, `[Benchmark(Baseline = true)]`
3. For memory profilers: add a `public static void Run()` method and route it from `Program.cs`
4. Use `[GlobalSetup]`/`[GlobalCleanup]` for application init/dispose

## Best Practices

@@ -96,10 +142,37 @@ The `DimAutoBenchmark` class tests layout performance with `Dim.Auto()` in vario

## Continuous Integration

### Layer 1: Performance Smoke Tests

Stopwatch-based xUnit tests in `Tests/UnitTestsParallelizable/Views/ScrollingPerformanceTests.cs` run on every CI build via the standard unit test workflow. They use generous thresholds (typically 50–100× the expected cost) to catch catastrophic O(n²) regressions without flaking on slow runners.

Each test:
- Creates a large document (10 000 rows / 100 000 items)
- Measures the cost of a **single viewport draw** after scrolling to the mid-point
- Asserts completion under a generous threshold (e.g., < 200 ms for TableView)

This detects whether a draw function accidentally iterates the entire document instead of just the visible viewport.
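
The invariant these tests protect can be sketched in a few lines of Python (illustrative only; the real tests are C# xUnit, and the cell-counting functions here are hypothetical stand-ins for a view's draw path):

```python
def draw_naive(lines, viewport_top, viewport_height):
    """Buggy draw: touches every line in the document (O(document))."""
    cells = 0
    for line in lines:
        cells += len(line)
    return cells


def draw_viewport(lines, viewport_top, viewport_height):
    """Correct draw: touches only the visible slice (O(viewport))."""
    cells = 0
    for line in lines[viewport_top:viewport_top + viewport_height]:
        cells += len(line)
    return cells


# 10 000-row document, ~80 chars per row, 25-row viewport at the mid-point
lines = ["x" * 80] * 10_000
naive = draw_naive(lines, viewport_top=5_000, viewport_height=25)
scoped = draw_viewport(lines, viewport_top=5_000, viewport_height=25)
print(naive // scoped)  # → 400: the naive draw does 400× the work
```

A generous wall-clock threshold on the real draw call is enough to catch the 400× blow-up without flaking when a CI runner is merely slow.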

### Layer 2: Baseline Comparison

The `.github/workflows/perf-gate.yml` workflow runs on every push to `main` / `develop` (not PRs) and:

1. Runs the `*Scroll*` benchmarks with `--job short` (~30–60 s total)
2. Compares results to `Tests/Benchmarks/baseline.json`
3. **Fails** if any benchmark exceeds **3×** the baseline
4. **Celebrates** 🎉 if any benchmark drops below **0.8×** the baseline
5. Posts a markdown comparison table to the GitHub step summary

### Updating the Baseline

After a deliberate performance change, re-run the focused scrolling benchmarks, then update `baseline.json`:

```bash
# Run ShortRun and export JSON results
dotnet run --project Tests/Benchmarks -c Release -- --filter '*Scroll*' -j short --exporters json

# Inspect the JSON output in BenchmarkDotNet.Artifacts/ and update baseline.json
```
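
The comparison step expects `baseline.json` to contain a `benchmarks` array of objects with `type`, `method`, `params`, and `meanNs` fields. A helper along these lines can regenerate it from the exported reports (a sketch: the glob pattern and output path are assumptions; the report keys match what the workflow's comparison script already parses):

```python
import glob
import json
import os

# Collect means from every exported BenchmarkDotNet report, reading the same
# keys the comparison step uses: Benchmarks[].Type/Method/Parameters and
# Statistics.Mean (nanoseconds).
benchmarks = []
for fpath in glob.glob("BenchmarkDotNet.Artifacts/**/*report*.json", recursive=True):
    with open(fpath) as f:
        data = json.load(f)
    for bm in data.get("Benchmarks", []):
        mean = bm.get("Statistics", {}).get("Mean")
        if mean is None:
            continue
        benchmarks.append({
            "type": bm["Type"],
            "method": bm["Method"],
            "params": bm.get("Parameters", ""),
            "meanNs": mean,
        })

# Write the baseline in the schema the perf-gate workflow reads.
os.makedirs("Tests/Benchmarks", exist_ok=True)
with open("Tests/Benchmarks/baseline.json", "w") as f:
    json.dump({"benchmarks": benchmarks}, f, indent=2)
```

Commit the updated `baseline.json` together with the performance change so the gate's 3× threshold is measured against the new normal.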

## Resources
