2 changes: 1 addition & 1 deletion .claude/commands/edit-workflow.md
@@ -15,4 +15,4 @@ uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

## Validation

Run `just actionlint` before committing to validate YAML syntax and structure.
Run `just lint-actions` before committing to validate YAML syntax and structure.
8 changes: 4 additions & 4 deletions .github/workflows/benchmark.yaml
@@ -44,7 +44,7 @@ jobs:
type: benchmark

- name: Run benchmark unit tests
run: just tests_benchmark_pytest_py3
run: just test-tests-bench
env:
EVM_BIN: ${{ steps.evm-builder.outputs.evm-bin }}

@@ -57,11 +57,11 @@ jobs:
matrix:
include:
- name: Benchmark Gas Values
recipe: benchmark-gas-values
recipe: bench-gas
- name: Fixed Opcode Count CLI
recipe: benchmark-fixed-opcode-cli
recipe: bench-opcode
- name: Fixed Opcode Count Config
recipe: benchmark-fixed-opcode-config
recipe: bench-opcode-config
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
6 changes: 3 additions & 3 deletions .github/workflows/gh-pages.yaml
@@ -35,16 +35,16 @@ jobs:

- name: Build Documentation
run: |
just spec-docs
touch .just/spec-docs/.nojekyll
just docs-spec
touch .just/docs-spec/.nojekyll
env:
DOCC_SKIP_DIFFS: ${{ case(github.event_name == 'push' && github.ref_name == github.event.repository.default_branch, '', '1') }}

- name: Upload Pages Artifact
id: artifact
uses: actions/upload-pages-artifact@7b1f4a764d45c48632c6b24a0339c27f5614fb0b # v4.0.0
with:
path: .just/spec-docs
path: .just/docs-spec

deploy:
needs: build
2 changes: 1 addition & 1 deletion .github/workflows/test-checklist.yaml
@@ -28,4 +28,4 @@ jobs:
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- uses: ./.github/actions/setup-uv
- name: Run checklist consistency test
run: just tests_pytest_py3 -k test_checklist_template_consistency
run: just test-tests -k test_checklist_template_consistency
2 changes: 1 addition & 1 deletion .github/workflows/test-docs.yaml
@@ -43,7 +43,7 @@
- name: Run changelog validation
run: just changelog

markdownlint:
lint-md:
name: Lint markdown files with markdownlint
runs-on: ubuntu-latest
steps:
24 changes: 12 additions & 12 deletions .github/workflows/test.yaml
@@ -115,7 +115,7 @@ jobs:
flags: unittests
token: ${{ secrets.CODECOV_TOKEN }}

pypy3:
fill-pypy:
runs-on: [self-hosted-ghr, size-xl-x64]
needs: static
steps:
@@ -126,13 +126,13 @@
with:
python-version: "pypy3.11"
- uses: ./.github/actions/setup-env
- name: Run pypy3 tests
run: just pypy3
- name: Run fill-pypy tests
run: just fill-pypy
env:
PYPY_GC_MAX: "2G"
PYPY_GC_MIN: "1G"

json_loader:
json-loader:
runs-on: [self-hosted-ghr, size-xl-x64]
needs: static
steps:
@@ -141,12 +141,12 @@
submodules: recursive
- uses: ./.github/actions/setup-uv
- uses: ./.github/actions/setup-env
- name: Fill and run json_loader tests
run: just json_loader
- name: Fill and run json-loader tests
run: just json-loader
env:
PYTEST_XDIST_AUTO_NUM_WORKERS: auto

tests_pytest_py3:
test-tests:
runs-on: [self-hosted-ghr, size-xl-x64]
needs: static
steps:
@@ -156,12 +156,12 @@
- uses: ./.github/actions/setup-uv
- uses: ./.github/actions/setup-env
- uses: ./.github/actions/build-evmone
- name: Run py3 tests
run: just tests_pytest_py3
- name: Run test-tests
run: just test-tests
env:
PYTEST_XDIST_AUTO_NUM_WORKERS: auto

tests_pytest_pypy3:
test-tests-pypy:
runs-on: [self-hosted-ghr, size-xl-x64]
needs: static
steps:
@@ -173,8 +173,8 @@
python-version: "pypy3.11"
- uses: ./.github/actions/setup-env
- uses: ./.github/actions/build-evmone
- name: Run pypy3 tests
run: just tests_pytest_pypy3
- name: Run test-tests-pypy
run: just test-tests-pypy
env:
PYPY_GC_MAX: "2G"
PYPY_GC_MIN: "1G"
26 changes: 25 additions & 1 deletion CONTRIBUTING.md
@@ -197,6 +197,30 @@ just typecheck --warn-unreachable
> export MYPY_CACHE_DIR=~/path/to/execution-specs/.mypy_cache
> ```

### Shell Auto-Completion

`just` provides tab-completion for recipe names and arguments. To enable it for your shell:

**Bash** (add to `~/.bashrc`):

```bash
eval "$(just --completions bash)"
```

**Zsh** (add to `~/.zshrc`):

```bash
eval "$(just --completions zsh)"
```

**Fish** (run once; fish auto-loads completions from `~/.config/fish/completions/`):

```bash
just --completions fish > ~/.config/fish/completions/just.fish
```

After restarting your shell (or sourcing the config), `just <Tab>` will complete recipe names.
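
For instance (a hypothetical session; the matches shown depend on the recipes defined in your Justfile):

```bash
# Pressing Tab after a partial recipe name lists the matching recipes,
# e.g. for the documentation recipes in this repository's Justfile:
$ just docs<Tab>
docs       docs-fast  docs-spec
```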

A trace of the EVM execution for any test case can be obtained by passing the `--evm-trace` argument to pytest.
Note: run the EVM trace on a small number of tests at a time; otherwise the log can become very large.
Below is an example.
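
As a minimal sketch (the test name is hypothetical, and `--evm-trace` is forwarded to pytest by the `fill` entry point):

```bash
# Fill a single test case with EVM tracing enabled.
# Keep the -k selection narrow: trace logs grow quickly.
uv run fill --evm-trace -k test_push0  # test name is illustrative only
```
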
@@ -285,4 +309,4 @@ The tool currently performs the following checks
- The order of the identifiers between each hardfork is consistent.
- Import statements follow the relevant import rules in modules.

The command to run the tool is `just ethereum-spec-lint` (or `uv run ethereum-spec-lint`).
The command to run the tool is `just lint-spec` (or `uv run ethereum-spec-lint`).
84 changes: 42 additions & 42 deletions Justfile
@@ -23,7 +23,7 @@ fix:

# Run all static checks (spellcheck, lint, format, mypy, ...)
[group('static analysis'), parallel]
static: typecheck ethereum-spec-lint spellcheck actionlint lock-check format-check lint
static: typecheck lint-spec spellcheck lint-actions lock-check format-check lint

# Check spelling
[group('static analysis')]
@@ -63,7 +63,7 @@ typecheck *args:

# Check EELS import isolation
[group('static analysis')]
ethereum-spec-lint:
lint-spec:
uv run ethereum-spec-lint

# Verify uv.lock is up to date
@@ -81,7 +81,7 @@ lock-check:

# Lint GitHub Actions workflows
[group('static analysis')]
actionlint:
lint-actions:
uv run actionlint -pyflakes pyflakes -shellcheck "shellcheck -S warning"

# Generate HTML coverage report from last just fill run
@@ -116,11 +116,11 @@ fill *args:

# Fill the base coverage consensus tests using EELS with PyPy
[group('integration tests')]
pypy3 *args:
@mkdir -p "{{ output_dir }}/pypy3/tmp" "{{ output_dir }}/pypy3/logs"
fill-pypy *args:
@mkdir -p "{{ output_dir }}/fill-pypy/tmp" "{{ output_dir }}/fill-pypy/logs"
uv run --python pypy3.11 fill \
--skip-index \
--output="{{ output_dir }}/pypy3/fixtures" \
--output="{{ output_dir }}/fill-pypy/fixtures" \
--no-html \
--tb=long \
-ra \
@@ -129,8 +129,8 @@ pypy3 *args:
-m "eels_base_coverage and not derived_test" \
-n auto --maxprocesses 7 \
--dist=loadgroup \
--basetemp="{{ output_dir }}/pypy3/tmp" \
--log-to "{{ output_dir }}/pypy3/logs" \
--basetemp="{{ output_dir }}/fill-pypy/tmp" \
--log-to "{{ output_dir }}/fill-pypy/logs" \
--clean \
--until "{{ latest_fork }}" \
--ignore=tests/ported_static \
@@ -141,103 +141,103 @@

# Fill the base coverage consensus tests and run EELS against the fixtures
[group('integration tests')]
json_loader *args:
@mkdir -p "{{ output_dir }}/json_loader/tmp"
json-loader *args:
@mkdir -p "{{ output_dir }}/json-loader/tmp"
uv run fill \
-m "eels_base_coverage and not derived_test" \
--until "{{ latest_fork }}" \
-n {{ xdist_workers }} --dist=loadgroup \
--skip-index \
--clean \
--ignore=tests/ported_static \
--output="{{ output_dir }}/json_loader/fixtures" \
--output="{{ output_dir }}/json-loader/fixtures" \
--cov-config=pyproject.toml \
--cov=ethereum \
--cov-fail-under=85
uv run pytest \
-m "not slow" \
-n auto --maxprocesses 6 --dist=loadfile \
--basetemp="{{ output_dir }}/json_loader/tmp" \
--basetemp="{{ output_dir }}/json-loader/tmp" \
"$@" \
tests/json_loader \
"{{ output_dir }}/json_loader/fixtures"
"{{ output_dir }}/json-loader/fixtures"

# --- Unit Tests ---

# Run the testing package unit tests (with Python)
[group('unit tests')]
tests_pytest_py3 *args:
@mkdir -p "{{ output_dir }}/tests_pytest_py3/tmp"
test-tests *args:
@mkdir -p "{{ output_dir }}/test-tests/tmp"
cd packages/testing && uv run pytest \
-n {{ xdist_workers }} \
--basetemp="{{ output_dir }}/tests_pytest_py3/tmp" \
--basetemp="{{ output_dir }}/test-tests/tmp" \
--ignore=src/execution_testing/cli/pytest_commands/plugins/filler/tests/test_benchmarking.py \
"$@" \
src

# Run the testing package unit tests (with PyPy)
[group('unit tests')]
tests_pytest_pypy3 *args:
@mkdir -p "{{ output_dir }}/tests_pytest_pypy3/tmp"
test-tests-pypy *args:
@mkdir -p "{{ output_dir }}/test-tests-pypy/tmp"
cd packages/testing && uv run --python pypy3.11 pytest \
-n auto --maxprocesses 6 \
--basetemp="{{ output_dir }}/tests_pytest_pypy3/tmp" \
--basetemp="{{ output_dir }}/test-tests-pypy/tmp" \
--ignore=src/execution_testing/cli/pytest_commands/plugins/filler/tests/test_benchmarking.py \
"$@" \
src

# Run benchmark framework unit tests (with Python)
[group('unit tests')]
[group('benchmark tests')]
tests_benchmark_pytest_py3 *args:
@mkdir -p "{{ output_dir }}/tests_benchmark_pytest_py3/tmp"
test-tests-bench *args:
@mkdir -p "{{ output_dir }}/test-tests-bench/tmp"
uv run pytest \
--basetemp="{{ output_dir }}/tests_benchmark_pytest_py3/tmp" \
--basetemp="{{ output_dir }}/test-tests-bench/tmp" \
"$@" \
packages/testing/src/execution_testing/cli/pytest_commands/plugins/filler/tests/test_benchmarking.py

# --- Benchmarks ---

# Fill benchmark tests with --gas-benchmark-values
[group('benchmark tests')]
benchmark-gas-values *args:
@mkdir -p "{{ output_dir }}/benchmark-gas-values/tmp" "{{ output_dir }}/benchmark-gas-values/logs"
bench-gas *args:
@mkdir -p "{{ output_dir }}/bench-gas/tmp" "{{ output_dir }}/bench-gas/logs"
uv run fill \
--evm-bin="{{ evm_bin }}" \
--gas-benchmark-values 1 \
--generate-pre-alloc-groups \
--fork Osaka \
-m "not slow" \
-n auto --maxprocesses 10 --dist=loadgroup \
--output="{{ output_dir }}/benchmark-gas-values/fixtures" \
--basetemp="{{ output_dir }}/benchmark-gas-values/tmp" \
--log-to "{{ output_dir }}/benchmark-gas-values/logs" \
--output="{{ output_dir }}/bench-gas/fixtures" \
--basetemp="{{ output_dir }}/bench-gas/tmp" \
--log-to "{{ output_dir }}/bench-gas/logs" \
--clean \
"$@" \
tests/benchmark/compute

# Fill benchmark tests with --fixed-opcode-count 1
[group('benchmark tests')]
benchmark-fixed-opcode-cli *args:
@mkdir -p "{{ output_dir }}/benchmark-fixed-opcode-cli/tmp" "{{ output_dir }}/benchmark-fixed-opcode-cli/logs"
bench-opcode *args:
@mkdir -p "{{ output_dir }}/bench-opcode/tmp" "{{ output_dir }}/bench-opcode/logs"
uv run fill \
--evm-bin="{{ evm_bin }}" \
--fixed-opcode-count 1 \
--fork Osaka \
-m repricing \
-n auto --maxprocesses 10 --dist=loadgroup \
-k "not test_alt_bn128 and not test_bls12_381 and not test_modexp" \
--output="{{ output_dir }}/benchmark-fixed-opcode-cli/fixtures" \
--basetemp="{{ output_dir }}/benchmark-fixed-opcode-cli/tmp" \
--log-to "{{ output_dir }}/benchmark-fixed-opcode-cli/logs" \
--output="{{ output_dir }}/bench-opcode/fixtures" \
--basetemp="{{ output_dir }}/bench-opcode/tmp" \
--log-to "{{ output_dir }}/bench-opcode/logs" \
--clean \
"$@" \
tests/benchmark/compute

# Run benchmark_parser, then fill benchmark tests using its config
[group('benchmark tests')]
benchmark-fixed-opcode-config *args:
@mkdir -p "{{ output_dir }}/benchmark-fixed-opcode-config/tmp" "{{ output_dir }}/benchmark-fixed-opcode-config/logs"
bench-opcode-config *args:
@mkdir -p "{{ output_dir }}/bench-opcode-config/tmp" "{{ output_dir }}/bench-opcode-config/logs"
uv run benchmark_parser
uv run fill \
--evm-bin="{{ evm_bin }}" \
@@ -246,9 +246,9 @@ benchmark-fixed-opcode-config *args:
-m repricing \
-n auto --maxprocesses 10 --dist=loadgroup \
-k "not test_alt_bn128 and not test_bls12_381 and not test_modexp" \
--output="{{ output_dir }}/benchmark-fixed-opcode-config/fixtures" \
--basetemp="{{ output_dir }}/benchmark-fixed-opcode-config/tmp" \
--log-to "{{ output_dir }}/benchmark-fixed-opcode-config/logs" \
--output="{{ output_dir }}/bench-opcode-config/fixtures" \
--basetemp="{{ output_dir }}/bench-opcode-config/tmp" \
--log-to "{{ output_dir }}/bench-opcode-config/logs" \
--clean \
"$@" \
tests/benchmark/compute
@@ -257,9 +257,9 @@

# Generate documentation for EELS using docc
[group('docs')]
spec-docs:
uv run docc --output "{{ output_dir }}/spec-docs"
uv run python -c 'import pathlib; print("documentation available under file://{0}".format(pathlib.Path(r"{{ output_dir }}") / "spec-docs" / "index.html"))'
docs-spec:
uv run docc --output "{{ output_dir }}/docs-spec"
uv run python -c 'import pathlib; print("documentation available under file://{0}".format(pathlib.Path(r"{{ output_dir }}") / "docs-spec" / "index.html"))'

# Build HTML site documentation with mkdocs
[group('docs')]
Expand All @@ -268,7 +268,7 @@ docs:

# Build HTML site documentation with mkdocs (skip test case reference)
[group('docs')]
fast-docs:
docs-fast:
FAST_DOCS=True GEN_TEST_DOC_VERSION="local" DYLD_FALLBACK_LIBRARY_PATH="/opt/homebrew/lib" uv run mkdocs build --strict -d "{{ output_dir }}/docs/site"

# Validate docs/CHANGELOG.md entries
Expand All @@ -278,7 +278,7 @@ changelog:

# Lint markdown files (markdownlint)
[group('docs')]
markdownlint:
lint-md:
uv run markdownlintcli2_soft_fail

# --- Housekeeping ---