# EvalScope Accuracy Evaluation Guide

This test case is built on **EvalScope (v1.5.2)** to provide automated evaluation, making it easy to assess large language model performance on mainstream academic benchmarks and long-context retrieval tasks.

## Supported Evaluation Types

| Type | Description | Example Datasets |
|------|-------------|------------------|
| **Mainstream Benchmark Evaluation** | Standard question-answering tasks covering mathematics, reasoning, knowledge, coding, and more | `aime24`, `aime25`, `aime26`, `gsm8k`, `longbench_v2`, `ceval`, `cmmlu`, `humaneval`, `mmlu`, `mmlu_pro`, etc. |
| **Needle In A Haystack** | Evaluates the model's ability to locate specific information within extremely long contexts | - |

> **Note**: Apart from the Needle In A Haystack test, only simple question-answering datasets are currently supported. Datasets that require additional runtime environments or judge models have not yet been adapted.

---

## Quick Start

### 1. Environment Setup

- It is recommended to install dependencies in a virtual environment, for example:
  ```bash
  cd test
  pip install -r requirements.txt
  ```
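  A minimal sketch using Python's built-in `venv` module (the environment name `.venv` is an arbitrary choice):
  ```bash
  # Create and activate an isolated environment before installing (name is arbitrary)
  python -m venv .venv
  source .venv/bin/activate
  ```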

### 2. Dataset Preparation

#### Online Environment (With Internet Access)
- The framework automatically downloads the required datasets from ModelScope. **No manual steps are needed.**

#### Offline Environment (No Internet Access)
- Download the datasets in advance into a single root directory.
- Make sure the subdirectory names exactly match the identifiers in the task list.

**Method 1: Clone Individual Datasets**
```bash
git clone https://www.modelscope.cn/datasets/evalscope/aime26.git
git clone https://www.modelscope.cn/datasets/ZhipuAI/LongBench-v2.git  # Note: rename the cloned directory to `longbench_v2`
git clone https://www.modelscope.cn/datasets/AI-ModelScope/Needle-in-a-Haystack-Corpus.git
```

**Method 2: Use the Pre-Packaged Dataset Archive**
- Visit the [ModelScope Dataset Repository](https://modelscope.cn/datasets/keriko/UCM_tools/files/dataset), download the complete archive, and extract it to the target path.
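
With either method, the final layout under the dataset root should mirror the task identifiers. A hypothetical sketch (the root path and the needle-corpus directory name are illustrative, not verified values):

```bash
# Expected layout (hypothetical root; subdirectory names must match the task list)
ls /mnt/data/evalscope/dataset
# aime26  longbench_v2  Needle-in-a-Haystack-Corpus
```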

---

## Configuration

### General Parameters

| Environment Variable | Default | Description |
|----------------------|---------|-------------|
| `SCOPE_DATASET_ROOT` | (empty) | Root directory where datasets are stored; may be left empty in online environments |
| `SCOPE_TEST_LIST` | `aime24,gsm8k` | Comma-separated list of datasets to evaluate |

### Needle In A Haystack Specific Parameters

| Environment Variable | Default | Description |
|----------------------|---------|-------------|
| `SCOPE_NEEDLE_MIN` | `1000` | Minimum context length (in tokens) |
| `SCOPE_NEEDLE_MAX` | `32000` | Maximum context length (in tokens) |
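
Both groups are plain environment variables, so they can be exported before invoking the tests. A hedged example (values are illustrative only, using the variable names from the tables above):

```bash
# Illustrative overrides; adjust paths and dataset names to your setup
export SCOPE_DATASET_ROOT=/mnt/data/evalscope/dataset
export SCOPE_TEST_LIST=aime24,gsm8k
export SCOPE_NEEDLE_MIN=1000
export SCOPE_NEEDLE_MAX=32000
```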

### Local Manual Testing
Alternatively, modify the following constants directly in `test_evalscope.py`:
```python
DEFAULT_DATASET_ROOT = "/mnt/data/evalscope/dataset"  # Dataset path; may be left empty in online environments
DEFAULT_TASK_LIST = ["aime24", "gsm8k"]               # Datasets to evaluate
```

---

## Running Tests

### Single Task Execution

```bash
cd test

# Mainstream benchmark evaluation
pytest suites/E2E/test_evalscope.py::test_eval_accuracy

# Needle In A Haystack evaluation
pytest suites/E2E/test_evalscope.py::test_needle_task
```
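
The environment variables from the Configuration section can also be set inline for a one-off run; a sketch (the narrower context range is an arbitrary example):

```bash
# One-off needle run with a smaller context sweep (values are examples)
SCOPE_NEEDLE_MIN=1000 SCOPE_NEEDLE_MAX=8000 \
  pytest suites/E2E/test_evalscope.py::test_needle_task
```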

### Batch Execution by Feature Tag

```bash
cd test
pytest --feature=evalscope
```

---

## Output and Results

### 1. EvalScope Native Output
All run records are saved under `test/results/evalscope_outputs/`, organized into timestamped subdirectories, and include:
- Evaluation configuration files
- Detailed request/response logs
- Aggregated metrics files (JSON)
- Visualization reports (HTML)

For details on the output format, see the [EvalScope Official Documentation](https://evalscope.readthedocs.io/).
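
For a quick look at the most recent run, the timestamped naming makes the newest subdirectory easy to find; a sketch assuming that layout:

```bash
# List the newest timestamped run directory (assumes the layout described above)
ls -td test/results/evalscope_outputs/*/ | head -n 1
```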

### 2. Database Persistence
Evaluation results are automatically parsed and stored in the configured database backend for centralized querying and comparison.

The following files are generated in the `test/results/` directory:
- `eval_scope.jsonl`
- `eval_scope.csv`
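
Assuming the usual JSONL convention of one JSON object per line, a persisted record can be pretty-printed for a quick sanity check:

```bash
# Pretty-print the first persisted record (assumes standard JSONL)
head -n 1 test/results/eval_scope.jsonl | python -m json.tool
```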

To use a different database backend, modify the `results` section of the configuration (PostgreSQL, MongoDB, etc. are supported):

```yaml
results:
  localFile:
    path: "./results"
  # postgresql:
  #   host: "localhost"
  #   ...
  # mongodb:
  #   host: "127.0.0.1"
  #   ...
```

---

## Notes

1. Dataset directory names must strictly match the framework's task identifiers, which may differ from the ModelScope repository names (e.g., `longbench_v2` rather than `LongBench-v2`); remember to rename directories in offline mode.
2. If evaluating through a remote API, make sure the `llm_connection` configuration is correct and the service is reachable (example: `http://127.0.0.1:8080/`).
3. The Needle In A Haystack task uses the **model under test itself** as the judge model. Make sure the model has basic instruction-following ability, and set the model path as `tokenizer_path` in `llm_connection`.

## Test Process

*(Process diagram omitted.)*
## Test Result Example
```json
{
    "aime25": {
        "pretty_name": "AIME-2025",
        "model": "Qwen3-32B",
        "score": 0.0,
        "metrics": [{
            "name": "mean_acc",
            "score": 0.0,
            "macro_score": 0.0,
            "num": 30,
            "categories": [{
                "name": ["default"],
                "score": 0.0,
                "macro_score": 0.0,
                "num": 30,
                "subsets": [{
                    "name": "default",
                    "score": 0.0,
                    "num": 30
                }]
            }]
        }],
        "analysis": "N/A"
    },
    "aime25.score": 0.0,
    "model_name": "Qwen3-32B",
    "test_id": "ad9ba909-1646-47b3-89d6-9240c6497593",
    "test_items": "pytestall_cases",
    "create_at": "2026-04-09 17:00:05.910252",
    "extra_info": ""
}
```

## HTML Test Report

*(Report screenshot omitted.)*

## Needle In A Haystack Heatmap

*(Heatmap screenshot omitted.)*

*Note: The screenshots above were generated using a mock model for testing, hence all scores are zero.*