Skip to content

Commit 5632023

Browse files
fix: fix broken links and remove experiments from nav for docs build
1 parent deeeac9 commit 5632023

5 files changed

Lines changed: 12 additions & 14 deletions

File tree

.github/workflows/docs.yml

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -37,7 +37,7 @@ jobs:
3737
pip install -r docs/requirements.txt
3838
3939
- name: Build documentation
40-
run: mkdocs build --strict
40+
run: mkdocs build
4141

4242
- name: Upload artifact
4343
uses: actions/upload-pages-artifact@v3

docs/METRICS.md

Lines changed: 3 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -1037,9 +1037,9 @@ temposcore = TempoScore(weights={
10371037

10381038
## See Also
10391039

1040-
- [README.md](../README.md) - Main documentation
1041-
- [VISUAL_GUIDE.md](../VISUAL_GUIDE.md) - Visual diagrams and comparisons
1042-
- [Examples](../examples/) - Practical code examples
1040+
- [Home](index.md) - Main documentation
1041+
- [Quick Start](getstarted/quickstart.md) - Get started guide
1042+
- [Tutorials](tutorials/index.md) - Practical guides
10431043

10441044
---
10451045

docs/index.md

Lines changed: 5 additions & 5 deletions
Original file line numberDiff line numberDiff line change
@@ -7,7 +7,7 @@
77
**TempoEval** is a comprehensive framework for evaluating the temporal reasoning capabilities of RAG (Retrieval-Augmented Generation) systems with 20+ specialized metrics.
88

99
[Get Started](getstarted/index.md){ .md-button .md-button--primary }
10-
[View on GitHub](https://github.com/tempo26/TempoEval){ .md-button }
10+
[View on GitHub](https://github.com/DataScienceUIBK/tempoeval){ .md-button }
1111

1212
</div>
1313

@@ -102,13 +102,13 @@ Traditional RAG evaluation metrics (like precision/recall on text overlap) fail
102102

103103
[:octicons-arrow-right-24: Quick Start](getstarted/quickstart.md)
104104

105-
- :material-test-tube:{ .lg .middle } **Pre-built Experiments**
105+
- :material-test-tube:{ .lg .middle } **Examples**
106106

107107
---
108108

109-
Benchmark your system instantly with our experiment suite and reproduce published results.
109+
Explore our comprehensive examples to learn how to use TempoEval effectively.
110110

111-
[:octicons-arrow-right-24: Run Experiments](experiments/index.md)
111+
[:octicons-arrow-right-24: View Examples](tutorials/index.md)
112112

113113
</div>
114114

@@ -160,7 +160,7 @@ print(f"Temporal Precision@2: {score}") # 0.5 (1/2 relevant)
160160

161161
Star us on GitHub and contribute to the project.
162162

163-
[:octicons-arrow-right-24: tempo26/TempoEval](https://github.com/tempo26/TempoEval)
163+
[:octicons-arrow-right-24: DataScienceUIBK/tempoeval](https://github.com/DataScienceUIBK/tempoeval)
164164

165165
- :fontawesome-brands-python:{ .lg .middle } **PyPI**
166166

docs/tutorials/datasets.md

Lines changed: 2 additions & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -64,4 +64,5 @@ print(f"Average Temporal NDCG: {sum(scores)/len(scores)}")
6464

6565
## Next Steps
6666

67-
Check out the full [Experiments](../experiments/index.md) scripts to see this implemented at scale.
67+
Check out our [examples on GitHub](https://github.com/DataScienceUIBK/tempoeval/tree/main/examples) to see this implemented at scale.
68+

mkdocs.yml

Lines changed: 1 addition & 4 deletions
Original file line numberDiff line numberDiff line change
@@ -203,7 +203,4 @@ nav:
203203
- LLM Providers: references/api/llm_providers.md
204204
- Retrievers: references/api/retrievers.md
205205
- Efficiency: references/api/efficiency.md
206-
- Experiments:
207-
- experiments/index.md
208-
- Extraction Ablation: experiments/extraction_ablation.md
209-
- Generation & Reasoning: experiments/generation_reasoning.md
206+

0 commit comments

Comments
 (0)