1 parent 48f2b67 commit 9a10e45
1 file changed
baselines/TimeMoE/README.md
@@ -2,7 +2,7 @@
 
 ```bash
 # Activate your conda environment if applicable
-wget https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.5cxx11abiFALSE-cp311-cp311-linux_x86_64.whl
+wget https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.5cxx11abiFALSE-cp311-cp311-linux_x86_64.whl --no-check-certificate
 pip install flash_attn-2.7.4.post1+cu12torch2.5cxx11abiFALSE-cp311-cp311-linux_x86_64.whl
 ```
 
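After installing the wheel, a quick sanity check can confirm the import works and that the local build matches what the pinned wheel targets (a minimal sketch; the expected version strings simply mirror the wheel name, which is built for torch 2.5, CUDA 12, and Python 3.11):

```python
# Minimal sanity check: the pinned flash-attn wheel imports cleanly and the
# local torch/CUDA build matches what the wheel was compiled against.
import flash_attn
import torch

print(flash_attn.__version__)                 # expect 2.7.4.post1
print(torch.__version__, torch.version.cuda)  # wheel targets torch 2.5.x on CUDA 12
```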
@@ -20,4 +20,4 @@ torch._dynamo.config.accumulated_cache_size_limit = 256
 torch._dynamo.config.cache_size_limit = 256 # Increase if necessary
 torch._dynamo.config.capture_scalar_outputs = True
 torch._dynamo.config.optimize_ddp = False
-```
+```
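For context, a minimal sketch of where these `_dynamo` settings sit relative to loading the model; the checkpoint id, `from_pretrained` kwargs, and dtype choice below are assumptions for illustration, not part of the diff:

```python
# Apply the README's _dynamo settings before the first compiled forward pass.
import torch
import torch._dynamo
from transformers import AutoModelForCausalLM

torch._dynamo.config.accumulated_cache_size_limit = 256
torch._dynamo.config.cache_size_limit = 256  # Increase if necessary
torch._dynamo.config.capture_scalar_outputs = True
torch._dynamo.config.optimize_ddp = False

# Assumed checkpoint and kwargs; adjust to the checkpoint you actually benchmark.
model = AutoModelForCausalLM.from_pretrained(
    "Maple728/TimeMoE-50M",                   # assumed HF checkpoint id
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,               # flash-attn kernels need fp16/bf16
    attn_implementation="flash_attention_2",  # uses the wheel installed above
)
```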