# Created with komac v2.15.0
# yaml-language-server: $schema=https://aka.ms/winget-manifest.defaultLocale.1.12.0.schema.json

PackageIdentifier: ggml.llamacpp
PackageVersion: b8334
PackageLocale: en-US
Publisher: ggml
PublisherUrl: https://github.com/ggml-org
PublisherSupportUrl: https://github.com/ggml-org/llama.cpp/issues
PackageName: llama.cpp
PackageUrl: https://github.com/ggml-org/llama.cpp
License: MIT
LicenseUrl: https://github.com/ggml-org/llama.cpp/blob/HEAD/LICENSE
ShortDescription: LLM inference in C/C++
Tags:
- ggml
- llama
ReleaseNotes: |-
  tools : enable kvu in perplexity for hellaswag, winogrande, multiple-choice (#19954)
  llama-perplexity -hf unsloth/Qwen3-0.6B-GGUF:Q4_K_M -f winogrande-debiased-eval.csv --winogrande
  winogrande_score : tokenizing selected tasks
  winogrande_score : calculating winogrande score over selected tasks.
  split_equal: sequential split is not supported when there are coupled sequences in the input batch (you may need to use the -kvu flag)
  decode: failed to find a memory slot for batch of size 46
  failed to decode the batch, n_batch = 2048, ret = 1
  winogrande_score: llama_decode() failed
  same for hellaswag:
  split_equal: sequential split is not supported when there are coupled sequences in the input batch (you may need to use the -kvu flag)
  decode: failed to find a memory slot for batch of size 99
  failed to decode the batch, n_batch = 2048, ret = 1
  hellaswag_score: llama_decode() failed
  Signed-off-by: Adrien Gallouët angt@huggingface.co
  macOS/iOS:
  - macOS Apple Silicon (arm64)
  - macOS Intel (x64)
  - iOS XCFramework
  Linux:
  - Ubuntu x64 (CPU)
  - Ubuntu x64 (Vulkan)
  - Ubuntu x64 (ROCm 7.2)
  - Ubuntu s390x (CPU)
  Windows:
  - Windows x64 (CPU)
  - Windows arm64 (CPU)
  - Windows x64 (CUDA 12) - CUDA 12.4 DLLs
  - Windows x64 (CUDA 13) - CUDA 13.1 DLLs
  - Windows x64 (Vulkan)
  - Windows x64 (SYCL)
  - Windows x64 (HIP)
  openEuler:
  - openEuler x86 (310p)
  - openEuler x86 (910b, ACL Graph)
  - openEuler aarch64 (310p)
  - openEuler aarch64 (910b, ACL Graph)
ReleaseNotesUrl: https://github.com/ggml-org/llama.cpp/releases/tag/b8334
Documentations:
- DocumentLabel: Wiki
  DocumentUrl: https://github.com/ggml-org/llama.cpp/wiki
ManifestType: defaultLocale
ManifestVersion: 1.12.0