forked from ggml-org/llama.cpp
Pull requests: PrismML-Eng/llama.cpp
#25 opencl: Q1_0 support first attempt (Draft)
Labels: ggml, OpenCL. Opened Apr 15, 2026 by khosravipasha (Collaborator).
#10 (Performance) Optimized x86 and generic q1_0(_g128) dot
Labels: ggml. Opened Apr 3, 2026 by pl752.
#2 feat: port TQ3_0 KV cache from llama-turboquant
Labels: examples, ggml, Nvidia GPU. Opened Apr 1, 2026 by carlosfundora.