happyz/llama.cpp (mirror of https://github.com/ggerganov/llama.cpp.git)
llama.cpp/docs/backend (at commit 349a829756)

Latest commit: 48cdc36cf9 "docs/ggml-virt: add link to testing + configuration" by Kevin Pouget, 2026-02-05 16:22:37 +01:00
GGML-VirtGPU/
    docs/ggml-virt: add link to testing + configuration (2026-02-05 16:22:37 +01:00)
snapdragon/
    Bump cmake max version (needed for Windows on Snapdragon builds) (#19188) (2026-02-01 14:13:38 -08:00)
BLIS.md
    make : deprecate (#10514) (2024-12-02 21:22:53 +02:00)
CANN.md
    CANN: add operator fusion support for ADD + RMS_NORM (#17512) (2026-01-05 15:38:18 +08:00)
CUDA-FEDORA.md
    docs: update: improve the Fedoa CUDA guide (#12536) (2025-03-24 11:02:26 +00:00)
OPENCL.md
    docs: add linux to index (#18907) (2026-01-18 18:03:35 +08:00)
SYCL.md
    Remove support for Nvidia & AMD GPU, because the oneAPI plugin for Nvidia & AMD GPU is unavailable: download/installation channels are out of work. (#19246) (2026-02-02 21:06:21 +08:00)
ZenDNN.md
    ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) (2025-12-07 00:13:33 +08:00)
zDNN.md
    ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) (2025-12-07 00:13:33 +08:00)