happyz/llama.cpp (mirror of https://github.com/ggerganov/llama.cpp.git)
docs/backend
Latest commit: 520ffce162 (Merge 689428949b into 3bc8d2cf23) by Neo Zhang, 2026-02-01 22:46:04 +00:00

Name              Last updated                  Last commit
snapdragon        2026-02-01 14:13:38 -08:00    Bump cmake max version (needed for Windows on Snapdragon builds) (#19188)
BLIS.md           2024-12-02 21:22:53 +02:00    make : deprecate (#10514)
CANN.md           2026-01-05 15:38:18 +08:00    CANN: add operator fusion support for ADD + RMS_NORM (#17512)
CUDA-FEDORA.md    2025-03-24 11:02:26 +00:00    docs: update: improve the Fedora CUDA guide (#12536)
OPENCL.md         2026-01-18 18:03:35 +08:00    docs: add linux to index (#18907)
SYCL.md           2026-02-01 21:07:44 +08:00    Remove support for Nvidia & AMD GPUs; the oneAPI plugin for them is unavailable because its download/installation channels no longer work
ZenDNN.md         2025-12-07 00:13:33 +08:00    ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690)
zDNN.md           2025-12-07 00:13:33 +08:00    ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690)