happyz/llama.cpp — mirror of https://github.com/ggerganov/llama.cpp.git
docs/backend — latest commit a787084155 by Francisco Herrera: clarify which steps (2025-12-14 09:43:25 -05:00)
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| hexagon | Add experimental ggml-hexagon backend for the Hexagon NPU (#16547) | 2025-10-22 13:47:09 -07:00 |
| BLIS.md | make : deprecate (#10514) | 2024-12-02 21:22:53 +02:00 |
| CANN.md | CANN: GGML_CANN_ACL_GRAPH works only USE_ACL_GRAPH enabled (#16861) | 2025-11-12 14:37:52 +08:00 |
| CUDA-FEDORA.md | docs: update: improve the Fedoa CUDA guide (#12536) | 2025-03-24 11:02:26 +00:00 |
| OPENCL.md | clarify which steps | 2025-12-14 09:43:25 -05:00 |
| SYCL.md | sycl : support to malloc memory on device more than 4GB, update the doc and script (#17566) | 2025-11-29 14:59:44 +02:00 |
| ZenDNN.md | ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) | 2025-12-07 00:13:33 +08:00 |
| zDNN.md | ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) | 2025-12-07 00:13:33 +08:00 |