happyz/llama.cpp (mirror of https://github.com/ggerganov/llama.cpp.git)
llama.cpp/ggml
Latest commit: 73955f7d2a by Johannes Gäßler, 2025-11-28 10:29:09 +01:00
CUDA: no FP16 arithmetic for vector FA kernel (#17558)
Name             Last commit                                                                   Last updated
cmake            ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)     2025-08-07 13:45:41 +02:00
include          rpc : cache and reuse compute graphs (#15405)                                 2025-11-28 08:33:51 +00:00
src              CUDA: no FP16 arithmetic for vector FA kernel (#17558)                        2025-11-28 10:29:09 +01:00
.gitignore       vulkan : cmake integration (#8119)                                            2024-07-13 18:12:39 +02:00
CMakeLists.txt   ggml : remove dirty flag from version string (ggml/1391)                      2025-11-24 15:26:31 +02:00