happyz/llama.cpp (mirror of https://github.com/ggerganov/llama.cpp.git)
Directory: llama.cpp/ggml

Latest commit: 64fe17fbb8 by Aman Gupta (2025-11-08 21:05:19 +08:00)
Revert "CUDA: add expert reduce kernel (#16857)" (#17100)
| Name           | Last commit                                                               | Date                       |
|----------------|---------------------------------------------------------------------------|----------------------------|
| cmake          | ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094) | 2025-08-07 13:45:41 +02:00 |
| include        | ggml-cpu : bicubic interpolation (#16891)                                 | 2025-11-04 13:12:20 +01:00 |
| src            | Revert "CUDA: add expert reduce kernel (#16857)" (#17100)                 | 2025-11-08 21:05:19 +08:00 |
| .gitignore     | vulkan : cmake integration (#8119)                                        | 2024-07-13 18:12:39 +02:00 |
| CMakeLists.txt | ggml: disable vxe for cross-compilation by default (#16966)               | 2025-11-08 16:00:20 +08:00 |
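The cmake/ entry above references the GGML_BACKEND_DL build option. As a minimal sketch, assuming a standard out-of-tree CMake build from the repository root, enabling it might look like:

```sh
# Minimal sketch, assuming a standard CMake build from the repository root.
# GGML_BACKEND_DL is the option named in commit #15094; per that commit
# message, enabling it skips the direct backend library linking step
# (backends are instead loaded dynamically at runtime).
cmake -B build -DGGML_BACKEND_DL=ON
cmake --build build --config Release
```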