happyz/llama.cpp (mirror of https://github.com/ggerganov/llama.cpp.git)
llama.cpp / ggml

Latest commit 207786a760 by chraac: Merge tag 'b7588' into dev-fix-model-load-error (2025-12-31 14:27:58 +08:00)
| Name | Last commit | Date |
| --- | --- | --- |
| cmake | ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094) | 2025-08-07 13:45:41 +02:00 |
| include | llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) | 2025-12-15 09:24:59 +01:00 |
| src | Merge tag 'b7588' into dev-fix-model-load-error | 2025-12-31 14:27:58 +08:00 |
| .gitignore | vulkan : cmake integration (#8119) | 2024-07-13 18:12:39 +02:00 |
| CMakeLists.txt | cmake: Added more x86_64 CPU backends when building with `GGML_CPU_ALL_VARIANTS=On` (#18186) | 2025-12-28 09:33:29 +02:00 |
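
The cmake-related entries above reference two build options, `GGML_BACKEND_DL` and `GGML_CPU_ALL_VARIANTS`. A minimal sketch of how they might be combined when configuring a build, assuming a standard out-of-tree CMake workflow; the flag names are taken from the commit messages above, and their exact semantics may vary between revisions:

```sh
# Sketch: configure ggml/llama.cpp so that backends are built as
# dynamically loadable libraries (GGML_BACKEND_DL=ON) and multiple
# x86_64 CPU backend variants are produced (GGML_CPU_ALL_VARIANTS=ON).
# Both flags appear in the commit messages above; behavior may differ
# between revisions.
cmake -B build -DGGML_BACKEND_DL=ON -DGGML_CPU_ALL_VARIANTS=ON
cmake --build build --config Release
```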