happyz / llama.cpp (mirror of https://github.com/ggerganov/llama.cpp.git)
llama.cpp / include @ 0aae7d78c7

Latest commit: f96df927eb by Nikodem Eluszkiewicz
ggml : limit the first TurboQuant CPU PR to TBQ
2026-03-27 23:57:43 +01:00
llama-cpp.h    llama : re-enable manual LoRA adapter free (#19983)    2026-03-18 12:03:26 +02:00
llama.h        ggml : limit the first TurboQuant CPU PR to TBQ        2026-03-27 23:57:43 +01:00