happyz/llama.cpp (mirror of https://github.com/ggerganov/llama.cpp.git)
llama.cpp/ggml

Latest commit: c959b676be "CUDA: fix FA occupancy, optimize tile kernel (#15982)" by Johannes Gäßler, 2025-09-17 15:32:42 +02:00
Name            Last commit                                                                                      Date
cmake           ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)                        2025-08-07 13:45:41 +02:00
include         ggml-zdnn: fix #15414, activate FP16 and BF16 acceleration and incorrect zTensor free (#15839)  2025-09-13 02:39:52 +08:00
src             CUDA: fix FA occupancy, optimize tile kernel (#15982)                                           2025-09-17 15:32:42 +02:00
.gitignore      vulkan : cmake integration (#8119)                                                               2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml-cpu: drop support for nnpa intrinsics (#15821)                                              2025-09-06 11:27:28 +08:00
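
Several of the commits above concern build configuration; in particular, GGML_BACKEND_DL (named in the cmake entry) builds ggml backends as dynamically loadable libraries instead of linking them into the main binary. As a minimal sketch of how such options are passed at configure time, assuming a standard llama.cpp CMake setup (pairing GGML_BACKEND_DL with a specific backend flag such as GGML_CUDA is an assumption here, so check the repository's build documentation for the options valid in your checkout):

    # Configure with dynamically loaded backends; each enabled backend is
    # built as a separate shared library that ggml loads at runtime rather
    # than being linked into the executable.
    cmake -B build -DGGML_BACKEND_DL=ON -DGGML_CUDA=ON

    # Build everything in release mode.
    cmake --build build --config Release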