happyz/llama.cpp (mirror of https://github.com/ggerganov/llama.cpp.git)
llama.cpp / ggml / src / ggml-vulkan (at commit dd6e6d0b6a)
Latest commit: 10bb545c5b by 0cc4m: Vulkan: Set device max size for host memory to avoid OOM warning and fallback to CPU buffer (#14249), 2025-06-19 09:15:42 +02:00
Name            | Last commit                                                                                           | Date
cmake           | cmake: fix ggml-shaders-gen compiler paths containing spaces (#12747)                                 | 2025-04-04 10:12:40 -03:00
vulkan-shaders  | cmake: clean up external project logic for vulkan-shaders-gen (#14179)                                | 2025-06-16 10:32:13 -03:00
CMakeLists.txt  | cmake: remove shader-gen step-targets from ggml-vulkan (#14226)                                       | 2025-06-17 22:33:25 +02:00
ggml-vulkan.cpp | Vulkan: Set device max size for host memory to avoid OOM warning and fallback to CPU buffer (#14249)  | 2025-06-19 09:15:42 +02:00
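
The most recent change listed above (#14249) describes capping the reported maximum size for host-memory buffers so that oversized allocations fall back to a CPU buffer instead of triggering an out-of-memory warning. The sketch below illustrates that general idea using the standard Vulkan API; it is not the actual ggml-vulkan implementation, and the helper names (query_max_host_buffer_size, alloc_host_buffer) are hypothetical.

    // Hypothetical sketch, not the ggml-vulkan code: respect the device's
    // maximum single-allocation size and fall back to a plain CPU buffer.
    #include <vulkan/vulkan.h>
    #include <cstdio>
    #include <cstdlib>

    // Read VkPhysicalDeviceMaintenance3Properties::maxMemoryAllocationSize
    // (core in Vulkan 1.1) as an upper bound for host-visible buffers.
    static VkDeviceSize query_max_host_buffer_size(VkPhysicalDevice phys_dev) {
        VkPhysicalDeviceMaintenance3Properties maint3 = {};
        maint3.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MAINTENANCE_3_PROPERTIES;

        VkPhysicalDeviceProperties2 props2 = {};
        props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
        props2.pNext = &maint3;

        vkGetPhysicalDeviceProperties2(phys_dev, &props2);
        return maint3.maxMemoryAllocationSize;
    }

    // Hypothetical allocation wrapper: if the request exceeds the device
    // limit, use ordinary CPU memory instead of a host-visible Vulkan buffer.
    static void * alloc_host_buffer(VkPhysicalDevice phys_dev, size_t size, bool * used_cpu_fallback) {
        const VkDeviceSize max_size = query_max_host_buffer_size(phys_dev);
        if ((VkDeviceSize) size > max_size) {
            fprintf(stderr, "request of %zu bytes exceeds device limit, using CPU buffer\n", size);
            *used_cpu_fallback = true;
            return malloc(size);   // plain CPU buffer fallback
        }
        *used_cpu_fallback = false;
        // ... allocate and map a host-visible Vulkan buffer of `size` bytes here ...
        return nullptr;            // placeholder: real code would return the mapped pointer
    }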