happyz / llama.cpp (mirror of https://github.com/ggerganov/llama.cpp.git)
Branch: 0cc4m/vulkan-fix-host-memory-max-size
Path: llama.cpp / ggml

Latest commit: 4b2233befb by 0cc4m — Vulkan: Set device max size for host memory to avoid OOM warning and fallback to CPU buffer (2025-06-17 20:25:42 +00:00)

| Name           | Last commit                                                                                 | Date                       |
|----------------|---------------------------------------------------------------------------------------------|----------------------------|
| ..             |                                                                                             |                            |
| cmake          | ggml-cpu : rework weak alias on apple targets (#14146)                                      | 2025-06-16 13:54:15 +08:00 |
| include        | ggml : remove ggml_graph_import and ggml_graph_export declarations (ggml/1247)              | 2025-06-01 13:43:57 +03:00 |
| src            | Vulkan: Set device max size for host memory to avoid OOM warning and fallback to CPU buffer | 2025-06-17 20:25:42 +00:00 |
| .gitignore     | vulkan : cmake integration (#8119)                                                          | 2024-07-13 18:12:39 +02:00 |
| CMakeLists.txt | HIP: disable rocwmma on gfx12 by default until rocm 7.0 (#14202)                            | 2025-06-16 13:47:38 +02:00 |