happyz/llama.cpp, a mirror of https://github.com/ggerganov/llama.cpp.git
llama.cpp/ggml/src/ggml-vulkan (tree at commit d84635b1b0)

Latest commit: fd123cfead by 0cc4m, "Vulkan: Default to 1GB allocations instead of 4GB to avoid fragmentation and driver issues" (#12434), 2025-03-18 07:21:40 +01:00
| Name            | Last commit                                                                                          | Last commit date           |
|-----------------|------------------------------------------------------------------------------------------------------|----------------------------|
| cmake           | fix: ggml: fix vulkan-shaders-gen build (#10448)                                                     | 2025-01-15 14:17:42 +01:00 |
| vulkan-shaders  | llama: Add support for RWKV v7 architecture (#12412)                                                 | 2025-03-18 07:27:50 +08:00 |
| CMakeLists.txt  | fix: ggml: fix vulkan-shaders-gen build (#10448)                                                     | 2025-01-15 14:17:42 +01:00 |
| ggml-vulkan.cpp | Vulkan: Default to 1GB allocations instead of 4GB to avoid fragmentation and driver issues (#12434)  | 2025-03-18 07:21:40 +01:00 |