happyz / llama.cpp (mirror of https://github.com/ggerganov/llama.cpp.git)
Path: llama.cpp / ggml / src / ggml-webgpu (at commit ec997b4f2b)
Latest commit: 9e41884dce by Reese Levine: Updates to webgpu get_memory (#18707), 2026-01-09 08:17:18 -08:00
wgsl-shaders/                 ggml webgpu: initial flashattention implementation (#18610)    2026-01-08 08:23:39 -08:00
CMakeLists.txt                ggml webgpu: add support for emscripten builds (#17184)        2025-12-03 10:25:34 +01:00
ggml-webgpu-shader-lib.hpp    ggml webgpu: initial flashattention implementation (#18610)    2026-01-08 08:23:39 -08:00
ggml-webgpu.cpp               Updates to webgpu get_memory (#18707)                          2026-01-09 08:17:18 -08:00
pre_wgsl.hpp                  ggml webgpu: initial flashattention implementation (#18610)    2026-01-08 08:23:39 -08:00