happyz/llama.cpp (mirror of https://github.com/ggerganov/llama.cpp.git)
llama.cpp/ggml/src/ggml-webgpu (at commit 5744d7ec43)
Latest commit: 8ced5f41f9 by Reese Levine, 2026-03-18 10:23:47 -07:00: Move to no timeout for WaitAny in graph submission to avoid deadlocks in some cases on llvm-pipe backends (#20618)
wgsl-shaders                 ggml-webgpu: Add supports for `GGML_OP_REPEAT` (#20230)                                                             2026-03-11 14:40:36 -07:00
CMakeLists.txt               ggml webgpu: add support for emscripten builds (#17184)                                                             2025-12-03 10:25:34 +01:00
ggml-webgpu-shader-lib.hpp   ggml-webgpu: Add supports for `GGML_OP_REPEAT` (#20230)                                                             2026-03-11 14:40:36 -07:00
ggml-webgpu.cpp              Move to no timeout for WaitAny in graph submission to avoid deadlocks in some cases on llvm-pipe backends (#20618)   2026-03-18 10:23:47 -07:00
pre_wgsl.hpp                 ggml webgpu: initial flashattention implementation (#18610)                                                         2026-01-08 08:23:39 -08:00