llama.cpp/ggml/src/ggml-webgpu
Latest commit 365a3e8c31 by Georgi Gerganov (2026-01-19 20:03:19 +02:00):

ggml : add ggml_build_forward_select (#18550)

* ggml : add ggml_build_forward_select
* cuda : adapt CUDA graph compat to new feature
* vulkan : update logic to handle command buffer closing
* ggml : check compute for fusion
* ggml : add comment
Name                        Last commit message                                          Last commit date
wgsl-shaders                ggml webgpu: support for backend sampling (#18880)           2026-01-16 16:12:43 -08:00
CMakeLists.txt              ggml webgpu: add support for emscripten builds (#17184)      2025-12-03 10:25:34 +01:00
ggml-webgpu-shader-lib.hpp  ggml webgpu: support for backend sampling (#18880)           2026-01-16 16:12:43 -08:00
ggml-webgpu.cpp             ggml : add ggml_build_forward_select (#18550)                2026-01-19 20:03:19 +02:00
pre_wgsl.hpp                ggml webgpu: initial flashattention implementation (#18610)  2026-01-08 08:23:39 -08:00