llama.cpp/.github/workflows
Reese Levine 15bff84bf5
ggml webgpu: initial flashattention implementation (#18610)
* FlashAttention (#13)

* Add inplace softmax

* Move rms_norm to split row approach

* Update debug for supports_op

* clean up debug statements

* neg f16xf32xip builds and runs; haven't actually run a model that uses the neg kernel yet, though

* neg passes backend test

* unary operators pass ggml tests

* rms_norm double declaration bug atoned

* abides by editor-config

* removed vestigial files

* fixed autoconfig

* All operators (including xielu) working

* removed unnecessary check for whether node->src[1] exists for unary operators

* responded and dealt with PR comments

* implemented REPL_Template support and fixed a bug in the unary operators kernel

* formatted embed wgsl and ggml-webgpu.cpp

* Faster tensors (#8)

Add fast matrix and matrix/vector multiplication.

* Use map for shader replacements instead of pair of strings
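
A minimal sketch of the idea, with illustrative names rather than the actual ggml-webgpu helpers: shader variants are generated by substituting placeholders in a WGSL template from a single map, so adding a new replacement is one more key/value pair instead of threading another pair of strings through the build code.

```cpp
#include <map>
#include <string>

// Hypothetical helper: replace every "{{KEY}}" placeholder in a WGSL template
// with its value from the map. Illustrative only, not the ggml-webgpu API.
static std::string apply_replacements(std::string shader, const std::map<std::string, std::string> & repls) {
    for (const auto & [key, value] : repls) {
        const std::string pat = "{{" + key + "}}";
        for (auto pos = shader.find(pat); pos != std::string::npos; pos = shader.find(pat, pos)) {
            shader.replace(pos, pat.size(), value);
            pos += value.size();
        }
    }
    return shader;
}

// Usage sketch: pick type-specific snippets per pipeline variant.
// std::map<std::string, std::string> repls = { { "SRC_TYPE", "f16" }, { "DST_TYPE", "f32" } };
// std::string wgsl = apply_replacements(template_wgsl, repls);
```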

* Wasm (#9)

* webgpu : fix build on emscripten

* more debugging stuff

* test-backend-ops: force single thread on wasm

* fix single-thread case for init_tensor_uniform

* use jspi

* add pthread

* test: remember to set n_thread for cpu backend

* Add buffer label and enable dawn-specific toggles to turn off some checks

* Intermediate state

* Fast working f16/f32 vec4

* Working float fast mul mat

* Clean up naming of mul_mat to match logical model, start work on q mul_mat

* Setup for subgroup matrix mat mul

* Basic working subgroup matrix

* Working subgroup matrix tiling
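
The tiling follows the usual blocked-matmul decomposition; a scalar C++ sketch of the same structure (in the WGSL shader, each TM x TN x TK step is a subgroup-matrix multiply-accumulate on tiles staged in shared memory, and the tile sizes here are illustrative):

```cpp
#include <algorithm>

// Scalar sketch of tiled matrix multiplication C[M x N] += A[M x K] * B[K x N].
constexpr int TM = 16, TN = 16, TK = 16; // illustrative tile sizes

void matmul_tiled(const float * A, const float * B, float * C, int M, int N, int K) {
    for (int m0 = 0; m0 < M; m0 += TM) {
        for (int n0 = 0; n0 < N; n0 += TN) {
            for (int k0 = 0; k0 < K; k0 += TK) {          // accumulate over K tiles
                for (int m = m0; m < std::min(m0 + TM, M); ++m) {
                    for (int n = n0; n < std::min(n0 + TN, N); ++n) {
                        float acc = 0.0f;
                        for (int k = k0; k < std::min(k0 + TK, K); ++k) {
                            acc += A[m * K + k] * B[k * N + n];
                        }
                        C[m * N + n] += acc;
                    }
                }
            }
        }
    }
}
```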

* Handle weirder sg matrix sizes (but still a multiple of the sg matrix size)

* Working start to gemv

* working f16 accumulation with shared memory staging

* Print out available subgroup matrix configurations

* Vectorize dst stores for sg matrix shader

* Gemv working scalar
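
For the matrix-vector (gemv) path, the usual scheme is one workgroup per output row: each invocation accumulates a strided partial dot product, and the partials are combined with a subgroup or shared-memory reduction. A scalar sketch of the per-invocation work, with illustrative names:

```cpp
// Sketch of the per-invocation work for y[r] = dot(A row r, x).
// The reduction of the partials across the workgroup is omitted here.
float gemv_row_partial(const float * a_row, const float * x, int n_cols, int thread_id, int n_threads) {
    float partial = 0.0f;
    for (int c = thread_id; c < n_cols; c += n_threads) {
        partial += a_row[c] * x[c];
    }
    return partial;
}
```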

* Minor set_rows optimization (#4)

* updated optimization, fixed errors

* non-vectorized version now dispatches one thread per element

* Simplify

* Change logic for set_rows pipelines

---------

Co-authored-by: Neha Abbas <nehaabbas@macbookpro.lan>
Co-authored-by: Neha Abbas <nehaabbas@ReeseLevines-MacBook-Pro.local>
Co-authored-by: Reese Levine <reeselevine1@gmail.com>

* Comment on dawn toggles

* Working subgroup matrix code for (semi)generic sizes

* Remove some comments

* Cleanup code

* Update dawn version and move to portable subgroup size

* Try to fix new dawn release

* Update subgroup size comment

* Only check for subgroup matrix configs if they are supported

* Add toggles for subgroup matrix/f16 support on nvidia+vulkan

* Make row/col naming consistent

* Refactor shared memory loading

* Move sg matrix stores to correct file

* Working q4_0

* Formatting

* Work with emscripten builds

* Fix test-backend-ops emscripten for f16/quantized types

* Use emscripten memory64 to support get_memory

* Add build flags and try ci

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>

* Remove extra whitespace

* Move wasm single-thread logic out of test-backend-ops for cpu backend

* Disable multiple threads for emscripten single-thread builds in ggml_graph_plan
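
A minimal sketch of what such a guard looks like (the real change lives in ggml_graph_plan; this only illustrates the preprocessor check): Emscripten defines __EMSCRIPTEN_PTHREADS__ only when building with -pthread, so single-thread builds can be detected and the thread count clamped to one.

```cpp
// Sketch only: clamp the requested thread count on single-threaded wasm builds.
// __EMSCRIPTEN__ is always defined under emscripten; __EMSCRIPTEN_PTHREADS__
// is defined only when compiling with -pthread.
static int effective_n_threads(int n_threads) {
#if defined(__EMSCRIPTEN__) && !defined(__EMSCRIPTEN_PTHREADS__)
    (void) n_threads;
    return 1;
#else
    return n_threads > 0 ? n_threads : 1;
#endif
}
```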

* Refactored pipelines and workgroup calculations (#10)

* refactored pipelines

* refactored workgroup calculation

* removed commented-out block of prior maps

* Clean up ceiling division pattern
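
The pattern being consolidated is the standard integer ceiling division used for workgroup counts; a small helper (name illustrative) keeps it in one place:

```cpp
#include <cstdint>

// Integer ceiling division: number of workgroups needed to cover n elements
// when each workgroup handles per_wg of them.
static inline uint32_t ceil_div(uint32_t n, uint32_t per_wg) {
    return (n + per_wg - 1) / per_wg;
}

// e.g. dispatching one thread per element with 256 threads per workgroup:
// uint32_t n_workgroups = ceil_div(n_elements, 256);
```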

---------

Co-authored-by: Neha Abbas <nehaabbas@eduroam-169-233-141-223.ucsc.edu>
Co-authored-by: Reese Levine <reeselevine1@gmail.com>

* Start work on flash attention

* Shader structure set up (many bugs still)

* debugging

* Working first test

* Working with head grouping, head sizes to 128, logit softcap, mask/sinks enabled, f32
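
For reference, logit softcapping bounds each attention score before the softmax; the standard formula (used by models such as Gemma 2) is softcap * tanh(score / softcap). A scalar sketch, not the shader code itself:

```cpp
#include <cmath>

// Soft-cap an attention logit into the range (-softcap, softcap).
// Applied after the Q*K^T scaling and before mask/softmax.
static inline float logit_softcap(float score, float softcap) {
    return softcap * std::tanh(score / softcap);
}
```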

* Generalize softmax to work with multiple subgroups, f16 accumulation, mask shared memory tiling

* Start work on integrating pre-wgsl

* Separate structs/initial shader compilation library into separate files

* Work on compilation choices for flashattention

* Work on subgroup matrix/tile size portability

* subgroup size agnostic online softmax
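
The online (streaming) softmax keeps a running maximum and rescales previously accumulated sums whenever that maximum grows, so KV blocks can be consumed in any chunk size independent of the subgroup size. A scalar C++ sketch of the recurrence (the shader does this per row with subgroup reductions; the 1-d value stream here is illustrative):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Online softmax over a stream of attention scores s[0..n):
// maintain running max m, running denominator l, and rescale the accumulated
// output acc whenever m increases. Initialize with m = -INFINITY, l = 0, acc = 0.
void online_softmax_accumulate(const std::vector<float> & s, const std::vector<float> & v,
                               float & m, float & l, float & acc) {
    for (size_t i = 0; i < s.size(); ++i) {
        const float m_new = std::max(m, s[i]);
        const float scale = std::exp(m - m_new);   // rescale old contributions
        const float p     = std::exp(s[i] - m_new);
        l   = l * scale + p;
        acc = acc * scale + p * v[i];
        m   = m_new;
    }
}
// Final output for the row is acc / l.
```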

* Cleanups, quantization types

* more cleanup

* fix wasm build

* Refactor flashattention to increase parallelism, use direct loads for KV in some cases

* Checkpoint

* formatting

* Update to account for default kv cache padding
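
The KV length seen by the kernel is rounded up to the cache's padding granularity, so the shader has to exclude the padded tail (via the mask or bounds checks). The rounding itself is the usual pad-to-multiple; ggml's GGML_PAD macro does the same kind of round-up:

```cpp
// Round n up to the next multiple of pad (sketch of the padding arithmetic).
static inline int pad_to_multiple(int n, int pad) {
    return ((n + pad - 1) / pad) * pad;
}
// e.g. pad_to_multiple(1000, 256) == 1024; positions beyond the true KV length
// must not contribute to the attention output.
```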

* formatting shader

* Add workflow for ggml-ci webgpu

* Try passing absolute path to dawn in ggml-ci

* Avoid error on device destruction, add todos for proper cleanup

* Fix unused warning

* Forgot to mark one parameter as unused

* Move some flashattn computation to f32 for correctness
2026-01-08 08:23:39 -08:00
bench.yml.disabled llama : move end-user examples to tools directory (#13249) 2025-05-02 20:27:13 +02:00
build-cache.yml ci : refactor sdk caching to minimize storage (#16414) 2025-10-06 17:40:21 +02:00
build-cmake-pkg.yml ci: add workflow for relocatable cmake package (#14346) 2025-06-23 15:30:51 -03:00
build-linux-cross.yml ggml-cpu: add ggml_thread_cpu_relax with Zihintpause support (#17784) 2025-12-08 10:41:34 +02:00
build.yml ggml webgpu: initial flashattention implementation (#18610) 2026-01-08 08:23:39 -08:00
check-vendor.yml ci: add check vendor job (#17179) 2025-11-12 14:56:02 +01:00
close-issue.yml ci : exempt correct research label (#15825) 2025-09-06 01:21:15 +02:00
copilot-setup-steps.yml ci : add copilot-instructions.md (#15286) 2025-08-21 11:47:52 +02:00
docker.yml docker : add CUDA 13.1 image build (#18441) 2025-12-30 22:28:53 +01:00
editorconfig.yml ci : pin dependency to specific version (#11137) 2025-01-08 12:07:20 +01:00
gguf-publish.yml ci : update checkout, setup-python and upload-artifact to latest (#6456) 2024-04-03 21:01:13 +03:00
labeler.yml repo : update links to new url (#11886) 2025-02-15 16:40:57 +02:00
pre-tokenizer-hashes.yml ci : check that pre-tokenizer hashes are up-to-date (#15032) 2025-08-02 14:39:01 +02:00
python-check-requirements.yml py : fix requirements check '==' -> '~=' (#8982) 2024-08-12 11:02:01 +03:00
python-lint.yml ci : add ubuntu cuda build, build with one arch on windows (#10456) 2024-11-26 13:05:07 +01:00
python-type-check.yml ci : reduce severity of unused Pyright ignore comments (#9697) 2024-09-30 14:13:16 -04:00
release.yml sampling : add support for backend sampling (#17004) 2026-01-04 22:22:16 +02:00
server-webui.yml ci : clean up webui jobs (#18116) 2025-12-17 10:45:40 +01:00
server.yml sampling : add support for backend sampling (#17004) 2026-01-04 22:22:16 +02:00
update-ops-docs.yml ci : avoid manual updates of docs/ops.md (#16663) 2025-10-19 14:03:25 +02:00
winget.yml ci : fix winget workflow (#17790) 2025-12-05 19:44:17 +08:00