llama.cpp/.github
Latest commit: ggml: WebGPU backend host improvements and style fixing (#14978)
Author: Reese Levine (587d0118f5)
Date: 2025-08-04 08:52:43 -07:00

* Add parameter buffer pool, batching of submissions, refactor command building/submission
* Add header for linux builds
* Free staged parameter buffers at once
* Format with clang-format
* Fix thread-safe implementation
* Use device implicit synchronization
* Update workflow to use custom release
* Remove testing branch workflow
Name                        Last commit                                                        Date
ISSUE_TEMPLATE              ggml : remove kompute backend (#14501)                             2025-07-03 07:48:32 +03:00
actions                     releases : use arm version of curl for arm releases (#13592)       2025-05-16 19:36:51 +02:00
workflows                   ggml: WebGPU backend host improvements and style fixing (#14978)   2025-08-04 08:52:43 -07:00
labeler.yml                 ggml : remove kompute backend (#14501)                             2025-07-03 07:48:32 +03:00
pull_request_template.md    repo : update links to new url (#11886)                            2025-02-15 16:40:57 +02:00