llama.cpp/ggml
Aman Gupta (48e2fa9fb7)
CUDA: add fp kernel for larger batch size MoE (#16512)
* CUDA: kernel for larger batch sizes for MoE

* WIP

* fixup

* tests

* Move mmq_ids_helper to mmid

* cleanup

* Remove redundant checks
2025-10-14 13:15:15 +02:00
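
The commit above adds a dedicated floating-point kernel path for MoE matrix multiplication at larger batch sizes, where each token row must be multiplied by the weight matrix of whichever expert its id selects. The sketch below is only an illustration of that indirection pattern, not the actual kernel from #16512: the kernel name moe_mul_mat_sketch, the row-major weight layout, and the one-expert-per-token routing are all assumptions made for the example.

// Illustrative sketch of the MoE indirection a batched mul_mat kernel has to
// handle: each token t picks expert ids[t], and its activation row is
// multiplied by that expert's weight matrix. At larger batch sizes it pays to
// launch one block per (token, output row) pair rather than looping per token.
// NOT the kernel from #16512; names and layout here are hypothetical.

#include <cuda_runtime.h>
#include <cstdio>

__global__ void moe_mul_mat_sketch(
        const float * __restrict__ W,   // [n_expert, n_out, n_in] expert weights
        const float * __restrict__ x,   // [n_token, n_in] activations
        const int   * __restrict__ ids, // [n_token] expert id per token
        float       * __restrict__ dst, // [n_token, n_out] outputs
        int n_in, int n_out) {
    const int t = blockIdx.x;           // token index
    const int r = blockIdx.y;           // output row index
    const int e = ids[t];               // expert routed to this token

    const float * w_row = W + ((size_t)e * n_out + r) * n_in;
    const float * x_row = x + (size_t)t * n_in;

    // block-wide dot product via a shared-memory tree reduction
    __shared__ float partial[256];
    float sum = 0.0f;
    for (int i = threadIdx.x; i < n_in; i += blockDim.x) {
        sum += w_row[i] * x_row[i];
    }
    partial[threadIdx.x] = sum;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s) partial[threadIdx.x] += partial[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0) dst[(size_t)t * n_out + r] = partial[0];
}

int main() {
    const int n_expert = 4, n_token = 8, n_in = 256, n_out = 16;

    float *W, *x, *dst; int *ids;
    cudaMallocManaged(&W,   (size_t)n_expert * n_out * n_in * sizeof(float));
    cudaMallocManaged(&x,   (size_t)n_token * n_in * sizeof(float));
    cudaMallocManaged(&dst, (size_t)n_token * n_out * sizeof(float));
    cudaMallocManaged(&ids, n_token * sizeof(int));

    for (size_t i = 0; i < (size_t)n_expert * n_out * n_in; ++i) W[i] = 0.01f;
    for (size_t i = 0; i < (size_t)n_token * n_in; ++i)          x[i] = 1.0f;
    for (int t = 0; t < n_token; ++t) ids[t] = t % n_expert;     // round-robin routing

    dim3 grid(n_token, n_out);                   // one block per (token, output row)
    moe_mul_mat_sketch<<<grid, 256>>>(W, x, ids, dst, n_in, n_out);
    cudaDeviceSynchronize();

    printf("dst[0] = %f (expect ~%f)\n", dst[0], 0.01f * n_in);

    cudaFree(W); cudaFree(x); cudaFree(dst); cudaFree(ids);
    return 0;
}

One design point this sketch makes visible: the expert id lookup happens per token inside the kernel, so tokens routed to the same expert are not contiguous in memory. Helpers like the mmq_ids_helper mentioned in the commit body exist to sort or group token rows by expert so the real kernels can run denser per-expert batches.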
Name            Last commit                                                                    Date
cmake           ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)      2025-08-07 13:45:41 +02:00
include         rpc : add support for multiple devices (#16276)                                2025-10-04 12:49:16 +03:00
src             CUDA: add fp kernel for larger batch size MoE (#16512)                         2025-10-14 13:15:15 +02:00
.gitignore      vulkan : cmake integration (#8119)                                             2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml webgpu: profiling, CI updates, reworking of command submission (#16452)   2025-10-07 13:48:56 -07:00