llama.cpp/ggml/src
Aman Gupta 077c94d0ca
CUDA: add a fused top-K MoE kernel (#16130)
* CUDA: add a fused top-K MoE kernel

This kernel does the following:
1. softmax over the logits per token [n_experts, n_tokens]
2. an iterated argmax reduction to select the top-k (n_experts_used) logits
3. write weights + ids to global memory

It is intended as a fusion of the softmax->top-k->get_rows pipeline for MoE models

* Refactor into ggml_cuda_should_use_topk_moe

* Review: Use better coalescing pattern, use WARP_SIZE, store logits into registers before

* Review: format + micro-optimizations

* Fix bug: fix tie breakers

* Add optional norm + clean-up code

* Use smem for final write

* Add bounds check

* Use better memory pattern for writeback
2025-09-25 16:35:05 +02:00
ggml-blas rename optimize_graph to graph_optimize (#16082) 2025-09-18 13:46:17 -05:00
ggml-cann rename optimize_graph to graph_optimize (#16082) 2025-09-18 13:46:17 -05:00
ggml-cpu ggml : fix loongarch lsx compilation error (#15864) 2025-09-25 12:22:55 +03:00
ggml-cuda CUDA: add a fused top-K MoE kernel (#16130) 2025-09-25 16:35:05 +02:00
ggml-hip HIP: bump requirement to rocm 6.1 (#15296) 2025-08-13 20:44:30 +02:00
ggml-metal metal : fuse NORM + MUL + ADD, support non-multiples of 4 (#16220) 2025-09-25 11:30:16 +03:00
ggml-musa CUDA: replace GGML_CUDA_F16 with CUDA arch checks (#15433) 2025-08-20 16:58:49 +02:00
ggml-opencl ggml : implement set_rows with i32 index (#16159) 2025-09-22 19:13:00 +02:00
ggml-rpc rpc : use ggml logging facilities 2025-09-25 07:20:02 +00:00
ggml-sycl ggml : implement set_rows with i32 index (#16159) 2025-09-22 19:13:00 +02:00
ggml-vulkan ggml : implement set_rows with i32 index (#16159) 2025-09-22 19:13:00 +02:00
ggml-webgpu ggml : implement set_rows with i32 index (#16159) 2025-09-22 19:13:00 +02:00
ggml-zdnn zdnn: refactor codebase + add docs (#16178) 2025-09-23 14:53:05 +08:00
CMakeLists.txt cmake : fix static linking for OpenMP on Unix-like systems (#16031) 2025-09-18 23:07:18 +02:00
ggml-alloc.c ggml : split graph allocations according to backend max buffer size (#15815) 2025-09-24 16:17:49 +02:00
ggml-backend-impl.h rename optimize_graph to graph_optimize (#16082) 2025-09-18 13:46:17 -05:00
ggml-backend-reg.cpp ggml-backend : add GGML_BACKEND_DEVICE_TYPE_IGPU device type (#15797) 2025-09-11 22:47:38 +02:00
ggml-backend.cpp llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
ggml-common.h llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
ggml-impl.h ggml : split graph allocations according to backend max buffer size (#15815) 2025-09-24 16:17:49 +02:00
ggml-opt.cpp finetune: SGD optimizer, more CLI args (#13873) 2025-08-14 12:03:57 +02:00
ggml-quants.c ggml : fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl (#15928) 2025-09-23 10:25:20 +02:00
ggml-quants.h llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
ggml-threading.cpp ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-threading.h remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) 2024-12-12 19:02:49 +01:00
ggml.c ggml : implement set_rows with i32 index (#16159) 2025-09-22 19:13:00 +02:00
ggml.cpp ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) 2025-06-01 13:43:57 +03:00
gguf.cpp gguf: gguf_writer refactor (#15691) 2025-09-05 11:34:28 +02:00