llama.cpp/ggml/src
Latest commit b3e585988f by Jeff Bolz, 2024-11-19 08:25:17 +01:00:
vulkan: Optimize soft_max (#10301)
* vulkan: Optimize soft_max

Large soft_max operations could already saturate memory bandwidth, but
small/medium sizes were pretty slow. The bulk of the gains for those comes
from using a smaller workgroup size; making the workgroup size match the
subgroup size also makes the barriers much cheaper.
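
As a rough illustration of why this helps, here is a minimal GLSL sketch of a soft_max row reduction with the workgroup specialized to one subgroup. This is not the actual ggml-vulkan shader; the bindings, push constants, and buffer names are made up for the example. With one workgroup per subgroup, the max/sum reductions become subgroup intrinsics and need no shared memory or barrier():

```glsl
#version 450
#extension GL_KHR_shader_subgroup_arithmetic : require

// The host specializes constant 0 to the device's subgroup size, so the
// workgroup is exactly one subgroup; 32 is just a fallback default.
layout(local_size_x_id = 0, local_size_x = 32) in;

layout(binding = 0) readonly  buffer X { float data_x[]; };
layout(binding = 1) writeonly buffer D { float data_d[]; };
layout(push_constant) uniform PC { uint ncols; } p;

void main() {
    const uint row  = gl_WorkGroupID.x;
    const uint tid  = gl_LocalInvocationID.x;
    const uint base = row * p.ncols;

    // Per-thread partial max over the row (for numerical stability).
    float maxval = -1.0e30;
    for (uint c = tid; c < p.ncols; c += gl_WorkGroupSize.x) {
        maxval = max(maxval, data_x[base + c]);
    }
    // One workgroup == one subgroup, so this replaces a shared-memory
    // reduction with barriers.
    maxval = subgroupMax(maxval);

    // Per-thread partial sum of exponentials, then a subgroup reduction.
    float sum = 0.0;
    for (uint c = tid; c < p.ncols; c += gl_WorkGroupSize.x) {
        sum += exp(data_x[base + c] - maxval);
    }
    sum = subgroupAdd(sum);

    // Normalize.
    for (uint c = tid; c < p.ncols; c += gl_WorkGroupSize.x) {
        data_d[base + c] = exp(data_x[base + c] - maxval) / sum;
    }
}
```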

Cache some values in locals to avoid refetching/recomputing them, and stamp
out a few "template instantiations" so that the smaller cases fully unroll.
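
A hedged sketch of both ideas, assuming hypothetical BLOCK_SIZE/NUM_ITERS preprocessor defines as the "instantiation" mechanism; the real shaders may stamp out their variants differently:

```glsl
#version 450

// Hypothetical compile-time constants: building the shader once per
// (BLOCK_SIZE, NUM_ITERS) pair is the GLSL analogue of stamping out
// template instantiations. This instantiation handles rows with up to
// NUM_ITERS * BLOCK_SIZE columns.
#ifndef BLOCK_SIZE
#define BLOCK_SIZE 32
#endif
#ifndef NUM_ITERS
#define NUM_ITERS 4
#endif

layout(local_size_x = BLOCK_SIZE) in;

layout(binding = 0) readonly  buffer X { float data_x[]; };
layout(binding = 1) writeonly buffer D { float data_d[]; };
layout(push_constant) uniform PC { uint ncols; } p;

void main() {
    // Cache loop-invariant values in locals once, rather than rereading the
    // push constant and recomputing the row base offset on every iteration.
    const uint tid   = gl_LocalInvocationID.x;
    const uint ncols = p.ncols;
    const uint base  = gl_WorkGroupID.x * ncols;

    // The trip count NUM_ITERS is a compile-time constant, so the compiler
    // can fully unroll this loop for the small instantiations.
    float maxval = -1.0e30;
    for (uint i = 0; i < NUM_ITERS; ++i) {
        const uint c = i * BLOCK_SIZE + tid;
        if (c < ncols) {
            maxval = max(maxval, data_x[base + c]);
        }
    }

    // Per-thread partial result only; see the subgroup reduction sketch
    // above for how the workgroup-wide max/sum would be formed.
    if (tid < ncols) {
        data_d[base + tid] = maxval;
    }
}
```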

Add a missing early return for out-of-bounds (OOB) rows. This happens when
there are more than 512 rows and the dispatch is 512 x H.
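
A minimal sketch of that guard, with a hypothetical row-index computation; the real shader's indexing and buffers may differ:

```glsl
#version 450

// Illustrative OOB guard only, not the actual shader.
layout(local_size_x = 32) in;

layout(binding = 0) readonly  buffer X { float data_x[]; };
layout(binding = 1) writeonly buffer D { float data_d[]; };
layout(push_constant) uniform PC { uint ncols; uint nrows; } p;

void main() {
    // With a 512 x H dispatch covering more than 512 rows, the flattened
    // workgroup index can land past the last row. `row` is uniform across
    // the workgroup, so the whole group returns together, which keeps any
    // later barriers in uniform control flow.
    const uint row = gl_WorkGroupID.y * 512u + gl_WorkGroupID.x;
    if (row >= p.nrows) {
        return;
    }

    const uint base = row * p.ncols;
    for (uint c = gl_LocalInvocationID.x; c < p.ncols; c += gl_WorkGroupSize.x) {
        data_d[base + c] = data_x[base + c];  // placeholder body
    }
}
```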

* vulkan: Further soft_max optimizations

Restore the workgroup-size-512 case, and use it for sizes >1024.

Use unrollable loops for more iteration counts.
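
Below is an illustrative sketch of what a workgroup-size-512 variant could look like: a classic shared-memory tree reduction whose trip count is a compile-time constant, so it unrolls fully. The WG_SIZE define and the host-side selection rule (this variant for sizes >1024, subgroup-sized variants otherwise) are assumptions for the example, not the actual implementation:

```glsl
#version 450
#extension GL_EXT_control_flow_attributes : require

// Illustrative large-row variant: 512 threads plus a shared-memory
// reduction. A hypothetical host-side rule would dispatch this shader for
// rows with >1024 columns and the subgroup-sized variants for smaller rows.
#define WG_SIZE 512

layout(local_size_x = WG_SIZE) in;

layout(binding = 0) readonly  buffer X { float data_x[]; };
layout(binding = 1) writeonly buffer D { float data_d[]; };
layout(push_constant) uniform PC { uint ncols; } p;

shared float tmp[WG_SIZE];

void main() {
    const uint tid  = gl_LocalInvocationID.x;
    const uint base = gl_WorkGroupID.x * p.ncols;

    float maxval = -1.0e30;
    for (uint c = tid; c < p.ncols; c += WG_SIZE) {
        maxval = max(maxval, data_x[base + c]);
    }
    tmp[tid] = maxval;
    barrier();

    // Tree reduction over shared memory. The trip count is statically
    // known (log2(WG_SIZE) steps), so the [[unroll]] hint lets the
    // compiler unroll it completely.
    [[unroll]] for (uint s = WG_SIZE / 2; s > 0; s >>= 1) {
        if (tid < s) {
            tmp[tid] = max(tmp[tid], tmp[tid + s]);
        }
        barrier();
    }
    maxval = tmp[0];

    // The exp-sum and normalization passes would follow the same pattern
    // (elided here).
    if (tid == 0) {
        data_d[gl_WorkGroupID.x] = maxval;  // placeholder output
    }
}
```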
ggml-amx ggml : adapt AMX to tensor->grad removal (#0) 2024-11-17 08:30:29 +02:00
ggml-blas ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-cann ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-cpu ggml : fix undefined reference to 'getcpu' (#10354) 2024-11-17 10:39:22 +02:00
ggml-cuda cuda : only use native when supported by cmake (#10389) 2024-11-18 18:43:40 +01:00
ggml-hip CUDA: remove DMMV, consolidate F16 mult mat vec (#10318) 2024-11-17 09:09:55 +01:00
ggml-kompute ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-metal metal : refactor kernel args into structs (#10238) 2024-11-17 11:23:01 +02:00
ggml-musa CUDA: remove DMMV, consolidate F16 mult mat vec (#10318) 2024-11-17 09:09:55 +01:00
ggml-rpc ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-sycl sycl: Revert MUL_MAT_OP support changes (#10385) 2024-11-19 08:50:04 +08:00
ggml-vulkan vulkan: Optimize soft_max (#10301) 2024-11-19 08:25:17 +01:00
CMakeLists.txt ggml: new optimization interface (ggml/988) 2024-11-17 08:30:29 +02:00
ggml-aarch64.c ggml : optimize Q4_0 into Q4_0_X_Y repack (#10324) 2024-11-16 01:53:37 +01:00
ggml-aarch64.h ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-alloc.c ggml: new optimization interface (ggml/988) 2024-11-17 08:30:29 +02:00
ggml-backend-impl.h llama : refactor model loader with backend registry (#10026) 2024-10-30 02:01:23 +01:00
ggml-backend-reg.cpp ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-backend.cpp llama : only use default buffer types for the KV cache (#10358) 2024-11-17 12:25:45 +01:00
ggml-common.h ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151) 2024-09-05 21:48:47 -04:00
ggml-impl.h ggml: new optimization interface (ggml/988) 2024-11-17 08:30:29 +02:00
ggml-opt.cpp ggml : inttypes.h -> cinttypes (#0) 2024-11-17 08:30:29 +02:00
ggml-quants.c ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-quants.h ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-threading.cpp ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-threading.h ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml.c ggml : fix compile warnings (#0) 2024-11-17 08:30:29 +02:00