llama.cpp/ggml
Jeff Bolz a9f7541ec2
vulkan: optimizations for direct convolution (#14933)
* vulkan: optimizations for direct convolution

- Empirically choose a better tile size. Reducing BS_K/BS_NPQ helps fill
  the GPU. The new size should be amenable to using coopmat, too.
- Fix shmem bank conflicts. 16B padding should work with coopmat.
- Some explicit loop unrolling.
- Skip math and stores for the parts of the tile that are out of bounds.
- Apply the fastdiv optimization (see the sketch after this list).
- Disable shuffles for NV.
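
A minimal sketch of the fastdiv technique referenced above, written as host-side C++; the names (fastdiv_vals, fastdiv_init) and the exact formulation are illustrative, not the shader's actual code. The idea is to replace integer division/modulo by a divisor that is fixed for the whole dispatch with a precomputed multiply-high and shift:

    #include <cassert>
    #include <cstdint>

    // Magic multiplier/shift pair so that n / d and n % d need no hardware
    // divide; d must be >= 1 and known before the kernel runs.
    struct fastdiv_vals {
        uint32_t mp; // magic multiplier
        uint32_t L;  // shift amount, ceil(log2(d))
        uint32_t d;  // original divisor, kept for the remainder
    };

    static fastdiv_vals fastdiv_init(uint32_t d) {
        uint32_t L = 0;
        while (L < 32 && (uint64_t(1) << L) < d) ++L;
        const uint32_t mp = uint32_t(((uint64_t(1) << 32) * ((uint64_t(1) << L) - d)) / d + 1);
        return { mp, L, d };
    }

    static uint32_t fastdiv(uint32_t n, const fastdiv_vals & v) {
        const uint32_t hi = uint32_t((uint64_t(n) * v.mp) >> 32); // mul-hi
        return uint32_t((uint64_t(hi) + n) >> v.L);               // == n / v.d
    }

    static uint32_t fastmod(uint32_t n, const fastdiv_vals & v) {
        return n - fastdiv(n, v) * v.d;                           // == n % v.d
    }

    int main() {
        const fastdiv_vals v = fastdiv_init(48); // arbitrary example divisor
        for (uint32_t n = 0; n < 1000000; ++n) {
            assert(fastdiv(n, v) == n / 48 && fastmod(n, v) == n % 48);
        }
        return 0;
    }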

* Three tile sizes for CONV_2D, and a heuristic to choose between them
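
The heuristic itself is not spelled out in the commit message; as a purely hypothetical illustration of the idea (function name and tile dimensions invented for this sketch), choosing between three CONV_2D tile sizes could mean picking the largest tile that still launches enough workgroups to occupy the GPU:

    #include <cstdint>

    struct conv2d_tile { uint32_t bs_k, bs_npq; }; // output-channel / N*P*Q block sizes

    // Hypothetical selection: prefer big tiles for big problems, fall back to
    // smaller tiles so small outputs still spread across all shader cores.
    static conv2d_tile choose_conv2d_tile(uint32_t K, uint32_t NPQ, uint32_t shader_core_count) {
        static const conv2d_tile tiles[3] = { {128, 128}, {64, 64}, {32, 32} }; // illustrative values
        for (const conv2d_tile & t : tiles) {
            const uint64_t workgroups =
                uint64_t((K + t.bs_k - 1) / t.bs_k) * ((NPQ + t.bs_npq - 1) / t.bs_npq);
            if (workgroups >= shader_core_count) {
                return t; // enough parallelism to fill the GPU
            }
        }
        return tiles[2]; // tiny problem: use the smallest tile
    }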

* reallow collectives for pre-Turing

* make SHMEM_PAD a spec constant
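
For reference, the Vulkan mechanism this relies on, as a minimal host-side sketch (not the actual ggml-vulkan plumbing; the constant_id of 0 and the helper name are assumptions): a specialization constant lets the padding value be baked in at pipeline creation, so the bank-conflict padding can be kept on some GPUs and set to zero on others without a separate shader source.

    #include <vulkan/vulkan.h>
    #include <cstdint>

    // Builds the VkSpecializationInfo that feeds SHMEM_PAD to the shader,
    // where the shader would declare e.g.
    //   layout(constant_id = 0) const uint SHMEM_PAD = 16;
    // The result is assigned to VkPipelineShaderStageCreateInfo::pSpecializationInfo
    // when the compute pipeline is created.
    static VkSpecializationInfo make_shmem_pad_spec(const uint32_t * shmem_pad) {
        static const VkSpecializationMapEntry entry = {
            0,                // constantID (must match the shader's constant_id)
            0,                // offset into pData
            sizeof(uint32_t), // size of the constant
        };
        VkSpecializationInfo info = {};
        info.mapEntryCount = 1;
        info.pMapEntries   = &entry;
        info.dataSize      = sizeof(uint32_t);
        info.pData         = shmem_pad;
        return info;
    }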

* fixes for Intel perf - no shmem padding, placeholder shader core count

* shader variants with/without unrolling

* 0cc4m's fixes for AMD perf

Co-authored-by: 0cc4m <picard12@live.de>
2025-08-02 09:57:04 +02:00
cmake cmake : Fix BLAS link interface (ggml/1316) 2025-07-30 17:33:11 +03:00
include ggml: Add initial WebGPU backend (#14521) 2025-07-16 18:18:51 +03:00
src vulkan: optimizations for direct convolution (#14933) 2025-08-02 09:57:04 +02:00
.gitignore vulkan : cmake integration (#8119) 2024-07-13 18:12:39 +02:00
CMakeLists.txt HIP: add GGML_HIP_MMQ_MFMA option to allow disabling the MFMA path. (#14930) 2025-07-29 17:44:30 +02:00