llama.cpp/ggml/src
Latest commit cf2270e4d3 by Daniele (2025-03-17 12:42:33 +01:00):

vulkan: subgroup size tuning (#12087)

* vulkan: subgroup size test
* Vulkan: Add device architecture enum and logic to recognize AMD generations
* vulkan: use new architecture logic to specify subgroup size
* Initial vulkan subgroup size tuning for RDNA3
* vulkan: commonize RDNA subgroup tuning
* vulkan: override subgroup size if required_subgroup_size = 0
* vulkan: disable warp 32 for RDNA3
* vulkan: fine tuned RDNA1 subgroup sizes
* vulkan: adjusted subgroup size map
* vulkan: fixed RDNA2 subgroup map

Co-authored-by: 0cc4m <picard12@live.de>
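The squashed commits above outline a three-part mechanism: classify the GPU into an architecture generation, look up a tuned subgroup size for it, and force that size at pipeline creation unless the requested value is 0. The C++ sketch below illustrates those pieces against the standard VK_EXT_subgroup_size_control extension; the enum members, the name-based generation heuristic, and the concrete sizes are illustrative assumptions, not the PR's actual tables.

```cpp
// A minimal sketch (C++ / Vulkan C API), not the actual ggml-vulkan code from
// the PR: an architecture enum, a stand-in AMD-generation heuristic, a
// per-generation subgroup-size map, and the VK_EXT_subgroup_size_control
// plumbing that forces a size only when it is non-zero.
#include <cstdint>
#include <string>
#include <vulkan/vulkan.h>

enum class vk_device_architecture { OTHER, AMD_GCN, AMD_RDNA1, AMD_RDNA2, AMD_RDNA3 };

// Stand-in heuristic: pattern-match the reported device name. A real
// implementation would inspect driver IDs and device properties instead.
static vk_device_architecture classify_device(const VkPhysicalDeviceProperties & props) {
    if (props.vendorID != 0x1002) return vk_device_architecture::OTHER; // 0x1002 = AMD
    const std::string name = props.deviceName;
    if (name.find("RX 7") != std::string::npos) return vk_device_architecture::AMD_RDNA3;
    if (name.find("RX 6") != std::string::npos) return vk_device_architecture::AMD_RDNA2;
    if (name.find("RX 5") != std::string::npos) return vk_device_architecture::AMD_RDNA1;
    return vk_device_architecture::AMD_GCN;
}

// Illustrative size map. 0 means "no override": per the commit message, the
// subgroup size is only forced when required_subgroup_size is non-zero.
static uint32_t required_subgroup_size(vk_device_architecture arch) {
    switch (arch) {
        case vk_device_architecture::AMD_RDNA1:
        case vk_device_architecture::AMD_RDNA2: return 32; // wave32 on RDNA1/2
        case vk_device_architecture::AMD_RDNA3: return 64; // "disable warp 32 for RDNA3"
        default:                                return 0;  // leave the driver default
    }
}

// Chain a required subgroup size into a shader stage. VK_EXT_subgroup_size_control
// must be enabled, and the size must be a power of two within the device's
// reported min/max subgroup size. size == 0 leaves the stage untouched.
static void set_stage_subgroup_size(VkPipelineShaderStageCreateInfo & stage,
        VkPipelineShaderStageRequiredSubgroupSizeCreateInfoEXT & req, uint32_t size) {
    if (size == 0) return;
    req = {};
    req.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_REQUIRED_SUBGROUP_SIZE_CREATE_INFO_EXT;
    req.requiredSubgroupSize = size;
    req.pNext = const_cast<void *>(stage.pNext);
    stage.pNext = &req;
}
```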
Name | Last commit | Date
ggml-blas/ | ggml : add support for dynamic loading of backends (#10469) | 2024-11-25 15:13:39 +01:00
ggml-cann/ | [CANN]MUL_MAT optimization (#12382) | 2025-03-15 09:31:08 +08:00
ggml-cpu/ | ggml : ggml_compute_forward_concat() for arbitrary tensor type (ggml/1118) | 2025-03-07 14:49:44 +02:00
ggml-cuda/ | CUDA/HIP: Fix fattn-vec-* when device warp size is not 32 (#12315) | 2025-03-12 10:14:11 +01:00
ggml-hip/ | HIP: implement FlashAttention via rocWMMA for CDNA and RDNA3+ (#12032) | 2025-03-03 22:10:54 +01:00
ggml-kompute/ | llama : add Qwen2VL support + multimodal RoPE (#10361) | 2024-12-14 14:43:46 +02:00
ggml-metal/ | metal : Cache the Metal library at the device context level (#12265) | 2025-03-11 13:45:02 +02:00
ggml-musa/ | musa: support new arch mp_31 and update doc (#12296) | 2025-03-10 18:18:25 +01:00
ggml-opencl/ | opencl: use OpenCL C standard supported by the device (#12221) | 2025-03-10 09:57:00 -07:00
ggml-rpc/ | ggml : upgrade init_tensor API to return a ggml_status (#11854) | 2025-02-28 14:41:47 +01:00
ggml-sycl/ | SYCL: set extras only on GGML_TYPE_Q4_0 (#12366) | 2025-03-17 09:45:12 +08:00
ggml-vulkan/ | vulkan: subgroup size tuning (#12087) | 2025-03-17 12:42:33 +01:00
CMakeLists.txt | cmake : enable building llama.cpp using system libggml (#12321) | 2025-03-17 11:05:23 +02:00
ggml-alloc.c | ggml : upgrade init_tensor API to return a ggml_status (#11854) | 2025-02-28 14:41:47 +01:00
ggml-backend-impl.h | ggml : upgrade init_tensor API to return a ggml_status (#11854) | 2025-02-28 14:41:47 +01:00
ggml-backend-reg.cpp | ggml-backend : fix backend search path (#12330) | 2025-03-11 14:25:17 +01:00
ggml-backend.cpp | ggml : portability fixes for VS 2017 (#12150) | 2025-03-04 18:53:26 +02:00
ggml-common.h | CUDA: use arch list for compatibility check (#11775) | 2025-02-11 00:17:22 +01:00
ggml-impl.h | MUSA: support ARM64 and enable dp4a .etc (#11843) | 2025-02-21 09:46:23 +02:00
ggml-opt.cpp | ggml-opt: fix data corruption (ggml/1022) | 2024-11-21 09:22:02 +02:00
ggml-quants.c | ggml : portability fixes for VS 2017 (#12150) | 2025-03-04 18:53:26 +02:00
ggml-quants.h | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00
ggml-threading.cpp | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00
ggml-threading.h | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00
ggml.c | ggml : ggml_compute_forward_concat() for arbitrary tensor type (ggml/1118) | 2025-03-07 14:49:44 +02:00
gguf.cpp | cmake : add sanitizer flags for llama.cpp (#11279) | 2025-01-18 16:18:15 +02:00