llama.cpp/ggml
Latest commit 0e6ff0046f by Johannes Gäßler, 2025-09-11 21:19:58 +02:00:
CUDA: larger SRAM reads for tile FA, AMD FP16 dot (#15927)
* fix logic for availability of v_dot2_f32_f16
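The "larger SRAM reads" half of the commit title refers to widening shared-memory ("SRAM") loads in the tile FlashAttention kernel so that each load instruction moves more bytes. Below is a minimal sketch of that general technique, not the kernel's actual code; the names load_tile_row and TILE_KQ are hypothetical.

```cuda
// Hypothetical sketch (not the real ggml kernel): widen shared-memory reads
// by loading one 16-byte int4 per 8 half elements instead of 8 separate
// 2-byte loads, cutting the number of load instructions issued.
#include <cuda_fp16.h>

#define TILE_KQ 64 // illustrative tile width in half elements

__device__ void load_tile_row(const half * __restrict__ smem_row, half * __restrict__ dst_regs) {
    // Narrow version for comparison: TILE_KQ individual 2-byte loads.
    // for (int i = 0; i < TILE_KQ; ++i) dst_regs[i] = smem_row[i];

    // Wide version: TILE_KQ/8 16-byte loads. Requires smem_row and dst_regs
    // to be 16-byte aligned.
    const int4 * src = reinterpret_cast<const int4 *>(smem_row);
    int4       * dst = reinterpret_cast<int4       *>(dst_regs);
#pragma unroll
    for (int i = 0; i < TILE_KQ/8; ++i) {
        dst[i] = src[i];
    }
}
```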
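The "AMD FP16 dot" half, and the follow-up fix, concern v_dot2_f32_f16, an AMD instruction that fuses a two-way FP16 dot product into an FP32 accumulator; only some AMD architectures implement it, which is why the availability check needed fixing. A minimal sketch of the idea, not ggml's actual code: the V_DOT2_F32_F16_OK macro and the function name are hypothetical stand-ins, while __builtin_amdgcn_fdot2 is the real ROCm Clang builtin that lowers to this instruction.

```cuda
// Hypothetical sketch (not ggml's real code): accumulate an FP16 dot product
// two elements at a time. V_DOT2_F32_F16_OK stands in for the
// per-architecture availability check that the commit fixes.
#include <cuda_fp16.h> // hip/hip_fp16.h when building with hipcc

static __device__ __forceinline__ float dot2_f16_acc_f32(const half2 a, const half2 b, const float acc) {
#if defined(__HIP_PLATFORM_AMD__) && defined(V_DOT2_F32_F16_OK)
    // ROCm Clang builtin lowering to a single v_dot2_f32_f16 instruction:
    // returns acc + a.x*b.x + a.y*b.y (the last argument disables clamping).
    typedef _Float16 f16x2 __attribute__((ext_vector_type(2)));
    f16x2 av, bv;
    __builtin_memcpy(&av, &a, sizeof(av));
    __builtin_memcpy(&bv, &b, sizeof(bv));
    return __builtin_amdgcn_fdot2(av, bv, acc, false);
#else
    // Portable fallback: widen both FP16 lanes to FP32 and accumulate.
    const float2 af = __half22float2(a);
    const float2 bf = __half22float2(b);
    return acc + af.x*bf.x + af.y*bf.y;
#endif
}
```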
Name            Last commit message                                                        Last commit date
cmake           ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)  2025-08-07 13:45:41 +02:00
include         metal : make the backend async (#15906)                                    2025-09-10 17:52:35 +03:00
src             CUDA: larger SRAM reads for tile FA, AMD FP16 dot (#15927)                 2025-09-11 21:19:58 +02:00
.gitignore      vulkan : cmake integration (#8119)                                         2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml-cpu: drop support for nnpa intrinsics (#15821)                        2025-09-06 11:27:28 +08:00