llama.cpp/ggml

Latest commit defe2158dd by Johannes Gäßler (2025-06-23 13:11:31 +02:00):
CUDA: mul_mat_v support for batch sizes > 1 (#14262)

* CUDA: mul_mat_v support for batch sizes > 1
* use 64 bit math for initial offset calculation
Name            Last commit message                                            Date
cmake           ggml-cpu : rework weak alias on apple targets (#14146)         2025-06-16 13:54:15 +08:00
include         Add `ggml_roll` (ggml/1274)                                    2025-06-20 21:02:47 +03:00
src             CUDA: mul_mat_v support for batch sizes > 1 (#14262)           2025-06-23 13:11:31 +02:00
.gitignore      vulkan : cmake integration (#8119)                             2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : disable warnings for tests when using MSVC (ggml/1273)  2025-06-18 09:59:21 +03:00