llama.cpp/ggml
commit 4d0dcd4a06 by Georgi Gerganov, 2025-07-08 10:15:21 +03:00

cuda : fix rope with partial rotation and non-cont src (#14580)

* cuda : fix rope non-cont (ggml-ci)
* cont : fix multi-rope + add test (ggml-ci)
* sycl : try fix (ggml-ci)
* cont : fix sycl + clean-up cuda (ggml-ci)
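The commit above addresses ggml's RoPE operator for the case where only part of each head is rotated (n_dims smaller than the head size) and the source tensor is a non-contiguous view. The sketch below shows one way such a case can be constructed with the public ggml API; the shapes, the NEOX mode, and all parameter values are illustrative assumptions, not the actual test added in #14580.

```c
// Sketch: build a RoPE op with partial rotation over a non-contiguous view.
// Shapes and parameters are illustrative assumptions, not the PR's test case.
#include <stdio.h>
#include "ggml.h"

int main(void) {
    struct ggml_init_params params = {
        /* .mem_size   = */ 16 * 1024 * 1024,
        /* .mem_buffer = */ NULL,
        /* .no_alloc   = */ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    const int head_dim = 128, n_head = 8, n_tokens = 4;
    const int n_rot    = 64;  // partial rotation: only 64 of the 128 dims

    // Parent tensor with twice the heads; viewing half of it keeps the parent's
    // row strides, so the view is non-contiguous.
    struct ggml_tensor * big = ggml_new_tensor_3d(ctx, GGML_TYPE_F32,
                                                  head_dim, 2 * n_head, n_tokens);
    struct ggml_tensor * src = ggml_view_3d(ctx, big,
                                            head_dim, n_head, n_tokens,
                                            big->nb[1], big->nb[2], 0);

    // token positions (would be filled in before computing the graph)
    struct ggml_tensor * pos = ggml_new_tensor_1d(ctx, GGML_TYPE_I32, n_tokens);

    struct ggml_tensor * cur = ggml_rope_ext(ctx, src, pos, NULL,
                                             n_rot, GGML_ROPE_TYPE_NEOX, 0,
                                             10000.0f, 1.0f, 0.0f, 1.0f, 0.0f, 0.0f);

    printf("rope out: %lld x %lld x %lld, src contiguous: %d\n",
           (long long) cur->ne[0], (long long) cur->ne[1], (long long) cur->ne[2],
           (int) ggml_is_contiguous(src));

    ggml_free(ctx);
    return 0;
}
```

Creating the op only defines a graph node; to exercise the fixed CUDA (or SYCL) kernel path, the graph would then be built and computed on that backend, which is what the backend test suite does for cases like this.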
Name            Last commit                                                      Date
cmake           ggml-cpu : rework weak alias on apple targets (#14146)           2025-06-16 13:54:15 +08:00
include         CUDA: add bilinear interpolation for upscale (#14563)            2025-07-08 10:11:18 +08:00
src             cuda : fix rope with partial rotation and non-cont src (#14580)  2025-07-08 10:15:21 +03:00
.gitignore      vulkan : cmake integration (#8119)                               2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : remove kompute backend (#14501)                           2025-07-03 07:48:32 +03:00