llama.cpp/ggml
Latest commit: vulkan: handle noncontig in the final case of ggml_vk_get_cpy_pipeline (#14378), Jeff Bolz, 63a7bb3c7e, 2025-06-28 17:36:40 +02:00
Name            Last commit message                                                              Date
cmake           ggml-cpu : rework weak alias on apple targets (#14146)                           2025-06-16 13:54:15 +08:00
include         ggml : add ggml_set_rows (#14274)                                                2025-06-27 16:41:40 +03:00
src             vulkan: handle noncontig in the final case of ggml_vk_get_cpy_pipeline (#14378)  2025-06-28 17:36:40 +02:00
.gitignore      vulkan : cmake integration (#8119)                                               2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml-cpu: enable IBM NNPA Vector Intrinsics (#14317)                             2025-06-25 23:49:04 +02:00