llama.cpp/ggml

Latest commit: CUDA: add fused rms norm (#14800) by Aman Gupta (8c988fa41d), 2025-07-23 09:25:42 +08:00
Name            Last commit                                              Date
cmake/          ggml-cpu : rework weak alias on apple targets (#14146)   2025-06-16 13:54:15 +08:00
include/        ggml: Add initial WebGPU backend (#14521)                2025-07-16 18:18:51 +03:00
src/            CUDA: add fused rms norm (#14800)                        2025-07-23 09:25:42 +08:00
.gitignore      vulkan : cmake integration (#8119)                       2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml: Add initial WebGPU backend (#14521)                2025-07-16 18:18:51 +03:00
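For context on the head commit: "fused rms norm" generally means computing the RMS normalization y_i = x_i * w_i / sqrt(mean(x^2) + eps) and the elementwise multiply by the weight tensor w in a single CUDA kernel, so the normalized intermediate never makes a round trip through global memory. The sketch below illustrates that fusion idea only; it is not the kernel added in #14800, and the names fused_rms_norm, ncols, and eps are assumptions for the example.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Illustrative fused RMS-norm kernel (not llama.cpp's actual code):
// one block per row; normalization and the weight multiply happen in
// the same kernel instead of two separate launches.
// Assumes blockDim.x == 256 (a power of two, matching buf[] below).
__global__ void fused_rms_norm(const float *x, const float *w, float *y,
                               int ncols, float eps) {
    const int row = blockIdx.x;
    const float *xr = x + (size_t)row * ncols;
    float       *yr = y + (size_t)row * ncols;

    // Each thread accumulates a partial sum of squares over a strided slice.
    float sumsq = 0.0f;
    for (int i = threadIdx.x; i < ncols; i += blockDim.x) {
        const float v = xr[i];
        sumsq += v * v;
    }

    // Block-wide tree reduction in shared memory.
    __shared__ float buf[256];
    buf[threadIdx.x] = sumsq;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s) buf[threadIdx.x] += buf[threadIdx.x + s];
        __syncthreads();
    }

    const float scale = rsqrtf(buf[0] / (float)ncols + eps);

    // Normalize and apply the weight in one pass: this is the fused step
    // that an unfused pipeline would split across two kernels.
    for (int i = threadIdx.x; i < ncols; i += blockDim.x) {
        yr[i] = xr[i] * scale * w[i];
    }
}

// Launch sketch: one block per row, 256 threads per block.
// fused_rms_norm<<<nrows, 256>>>(d_x, d_w, d_y, ncols, 1e-6f);
```

The win from fusion is bandwidth: an unfused sequence writes the normalized rows to global memory and reads them back for the multiply, while the fused kernel keeps that traffic on-chip.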