llama.cpp/ggml/src
Taimur Ahmad fd94e4cdca ggml-cpu: add repack GEMM and GEMV for floating-point (#4) 2026-04-02 22:17:28 +05:00
ggml-blas ggml-blas: set mkl threads from thread context (#20602) 2026-03-18 01:16:49 +08:00
ggml-cann CANN: fix multi-thread set_tensor race conditions (#20151) 2026-03-31 17:00:51 +03:00
ggml-cpu ggml-cpu: add repack GEMM and GEMV for floating-point (#4) 2026-04-02 22:17:28 +05:00
ggml-cuda CUDA: fix FA kernel selection logic (#21271) 2026-04-01 22:28:19 +03:00
ggml-hexagon hexagon : add cumsum op support (#21246) 2026-04-01 17:44:02 -07:00
ggml-hip ggml-cuda: native bf16 flash attention for vec kernel (#20525) 2026-03-22 11:05:51 +01:00
ggml-metal metal : Fix dimension constraint violation in matmul2d descriptor (#21048) 2026-03-27 09:05:21 +02:00
ggml-musa ggml-cuda: native bf16 flash attention for vec kernel (#20525) 2026-03-22 11:05:51 +01:00
ggml-opencl opencl: fix leak in Adreno q8_0 path (#21212) 2026-04-01 12:54:58 -07:00
ggml-openvino fix(openvino): explicit memset in buffer_context allocation (#20857) 2026-03-23 08:05:37 +02:00
ggml-rpc rpc : fix misleading error log (#21184) 2026-03-30 17:05:11 +03:00
ggml-sycl sycl : fix llama_kv_cache hang when kv_cache is huge: 5GB (#21283) 2026-04-02 10:08:32 +03:00
ggml-virtgpu ggml-virtgpu: improve the reliability of the code (#19846) 2026-02-26 20:00:57 +08:00
ggml-vulkan vulkan: add noncontiguous GLU support (#21081) 2026-03-28 08:44:56 +01:00
ggml-webgpu ggml webgpu: quantized buffers to u32 + wider browser/device support (#21046) 2026-04-01 08:38:24 +03:00
ggml-zdnn
ggml-zendnn ggml-zendnn: update code for latest ZenDNN API (#19923) 2026-02-27 08:43:41 +08:00
CMakeLists.txt ggml : add OpenVINO backend (#15307) 2026-03-14 07:56:55 +02:00
ggml-alloc.c ggml : make `ggml_is_view` as API (#19539) 2026-02-16 17:43:34 +02:00
ggml-backend-dl.cpp
ggml-backend-dl.h
ggml-backend-impl.h
ggml-backend-reg.cpp ggml : add OpenVINO backend (#15307) 2026-03-14 07:56:55 +02:00
ggml-backend.cpp llama : disable graph reuse with pipeline parallelism (#20463) 2026-03-12 21:04:13 +02:00
ggml-common.h ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
ggml-impl.h llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00
ggml-opt.cpp
ggml-quants.c ggml : guard against sumq2 being 0 in IQ4_NL (#20460) 2026-03-15 10:47:28 +02:00
ggml-quants.h ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
ggml-threading.cpp
ggml-threading.h
ggml.c mtmd: Add DeepSeekOCR Support (#17400) 2026-03-25 19:57:40 +01:00
ggml.cpp
gguf.cpp llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00