llama.cpp/ggml/src
Latest commit: Georgi Gerganov 5e9c635463 — metal : add missing mm-id specializations for q1_0 (#21662), 2026-04-09 10:54:00 +03:00
ggml-blas ggml-blas: set mkl threads from thread context (#20602) 2026-03-18 01:16:49 +08:00
ggml-cann CANN: fix multi-thread set_tensor race conditions (#20151) 2026-03-31 17:00:51 +03:00
ggml-cpu ggml: add Q1_0 1-bit quantization support (CPU) (#21273) 2026-04-06 20:55:21 +02:00
ggml-cuda CUDA: also store `node->src->data` ptrs for equality check (#21635) 2026-04-09 01:01:56 +08:00
ggml-hexagon hexagon: slight optimization for argsort output init (#21463) 2026-04-05 18:30:25 -07:00
ggml-hip ggml-cuda: native bf16 flash attention for vec kernel (#20525) 2026-03-22 11:05:51 +01:00
ggml-metal metal : add missing mm-id specializations for q1_0 (#21662) 2026-04-09 10:54:00 +03:00
ggml-musa ggml-cuda: native bf16 flash attention for vec kernel (#20525) 2026-03-22 11:05:51 +01:00
ggml-opencl opencl: fix leak in Adreno q8_0 path (#21212) 2026-04-01 12:54:58 -07:00
ggml-openvino fix(openvino): explicit memset in buffer_context allocation (#20857) 2026-03-23 08:05:37 +02:00
ggml-rpc rpc : reuse compute graph buffers (#21299) 2026-04-03 10:28:09 +03:00
ggml-sycl sycl : add flash-attn support for head size 512 (#21654) 2026-04-09 09:36:48 +03:00
ggml-virtgpu ggml-virtgpu: improve the reliability of the code (#19846) 2026-02-26 20:00:57 +08:00
ggml-vulkan vulkan: unify type macros to use Vx instead of _VECx (#21605) 2026-04-09 07:31:51 +02:00
ggml-webgpu webgpu : Query for adapter support when registering WebGPU backend (#21579) 2026-04-08 16:08:29 +03:00
ggml-zdnn ggml-zdnn : mark zDNN buffers as non-host (#18967) 2026-01-22 01:16:21 +01:00
ggml-zendnn ggml-zendnn : add MUL_MAT_ID op support for MoE models (#21315) 2026-04-03 12:19:08 +03:00
CMakeLists.txt ggml : add OpenVINO backend (#15307) 2026-03-14 07:56:55 +02:00
ggml-alloc.c ggml : make `ggml_is_view` as API (#19539) 2026-02-16 17:43:34 +02:00
ggml-backend-dl.cpp hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) 2026-01-29 12:33:21 -08:00
ggml-backend-dl.h hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) 2026-01-29 12:33:21 -08:00
ggml-backend-impl.h llama: use host memory if device reports 0 memory (#18587) 2026-01-09 05:34:56 +08:00
ggml-backend-reg.cpp ggml : add OpenVINO backend (#15307) 2026-03-14 07:56:55 +02:00
ggml-backend.cpp llama : disable graph reuse with pipeline parallelism (#20463) 2026-03-12 21:04:13 +02:00
ggml-common.h ggml: add Q1_0 1-bit quantization support (CPU) (#21273) 2026-04-06 20:55:21 +02:00
ggml-impl.h llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00
ggml-opt.cpp fix: free ctx_copy in ggml_opt_free to plug per-training-session leak (#21592) 2026-04-08 17:40:15 +02:00
ggml-quants.c ggml: add Q1_0 1-bit quantization support (CPU) (#21273) 2026-04-06 20:55:21 +02:00
ggml-quants.h ggml: add Q1_0 1-bit quantization support (CPU) (#21273) 2026-04-06 20:55:21 +02:00
ggml-threading.cpp
ggml-threading.h
ggml.c ggml: add Q1_0 1-bit quantization support (CPU) (#21273) 2026-04-06 20:55:21 +02:00
ggml.cpp
gguf.cpp llama: fix llama-model-saver (#20503) 2026-03-25 12:53:16 +02:00