llama.cpp/ggml/src
Latest commit: Merge 28e07aad92 into b83111815e by Taimur Ahmad (abf1277be7), 2026-02-06 22:20:41 +02:00
ggml-blas/           | ggml : add ggml_build_forward_select (#18550) | 2026-01-19 20:03:19 +02:00
ggml-cann/           | docs : Minor cleanups (#19252) | 2026-02-02 08:38:55 +02:00
ggml-cpu/            | Merge 28e07aad92 into b83111815e | 2026-02-06 22:20:41 +02:00
ggml-cuda/           | cuda : cuda graphs now compare all node params (#19383) | 2026-02-06 07:55:06 +02:00
ggml-hexagon/        | ggml-hexagon: flash-attention and reduce-sum optimizations (#19141) | 2026-01-30 21:14:20 -08:00
ggml-hip/            | HIP: add mmf for CDNA (#18896) | 2026-01-29 11:10:53 +01:00
ggml-metal/          | metal : skip loading all-zero mask (#19337) | 2026-02-06 09:25:11 +02:00
ggml-musa/           | CUDA: faster tile FA, add oob checks, more HSs (#16492) | 2025-10-11 20:54:32 +02:00
ggml-opencl/         | opencl: refactor some ops, concat, repeat, tanh and scale (#19226) | 2026-02-02 15:54:43 -08:00
ggml-rpc/            | rpc : use unordered_map::reserve and emplace (#18513) | 2026-01-02 12:09:36 +02:00
ggml-sycl/           | sycl: add F16 support for GGML_OP_CEIL (#19306) | 2026-02-06 23:13:44 +08:00
ggml-virtgpu/        | ggml-virtgpu: make the code thread safe (#19204) | 2026-02-04 10:46:18 +08:00
ggml-vulkan/         | vulkan: For coopmat2 FA, use fp16 accumulators for the final result (#19376) | 2026-02-06 09:15:13 +01:00
ggml-webgpu/         | ggml-webgpu: JIT compile binary operators and handle binding overlaps (#19310) | 2026-02-06 10:33:30 -08:00
ggml-zdnn/           | ggml-zdnn : mark zDNN buffers as non-host (#18967) | 2026-01-22 01:16:21 +01:00
ggml-zendnn/         | ggml-zendnn : resolve ZenDNN backend cross-module symbol dependency (#19159) | 2026-01-29 12:28:57 +08:00

CMakeLists.txt       | hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) | 2026-01-29 12:33:21 -08:00
ggml-alloc.c         | llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) | 2025-12-15 09:24:59 +01:00
ggml-backend-dl.cpp  | hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) | 2026-01-29 12:33:21 -08:00
ggml-backend-dl.h    | hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) | 2026-01-29 12:33:21 -08:00
ggml-backend-impl.h  | llama: use host memory if device reports 0 memory (#18587) | 2026-01-09 05:34:56 +08:00
ggml-backend-reg.cpp | hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) | 2026-01-29 12:33:21 -08:00
ggml-backend.cpp     | ggml-backend: fix async set/get fallback sync (#19179) | 2026-02-02 10:00:05 +01:00
ggml-common.h        | |
ggml-impl.h          | ggml : add ggml_build_forward_select (#18550) | 2026-01-19 20:03:19 +02:00
ggml-opt.cpp         | |
ggml-quants.c        | ggml : fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl (#15928) | 2025-09-23 10:25:20 +02:00
ggml-quants.h        | |
ggml-threading.cpp   | |
ggml-threading.h     | |
ggml.c               | ggml: added cleanups in ggml_quantize_free (#19278) | 2026-02-03 08:43:39 +02:00
ggml.cpp             | |
gguf.cpp             | GGUF: check that tensor size is representable (#19072) | 2026-01-24 21:57:51 +01:00