llama.cpp/ggml/src

Latest commit: 046d5fd44e by Aaron Teo, "llama: use host memory if device reports 0 memory" (#18587), 2026-01-09 05:34:56 +08:00
Name                  Last commit date            Last commit message
ggml-blas             2025-09-29 17:43:58 +03:00  sync : whisper.cpp (ggml/1359)
ggml-cann             2026-01-08 11:03:21 +02:00  ggml: add env var GGML_OP_OFFLOAD_MIN_BATCH (#18535)
ggml-cpu              2025-12-30 14:04:53 +02:00  kleidiai: add and integrate SVE 256-bit vector-length kernel (#18458)
ggml-cuda             2026-01-08 11:03:21 +02:00  ggml: add env var GGML_OP_OFFLOAD_MIN_BATCH (#18535)
ggml-hexagon          2026-01-06 17:38:29 -08:00  Hexagon add support for f16/f32 flash attention, scale, set-rows and improve f16/32 matmul (#18611)
ggml-hip              2025-10-27 21:39:49 +01:00  HIP: fix AMDGPU_TARGETS, update documentation (#16803)
ggml-metal            2026-01-08 12:37:45 +02:00  metal : add MoE kernel specialization for ne20=5 (#18667)
ggml-musa             2025-10-11 20:54:32 +02:00  CUDA: faster tile FA, add oob checks, more HSs (#16492)
ggml-opencl           2026-01-09 05:34:56 +08:00  llama: use host memory if device reports 0 memory (#18587)
ggml-rpc              2026-01-02 12:09:36 +02:00  rpc : use unordered_map::reserve and emplace (#18513)
ggml-sycl             2026-01-08 11:03:21 +02:00  ggml: add env var GGML_OP_OFFLOAD_MIN_BATCH (#18535)
ggml-vulkan           2026-01-08 15:40:58 +01:00  vulkan: fix push constant size for quantize_q8_1 (#18687)
ggml-webgpu           2026-01-08 08:23:39 -08:00  ggml webgpu: initial flashattention implementation (#18610)
ggml-zdnn             2025-09-23 14:53:05 +08:00  zdnn: refactor codebase + add docs (#16178)
ggml-zendnn           2025-12-07 00:13:33 +08:00  ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690)
CMakeLists.txt        2025-12-30 14:04:53 +02:00  kleidiai: add and integrate SVE 256-bit vector-length kernel (#18458)
ggml-alloc.c          2025-12-15 09:24:59 +01:00  llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653)
ggml-backend-impl.h   2026-01-09 05:34:56 +08:00  llama: use host memory if device reports 0 memory (#18587)
ggml-backend-reg.cpp  2025-12-07 00:13:33 +08:00  ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690)
ggml-backend.cpp      2026-01-01 08:58:27 +01:00  vulkan: extend topk_moe to handle sigmoid w/exp_probs_b for nemotron (#18295)
ggml-common.h         2025-08-05 22:10:36 +03:00  llama : add gpt-oss (#15091)
ggml-impl.h           2025-12-28 09:33:29 +02:00  cmake: Added more x86_64 CPU backends when building with `GGML_CPU_ALL_VARIANTS=On` (#18186)
ggml-opt.cpp          2025-08-14 12:03:57 +02:00  finetune: SGD optimizer, more CLI args (#13873)
ggml-quants.c         2025-09-23 10:25:20 +02:00  ggml : fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl (#15928)
ggml-quants.h         2025-08-05 22:10:36 +03:00  llama : add gpt-oss (#15091)
ggml-threading.cpp    2024-11-14 18:04:35 +01:00  ggml : build backends as libraries (#10256)
ggml-threading.h      2024-12-12 19:02:49 +01:00  remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797)
ggml.c                2026-01-06 08:54:10 +02:00  ggml : fix avx512bf16 build (#18623)
ggml.cpp              2025-06-01 13:43:57 +03:00  ggml : Print backtrace on uncaught C++ exceptions (ggml/1232)
gguf.cpp              2025-12-03 07:12:18 +01:00  ggml, llama : use defaulted constructors/destructors (#17649)
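Note on the latest commit: #18587 ("llama: use host memory if device reports 0 memory") touches ggml-backend-impl.h and ggml-opencl above. As a hedged illustration of the device-memory query involved, the sketch below enumerates the registered devices through the public API declared in ggml-backend.h and prints the free/total memory each one reports; the probe itself is hypothetical and not a file in this tree.

// probe.c (hypothetical, for illustration): list every registered ggml device
// and the memory it reports via the public API in ggml-backend.h.
// Example build, assuming an installed ggml: cc probe.c -lggml -lggml-base
#include <stdio.h>
#include "ggml-backend.h"

int main(void) {
    ggml_backend_load_all(); // pick up dynamically loaded backends, if any

    for (size_t i = 0; i < ggml_backend_dev_count(); i++) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(i);
        size_t free = 0, total = 0;
        // per the commit title, llama.cpp falls back to host memory
        // when a device reports 0 memory here
        ggml_backend_dev_memory(dev, &free, &total);
        printf("%-12s %-40s free: %6zu MiB  total: %6zu MiB\n",
               ggml_backend_dev_name(dev),
               ggml_backend_dev_description(dev),
               free >> 20, total >> 20);
    }
    return 0;
}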