llama.cpp/ggml/src

Latest commit: 9e15d138f2 by Prashant Vithule, 2026-02-13 11:04:38 +05:30
If SVE 256 is not present, the generic function was used to compute, slowing performance; code was added to fall back to NEON when SVE 256 is unavailable.
| Name | Last commit | Date |
| --- | --- | --- |
| ggml-blas | ggml : add ggml_build_forward_select (#18550) | 2026-01-19 20:03:19 +02:00 |
| ggml-cann | CANN: Remove unnecessary wrapper for `gml_backend_buft_is_cann` (#18968) | 2026-02-10 14:19:30 +08:00 |
| ggml-cpu | If SVE 256 not present then was using generic function to compute, hence slowing the performance. | 2026-02-13 11:04:38 +05:30 |
| ggml-cuda | Add a workaround for compilation with ROCWMMA_FATTN and gfx9 (#19461) | 2026-02-12 09:38:35 +01:00 |
| ggml-hexagon | hexagon: fix typo in vtcm_needs_release (#19545) | 2026-02-12 15:07:49 -08:00 |
| ggml-hip | HIP: add mmf for CDNA (#18896) | 2026-01-29 11:10:53 +01:00 |
| ggml-metal | metal : update sum_rows kernel to support float4 (#19524) | 2026-02-12 11:35:28 +02:00 |
| ggml-musa | CUDA: faster tile FA, add oob checks, more HSs (#16492) | 2025-10-11 20:54:32 +02:00 |
| ggml-opencl | opencl: add basic support for q4_1 (#19534) | 2026-02-12 14:52:37 -08:00 |
| ggml-rpc | rpc : use unordered_map::reserve and emplace (#18513) | 2026-01-02 12:09:36 +02:00 |
| ggml-sycl | sycl: add F16 support for GGML_OP_CEIL (#19306) | 2026-02-06 23:13:44 +08:00 |
| ggml-virtgpu | ggml-virtgpu: make the code thread safe (#19204) | 2026-02-04 10:46:18 +08:00 |
| ggml-vulkan | vulkan: For coopmat2 FA, use fp16 accumulators for the final result (#19376) | 2026-02-06 09:15:13 +01:00 |
| ggml-webgpu | [WebGPU] Plug memory leaks and free resources on shutdown (#19315) | 2026-02-10 08:04:00 -08:00 |
| ggml-zdnn | ggml-zdnn : mark zDNN buffers as non-host (#18967) | 2026-01-22 01:16:21 +01:00 |
| ggml-zendnn | ggml-zendnn : resolve ZenDNN backend cross-module symbol dependency (#19159) | 2026-01-29 12:28:57 +08:00 |
| CMakeLists.txt | hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) | 2026-01-29 12:33:21 -08:00 |
| ggml-alloc.c | llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) | 2025-12-15 09:24:59 +01:00 |
| ggml-backend-dl.cpp | hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) | 2026-01-29 12:33:21 -08:00 |
| ggml-backend-dl.h | hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) | 2026-01-29 12:33:21 -08:00 |
| ggml-backend-impl.h | llama: use host memory if device reports 0 memory (#18587) | 2026-01-09 05:34:56 +08:00 |
| ggml-backend-reg.cpp | ggml : use noexcept overload for is_regular_file in backend registration (#19452) | 2026-02-10 10:57:48 +01:00 |
| ggml-backend.cpp | ggml-backend: fix async set/get fallback sync (#19179) | 2026-02-02 10:00:05 +01:00 |
| ggml-common.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| ggml-impl.h | ggml : add ggml_build_forward_select (#18550) | 2026-01-19 20:03:19 +02:00 |
| ggml-opt.cpp | finetune: SGD optimizer, more CLI args (#13873) | 2025-08-14 12:03:57 +02:00 |
| ggml-quants.c | ggml : fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl (#15928) | 2025-09-23 10:25:20 +02:00 |
| ggml-quants.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| ggml-threading.cpp | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| ggml-threading.h | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00 |
| ggml.c | ggml : unary ops support non-cont src0 + metal F16 unary ops (#19511) | 2026-02-11 18:58:43 +02:00 |
| ggml.cpp | ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) | 2025-06-01 13:43:57 +03:00 |
| gguf.cpp | GGUF: check that tensor size is representable (#19072) | 2026-01-24 21:57:51 +01:00 |