llama.cpp/ggml/src
Talha Can Havadar ae2d3f28a8
ggml: ggml-cpu: force-no-lto-for-cpu-feats (#19609)
When LTO is enabled in the build environment, it is forced on for all
builds. The CPU feature-detection logic is fragile, and LTO causes
illegal-instruction errors: cross-module optimization can inline
architecture-specific instructions into the score function, so loading
a backend on an older CPU (e.g., loading the power10 backend on power9)
crashes with SIGILL before the feature check runs. This change disables
LTO for the feature-detection code to prevent that inlining.
2026-02-17 13:22:46 +02:00
ggml-blas ggml : add ggml_build_forward_select (#18550) 2026-01-19 20:03:19 +02:00
ggml-cann CANN: Remove unnecessary wrapper for `gml_backend_buft_is_cann` (#18968) 2026-02-10 14:19:30 +08:00
ggml-cpu ggml: ggml-cpu: force-no-lto-for-cpu-feats (#19609) 2026-02-17 13:22:46 +02:00
ggml-cuda cuda : enable CUDA graphs for MMID 1 <= BS <= 4 (#19645) 2026-02-17 12:31:49 +02:00
ggml-hexagon hexagon: further optimizations and refactoring for flash attention (#19583) 2026-02-13 16:27:30 -08:00
ggml-hip HIP: add mmf for CDNA (#18896) 2026-01-29 11:10:53 +01:00
ggml-metal models : optimize qwen3next graph (#19375) 2026-02-14 12:57:36 +02:00
ggml-musa CUDA: faster tile FA, add oob checks, more HSs (#16492) 2025-10-11 20:54:32 +02:00
ggml-opencl opencl: add basic support for q4_1 (#19534) 2026-02-12 14:52:37 -08:00
ggml-rpc rpc : use unordered_map::reserve and emplace (#18513) 2026-01-02 12:09:36 +02:00
ggml-sycl sycl: add F16 support for GGML_OP_CEIL (#19306) 2026-02-06 23:13:44 +08:00
ggml-virtgpu ggml-virtgpu: make the code thread safe (#19204) 2026-02-04 10:46:18 +08:00
ggml-vulkan vulkan: support L2_NORM with contiguous rows (#19604) 2026-02-14 06:42:04 +01:00
ggml-webgpu [WebGPU] Plug memory leaks and free resources on shutdown (#19315) 2026-02-10 08:04:00 -08:00
ggml-zdnn ggml-zdnn : mark zDNN buffers as non-host (#18967) 2026-01-22 01:16:21 +01:00
ggml-zendnn ggml-zendnn : resolve ZenDNN backend cross-module symbol dependency (#19159) 2026-01-29 12:28:57 +08:00
CMakeLists.txt hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) 2026-01-29 12:33:21 -08:00
ggml-alloc.c ggml : make `ggml_is_view` as API (#19539) 2026-02-16 17:43:34 +02:00
ggml-backend-dl.cpp hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) 2026-01-29 12:33:21 -08:00
ggml-backend-dl.h hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) 2026-01-29 12:33:21 -08:00
ggml-backend-impl.h llama: use host memory if device reports 0 memory (#18587) 2026-01-09 05:34:56 +08:00
ggml-backend-reg.cpp ggml : use noexcept overload for is_regular_file in backend registration (#19452) 2026-02-10 10:57:48 +01:00
ggml-backend.cpp ggml-backend: fix async set/get fallback sync (#19179) 2026-02-02 10:00:05 +01:00
ggml-common.h llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
ggml-impl.h ggml : make `ggml_is_view` as API (#19539) 2026-02-16 17:43:34 +02:00
ggml-opt.cpp finetune: SGD optimizer, more CLI args (#13873) 2025-08-14 12:03:57 +02:00
ggml-quants.c ggml : fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl (#15928) 2025-09-23 10:25:20 +02:00
ggml-quants.h llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
ggml-threading.cpp ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-threading.h remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) 2024-12-12 19:02:49 +01:00
ggml.c ggml : make `ggml_is_view` as API (#19539) 2026-02-16 17:43:34 +02:00
ggml.cpp ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) 2025-06-01 13:43:57 +03:00
gguf.cpp GGUF: check that tensor size is representable (#19072) 2026-01-24 21:57:51 +01:00