llama.cpp/ggml/src

Latest commit: 7998d08b29 ("ggml-blas: bring back openmp") by Aaron Teo, 2025-12-14 23:07:54 +08:00
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
| Name | Last commit | Date |
| --- | --- | --- |
| ggml-blas | ggml-blas: bring back openmp | 2025-12-14 23:07:54 +08:00 |
| ggml-cann | CANN: add support for partial RoPE and Vision mode (#17543) | 2025-12-09 17:53:23 +08:00 |
| ggml-cpu | Fix race conditions in threadpool when dealing with dynamic/frequent n_threads changes (#17748) | 2025-12-10 12:32:23 -08:00 |
| ggml-cuda | cuda : add missing support check for xielu (#17895) | 2025-12-10 16:16:20 +01:00 |
| ggml-hexagon | ggml-hexagon: fix `rope` failure at `test-backend-ops` (#17565) | 2025-12-10 14:45:43 -08:00 |
| ggml-hip | HIP: fix AMDGPU_TARGETS, update documentation (#16803) | 2025-10-27 21:39:49 +01:00 |
| ggml-metal | metal: SSM kernel improvements (#17876) | 2025-12-09 21:30:02 +02:00 |
| ggml-musa | CUDA: faster tile FA, add oob checks, more HSs (#16492) | 2025-10-11 20:54:32 +02:00 |
| ggml-opencl | ggml : add circular tiling support to pad, for Vulkan, CUDA, and CPU (used for making seamless textures) (#16985) | 2025-12-06 15:07:02 +01:00 |
| ggml-rpc | ggml : improve error handling for search path existence checks (#17653) | 2025-12-06 12:28:16 +01:00 |
| ggml-sycl | fix softmax for iGPU (#17838) | 2025-12-10 16:59:57 +08:00 |
| ggml-vulkan | Vulkan: improve mul_mat_vec_iq1_m (#16907) | 2025-12-07 18:40:42 +01:00 |
| ggml-webgpu | ggml webgpu: unary op suppport, code refactoring, ops support (#17764) | 2025-12-05 12:25:51 -08:00 |
| ggml-zdnn | zdnn: refactor codebase + add docs (#16178) | 2025-09-23 14:53:05 +08:00 |
| ggml-zendnn | ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) | 2025-12-07 00:13:33 +08:00 |
| CMakeLists.txt | ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) | 2025-12-07 00:13:33 +08:00 |
| ggml-alloc.c | ggml : allow fill node alloc inplace (#17870) | 2025-12-09 12:23:47 +01:00 |
| ggml-backend-impl.h | rpc : add support for multiple devices (#16276) | 2025-10-04 12:49:16 +03:00 |
| ggml-backend-reg.cpp | ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) | 2025-12-07 00:13:33 +08:00 |
| ggml-backend.cpp | ggml : remove redundant n_copies check when setting input/output (#17612) | 2025-12-02 12:52:45 +01:00 |
| ggml-common.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| ggml-impl.h | ggml : add ops SOFTPLUS, EXPM1, TRI, SOLVE_TRI, CUMSUM (#17063) | 2025-11-13 20:54:47 +02:00 |
| ggml-opt.cpp | finetune: SGD optimizer, more CLI args (#13873) | 2025-08-14 12:03:57 +02:00 |
| ggml-quants.c | ggml : fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl (#15928) | 2025-09-23 10:25:20 +02:00 |
| ggml-quants.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| ggml-threading.cpp | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| ggml-threading.h | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00 |
| ggml.c | ggml : remove GGML_KQ_MASK_PAD constant (#17910) | 2025-12-10 20:53:16 +02:00 |
| ggml.cpp | ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) | 2025-06-01 13:43:57 +03:00 |
| gguf.cpp | ggml, llama : use defaulted constructors/destructors (#17649) | 2025-12-03 07:12:18 +01:00 |