llama.cpp/ggml/src
Jianhui Zhou 11b753e786 ggml: optimize repack on NUMA by binding threads
When using the repack buffer type, physical memory allocation is dictated
by the first-touch policy. Since the main thread performs the write
operations, memory often ends up allocated on a single NUMA node, leading
to an uneven distribution of the weights.

Multi-threaded repacking can alleviate this problem, but those threads
are not bound to specific NUMA nodes.

This patch applies the same thread affinity strategy (--numa distribute)
to the repacking phase. By binding the repack threads to the same nodes
as the compute threads, we ensure that weights are written (and thus
allocated) on the local NUMA node, minimizing cross-node memory access
during inference.

Performance on an Intel Xeon Silver 4514Y (32 cores):
qwen3 8B  Q4_K: 19.39 -> 26.92 t/s (+39%)
qwen3 32B Q4_K:  4.99 ->  7.38 t/s (+48%)

Signed-off-by: Jianhui Zhou <jonaszhou@zhaoxin.com>
2026-01-08 14:12:59 +00:00
ggml-blas sync : whisper.cpp (ggml/1359) 2025-09-29 17:43:58 +03:00
ggml-cann cann : fix ops broken by circular padding guard (#17825) 2025-12-12 15:49:27 +01:00
ggml-cpu ggml: optimize repack on NUMA by binding threads 2026-01-08 14:12:59 +00:00
ggml-cuda CUDA: fix overflow in MMA kernel without stream-k (#17939) 2025-12-12 17:43:58 +01:00
ggml-hexagon ggml-hexagon: fix `rope` failure at `test-backend-ops` (#17565) 2025-12-10 14:45:43 -08:00
ggml-hip HIP: fix AMDGPU_TARGETS, update documentation (#16803) 2025-10-27 21:39:49 +01:00
ggml-metal metal: SSM kernel improvements (#17876) 2025-12-09 21:30:02 +02:00
ggml-musa CUDA: faster tile FA, add oob checks, more HSs (#16492) 2025-10-11 20:54:32 +02:00
ggml-opencl ggml : add circular tiling support to pad, for Vulkan, CUDA, and CPU (used for making seamless textures) (#16985) 2025-12-06 15:07:02 +01:00
ggml-rpc ggml : improve error handling for search path existence checks (#17653) 2025-12-06 12:28:16 +01:00
ggml-sycl fix softmax for iGPU (#17838) 2025-12-10 16:59:57 +08:00
ggml-vulkan vulkan: fix mul_mat_vec_iq1_s formatting (#18026) 2025-12-14 14:52:46 +01:00
ggml-webgpu ggml webgpu: unary op suppport, code refactoring, ops support (#17764) 2025-12-05 12:25:51 -08:00
ggml-zdnn zdnn: refactor codebase + add docs (#16178) 2025-09-23 14:53:05 +08:00
ggml-zendnn ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) 2025-12-07 00:13:33 +08:00
CMakeLists.txt ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) 2025-12-07 00:13:33 +08:00
ggml-alloc.c ggml-alloc : fix reuse-parent logic for misaligned sizes (#17884) 2025-12-11 14:30:10 +02:00
ggml-backend-impl.h rpc : add support for multiple devices (#16276) 2025-10-04 12:49:16 +03:00
ggml-backend-reg.cpp ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) 2025-12-07 00:13:33 +08:00
ggml-backend.cpp ggml : remove redundant n_copies check when setting input/output (#17612) 2025-12-02 12:52:45 +01:00
ggml-common.h llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
ggml-impl.h ggml : add ops SOFTPLUS, EXPM1, TRI, SOLVE_TRI, CUMSUM (#17063) 2025-11-13 20:54:47 +02:00
ggml-opt.cpp finetune: SGD optimizer, more CLI args (#13873) 2025-08-14 12:03:57 +02:00
ggml-quants.c ggml : fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl (#15928) 2025-09-23 10:25:20 +02:00
ggml-quants.h llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
ggml-threading.cpp ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-threading.h remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) 2024-12-12 19:02:49 +01:00
ggml.c ggml : remove GGML_KQ_MASK_PAD constant (#17910) 2025-12-10 20:53:16 +02:00
ggml.cpp ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) 2025-06-01 13:43:57 +03:00
gguf.cpp ggml, llama : use defaulted constructors/destructors (#17649) 2025-12-03 07:12:18 +01:00