Commit graph

7,407 commits

Author SHA1 Message Date
Jianhui Zhou
5714d4b86e ggml: Add thread count control during repacking
This change enables the repack stage to utilize the user-specified
thread count, ensuring that both the logical thread IDs and the total
number of threads remain consistent between the repack and inference
stages.

In a NUMA architecture where the `--numa distribute` parameter is used,
logical threads are pinned to specific physical NUMA nodes. By aligning
the thread configuration across these two stages, we can fully leverage
the operating system's "first-touch" memory allocation policy:

1. Repack Stage: Logical thread i (bound to NUMA node j) is responsible
   for repacking and writing the weight data. Since the "first touch"
   occurs within this thread, the corresponding physical memory is
   allocated on node j.

2. Inference Stage: The same logical thread i (still bound to node j)
   reads these weights. Since the data already resides on the local
   node, low-latency local memory access is achieved.

Without ensuring consistency in the number of threads, data may be
randomly allocated to mismatched nodes, resulting in significant
cross-node access overhead during inference.

Signed-off-by: Jianhui Zhou <jonaszhou@zhaoxin.com>
2026-01-13 07:36:31 +00:00
Jianhui Zhou
11b753e786 ggml: optimize repack on NUMA by binding threads
When using repack buffer type, the physical memory allocation is dictated
by the first-touch policy. Since the main thread performs the write
operations, memory is often allocated on a single NUMA node, leading to
uneven weight distribution.

Multi-threaded repack can alleviate this problem, but the threads are
not bound to NUMA nodes.

This patch applies the same thread affinity strategy (--numa distribute)
to the repacking phase. By binding the repack threads to the same nodes
as the compute threads, we ensure that weights are written (and thus
allocated) on the local NUMA node, minimizing cross-node memory access
during inference.

Performance on Intel Xeon Silver 4514Y (32 core):
qwen3 8B  Q4_K: 19.39 -> 26.92 t/s (+39%)
qwen3 32B Q4_K:  4.99 ->  7.38 t/s (+48%)

Signed-off-by: Jianhui Zhou <jonaszhou@zhaoxin.com>
2026-01-08 14:12:59 +00:00
pestopoppa
b1366757cf ggml-cpu: parallelize tensor repacking with OpenMP
Add OpenMP parallelization to tensor repack functions to significantly
speed up model loading on many-core CPUs.

Measured on AMD EPYC 9655 (96 cores):

| Model Size | Before | After | Speedup |
|------------|--------|-------|---------|
| 6.8GB Q4_K | 5.0s   | 3.3s  | 1.5x    |
| 19GB Q4_K  | 11.9s  | 5.3s  | 2.2x    |
| 271GB Q4_K | ~150s  | ~60s  | ~2.5x   |

The repack functions convert quantized tensors from storage layout
to SIMD-optimized layout for AVX-512. This was previously single-threaded
and is now parallelized across row groups.

Key changes:
- Convert pointer-increment loops to explicit indexing
- Add #pragma omp parallel for to outer loops (guarded by #ifdef _OPENMP)
- Each thread processes independent row groups
- Move thread-local dst_tmp arrays inside parallel region

Functions parallelized:
- repack_q4_0_to_q4_0_4_bl (Q4_0 x4 interleave)
- repack_q4_K_to_q4_K_8_bl (Q4_K_M, Q4_K_S models)
- repack_q2_K_to_q2_K_8_bl (Q2_K models)
- repack_q4_0_to_q4_0_8_bl (Q4_0 x8 interleave)
- repack_iq4_nl_to_iq4_nl_4_bl (IQ4_NL x4)
- repack_iq4_nl_to_iq4_nl_8_bl (IQ4_NL x8)

Tested on: AMD EPYC 9655 "Turin" with 192 threads
2026-01-01 12:51:30 +01:00
Xuan-Son Nguyen
52392291b2
preset: handle negated arg, reverse the meaning if needed (#18041) b7404 2025-12-14 22:08:10 +01:00
Sigbjørn Skjæret
5c8a717128
convert : refactor rope scaling handling (#18013)
* refactor rope scaling handling

* ws--

* missed a couple

* use find_hparam
2025-12-14 16:04:37 +01:00
Haowei Wu
37f5a1093b
mtmd: enhance image resizing in llava_uhd (#18014) b7402 2025-12-14 15:57:52 +01:00
Ruben Ortlam
9e6649ecf2
vulkan: fix mul_mat_vec_iq1_s formatting (#18026) b7401 2025-12-14 14:52:46 +01:00
Xuan-Son Nguyen
0759b09c90
graph: add f_attn_temp_offset (#18025) b7400 2025-12-14 13:05:59 +01:00
Georgi Gerganov
254098a279
common : refactor common_sampler + grammar logic changes (#17937) b7399
* common : refactor common_sampler + grammar logic changes

* tests : increase max_tokens to get needed response

* batched : fix uninitialized samplers
2025-12-14 10:11:13 +02:00
Jeff Bolz
3238b1400c
vulkan: Fix data race/hang in scalar/cm1 flash attention (#17887) b7398 2025-12-14 09:00:00 +01:00
lovedheart
4722671641
vulkan: improve mul_mat_vec_iq1_s speed (#17874) b7397 2025-12-14 08:47:49 +01:00
Eve
d15d177f43
vulkan: faster q6_k matmul (#17813)
* q6_k faster mul mat

* 8 values

* fix comment

* switch to two at a time

* start ci for .glsl files
2025-12-14 08:29:37 +01:00
Georgi Gerganov
77ad8542bd
model-conversion : cast logits to float32 (#18009) 2025-12-14 08:58:13 +02:00
Georgi Gerganov
609a2d0268
models : fix YaRN regression + consolidate logic (#18006) b7394
* models : fix YaRN regression + consolidate logic

* cont : fix the fix

* cont : remove header

* cont : add header
2025-12-14 08:34:56 +02:00
Georgi Gerganov
a63cbafbbc ggml : arm repack fix build b7393 2025-12-14 08:33:51 +02:00
Georgi Gerganov
0e59224990 sync : ggml 2025-12-14 08:33:51 +02:00
Georgi Gerganov
71fdcf0616 ggml : arm repack fix build (whisper/0) 2025-12-14 08:33:51 +02:00
Congcong Cai
615655aafe cmake : set CMAKE_RUNTIME_OUTPUT_DIRECTORY for non standalone build (ggml/1394)
Some backends depend on CMAKE_RUNTIME_OUTPUT_DIRECTORY to create temporary files, e.g. the Metal backend.
A missing CMAKE_RUNTIME_OUTPUT_DIRECTORY can cause CMake errors such as permission denied (from trying to copy files to the root directory).
This PR sets a default path for CMAKE_RUNTIME_OUTPUT_DIRECTORY when it is not already defined.
2025-12-14 08:33:51 +02:00
Xuan-Son Nguyen
c00ff929dc
scripts: add script to compare logprobs of llama.cpp against other frameworks (#17947) b7389
* scripts: add script to compare logits of llama.cpp against other frameworks

* accept custom prompt file

* fix code style

* clarify endpoint

* fix displaying

* use abs for diff

* fix vllm case

* rm output file

* rename to compare-logprobs

* add "pattern"
2025-12-13 22:33:29 +01:00
Sergey Fedorov
4ed2bae50d
server-models.cpp: add missing <filesystem> (#18000) b7388
Fixes: https://github.com/ggml-org/llama.cpp/issues/17999
2025-12-13 22:02:43 +01:00
Jeff Bolz
5266379bca
llama_context: synchronize before reallocating output buffer (#17974) b7387 2025-12-13 09:19:51 -06:00
Xuan-Son Nguyen
4d5ae24c0a
arg: fix common_params_parse not accepting negated arg (#17991) b7386 2025-12-13 12:53:37 +01:00
Gustavo Rocha Dias
66ba51252e
cmake: correct scope - link ws2_32 for MinGW/w64devkit builds in cpp-httplib (#17972) b7385
* fix - w64devkit build

* fix - w64devkit build private scope
2025-12-13 12:46:36 +01:00
Jeff Bolz
36255a2268
vulkan: support get_rows for i32 (#17941) b7384 2025-12-13 10:12:53 +01:00
Jeff Bolz
3229a23fa6
vulkan: support GGML_OP_DIAG (#17893) b7383 2025-12-13 10:07:49 +01:00
Jeff Bolz
303f8615e9
vulkan: Multi-pass softmax for large number of cols (#17892) b7382
When the number of cols is large, split each row across multiple workgroups.
There are three phases that communicate partial results through temp buffers:
(1) compute max partials
(2) take max of partials, compute sum(exp(x-max)) partials
(3) sum partials, compute scaled result
2025-12-13 10:04:29 +01:00
Georgi Gerganov
3c6391e748
speculative-simple : free batch on exit (#17985) b7381 2025-12-13 09:48:34 +02:00
Sigbjørn Skjæret
8e4d678528
common : skip model validation when --completion-bash is requested (#17975) b7380 2025-12-13 08:40:50 +01:00
Jeff Bolz
07a10c1090
vulkan: Allow non-pow2 n_experts in topk_moe (#17872) b7379 2025-12-13 08:40:04 +01:00
Sigbjørn Skjæret
2bc94e7928
add llama-completion to completion-bash executables (#17976) b7378 2025-12-13 08:35:50 +01:00
Daniel Bevenius
fd1085ffb7
model-conversion : use CONVERTED_MODEL value for converted model [no ci] (#17984)
* model-conversion : use CONVERTED_MODEL value for converted model [no ci]

This commit updates the model verification scripts to use the
CONVERTED_MODEL environment variable instead of using the MODEL_PATH
(the original model path) as the basis for the converted model file
name.

The motivation for this is that currently, if the converted model file name
differs from the original model directory/name, the verification scripts
will look for the wrong .bin files that were generated when running the
models.
For example, the following steps were not possible:
```console
(venv) $ huggingface-cli download google/gemma-3-270m-it --local-dir ggml-org/gemma-3-270m
(venv) $ python3 convert_hf_to_gguf.py ggml-org/gemma-3-270m --outfile test-bf16.gguf --outtype bf16
(venv) $ cd examples/model-conversion/
(venv) $ export MODEL_PATH=../../ggml-org/gemma-3-270m
(venv) $ export CONVERTED_MODEL=../../test-bf16.gguf
(venv) $ make causal-verify-logits
...
Data saved to data/llamacpp-test-bf16.bin
Data saved to data/llamacpp-test-bf16.txt
Error: llama.cpp logits file not found: data/llamacpp-gemma-3-270m.bin
Please run scripts/run-converted-model.sh first to generate this file.
make: *** [Makefile:62: causal-verify-logits] Error 1
```

With the changes in this commit, the above steps will now work as
expected.
2025-12-13 08:34:26 +01:00
Xuan-Son Nguyen
380b4c984e
common: support negated args (#17919) b7376
* args: support negated args

* update docs

* fix typo

* add more neg options

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* rm duplicated arg

* fix LLAMA_ARG_NO_HOST

* add test

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-12-12 23:58:53 +01:00
Xuan-Son Nguyen
e39a2ce66d
clip: move model cgraphs into their own files (#17965) b7375
* clip: move model cgraphs into their own files

* more explicit enums

* fix linux build

* fix naming

* missing headers

* nits: add comments for contributors
2025-12-12 21:14:48 +01:00
jiahao su
a8c7f33d79
ci : change the cann version and the container pull method (#17953) b7374
* fix error format

* Update build.yml

* Remove unnecessary zip files

* fix

* update
2025-12-12 20:43:00 +01:00
Sigbjørn Skjæret
b7f5f46e03
docker : include legacy llama-completion binary (#17964) 2025-12-12 19:39:23 +01:00
Johannes Gäßler
482211438d
CUDA: fix overflow in MMA kernel without stream-k (#17939) b7372 2025-12-12 17:43:58 +01:00
Georgi Gerganov
7bed317f53
models : fix the attn_factor for mistral3 graphs + improve consistency (#17945) b7371
* models : fix the attn_factor for mistral3 graphs

* cont : rework attn_factor correction logic

* cont : make deepseek2 consistent

* cont : add TODO

* cont : special-case DSv2

* cont : revert Mistral 3 Large changes

* cont : fix DS2 to use the original attn_factor

* cont : minor comments
2025-12-12 17:12:40 +02:00
Sigbjørn Skjæret
dcb7d17758
cann : fix ops broken by circular padding guard (#17825) b7370 2025-12-12 15:49:27 +01:00
ixgbe
51604435e8
ggml-cpu : fix RISC-V Q4_0 repack select and RVV feature reporting (#17951) b7369
* ggml-cpu: fix RISC-V Q4_0 repack select and RVV feature reporting

Signed-off-by: Wang Yang <yangwang@iscas.ac.cn>

* using the name VLEN instead of CNT

* Update ggml/include/ggml-cpu.h

---------

Signed-off-by: Wang Yang <yangwang@iscas.ac.cn>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-12-12 16:26:03 +02:00
Xuan-Son Nguyen
17158965ac
mtmd: explicitly forbidden inclusion of private header and libcommon (#17946) b7368 2025-12-12 15:16:06 +01:00
Aleksander Grygier
12280ae905
webui: Fix parsing non-LaTeX occurrences of \( or \) (#17810)
* fix: Improve latex protection logic to prevent turning non-latex `\(` into `$`

* chore: update webui build output
2025-12-12 15:13:36 +01:00
Xuan-Son Nguyen
54a0fee4b7
arg: add -mm and -mmu as short form of --mmproj and --mmproj-url (#17958) b7366
* arg: add -mm and -mmu as short form of --mmproj and --mmproj-url

* correct order

* update docs
2025-12-12 14:06:06 +01:00
Daniel Bevenius
dada4c846d
model-conversion : remove max diff check in compare-logits [no ci] (#17954)
This commit removes the maximum difference check from the
compare-logits.py which would stop early if the difference between
the logits exceeded a threshold.

The motivation for removing this is that it can be useful to be able to
get the complete log for debugging/reporting purposes.
2025-12-12 13:25:16 +01:00
Adrien Gallouët
b8ee22cfde
common : add minimalist multi-thread progress bar (#17602) b7364
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2025-12-12 12:44:35 +01:00
Gustavo Rocha Dias
2eaa2c65cb
cmake: link ws2_32 for MinGW/w64devkit builds in cpp-httplib (#17949) b7363 2025-12-12 12:02:28 +01:00
yulo
c33a58bced
HIP: enable mmf for RDNA3 (#17879) b7362
* enable mmf for RDNA3

* disable mmf for some shape

* move some mmvf to mmf

* more mmfv to mmf

* 3 is good in mmvf

---------

Co-authored-by: zhang hui <you@example.com>
2025-12-12 11:34:33 +01:00
Pascal
a81a569577
Add a search field on model selector / improve mobile display (#17765) b7361
* webui: add search field to model selector and fixes mobile viewport overflow

* webui: simplify model search style and code

* refactor: Search Input component & consistent UI for Models Selector search

* feat: Use Popover component + improve interactions

* fix: Fetching props for only loaded models in ROUTER mode

* webui: prevent models selector popover from overflowing viewport

Use Floating UI's auto-positioning with 50dvh height limit and proper
collision detection instead of forcing top positioning. Fixes overflow
on desktop and mobile keyboard issues

* webui: keep search field near trigger in models selector

Place search at the 'near end' (closest to trigger) by swapping layout
with CSS flexbox order based on popover direction. Prevents input from
moving during typing as list shrinks

* chore: update webui build output

---------

Co-authored-by: Aleksander Grygier <aleksander.grygier@gmail.com>
2025-12-11 18:21:21 +01:00
Piotr Wilkin (ilintar)
53ecd4fdb9
SOLVE_TRI extension to more dimensions (#17793) b7360
* Extended TRI

* Fix whitespace

* chore: update webui build output

* Just use cuBLAS for everything...

* Merge both versions

* Remove incorrect imports causing failures for CI

* Still failing... remove all direct cublas imports and rely on common imports from "common.cuh"

* Defines for hipBlas

* Aaaand MUSA defines...

* I hate this job...

* Stupid typo...

* Update ggml/src/ggml-cuda/solve_tri.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-12-11 17:20:43 +01:00
Georgi Gerganov
c6f6e4f96a
ggml-alloc : fix reuse-parent logic for misaligned sizes (#17884) 2025-12-11 14:30:10 +02:00
Georgi Gerganov
d9f8f60618
batch : fix sequence id ownership (#17915) b7358
* batch : fix sequence id ownership

* cont : reduce allocations
2025-12-11 14:29:47 +02:00