Commit Graph

7837 Commits

Author SHA1 Message Date
zhanmyz cff473a9e2 1. All operators implemented using OpenVINO can be successfully executed individually.
2. VIEW op output tensor shape is not the same as the CONT (non-contiguous) input tensor shape
3. CPY (non-contiguous) can't be implemented with the original input/output tensor shape and data (the original shape must be changed when creating the input/output tensors)

Currently, the VIEW op is executed in the ggml backend and the others are executed in the OpenVINO Frontend. (A sketch of the contiguity issue follows this entry.)
2026-01-15 10:05:41 -08:00
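A minimal sketch of why the non-contiguous cases above are awkward (hypothetical standalone types, not ggml's actual structs; ggml's real check also accounts for quantized block sizes): a ggml-style tensor carries per-dimension extents `ne` and byte strides `nb`, and a view such as a transpose permutes only the strides, so the viewed data is no longer dense and cannot be handed to a dense-tensor API with its original shape alone.

```cpp
#include <cstddef>
#include <cstdio>

struct tensor4d {
    long   ne[4]; // elements per dimension
    size_t nb[4]; // stride in bytes per dimension
};

// Dense layout means: the innermost stride is the element size, and each
// outer stride is the previous stride times the previous extent.
static bool is_contiguous(const tensor4d & t, size_t type_size) {
    if (t.nb[0] != type_size) return false;
    for (int i = 1; i < 4; ++i) {
        if (t.nb[i] != t.nb[i - 1] * (size_t) t.ne[i - 1]) return false;
    }
    return true;
}

int main() {
    tensor4d a  = {{4, 3, 1, 1}, { 4, 16, 48, 48}}; // dense 4x3 f32 tensor
    tensor4d at = {{3, 4, 1, 1}, {16,  4, 48, 48}}; // transposed view: ne/nb swapped
    printf("a: %d, a^T: %d\n", is_contiguous(a, 4), is_contiguous(at, 4)); // 1, 0
    return 0;
}
```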
zhanmyz e08a7fda33 All adjacent ops can be converted, but the calculation result is wrong and needs debugging 2026-01-15 10:05:41 -08:00
zhanmyz d05c458421 change CONT and MUL_MAT input node shape 2026-01-15 10:05:41 -08:00
zhanmyz 246a2d1021 Change the input and output node shape of MUL_MAT operator 2026-01-15 10:05:41 -08:00
zhanmyz f37fa21a5c Change the input and output node shape of MUL_MAT operator 2026-01-15 10:05:41 -08:00
zhanmyz f98d215162 Change the input parameter shape of CONT operator 2026-01-15 10:05:41 -08:00
zhanmyz 9a7b7d8d6d OV Frontend supports GET_ROWS/RMS_NORM/MUL/MUL_MAT/ROPE/SCALE/SOFTMAX/ADD adjacent op graph conversion 2026-01-15 10:05:41 -08:00
zhanmyz 95ae982d59 OV Frontend supports GET_ROWS/RMS_NORM/MUL/MUL_MAT graph conversion of consecutive OPs 2026-01-15 10:05:41 -08:00
zhanmyz 901f7347ff Executing CONT & VIEW operators in the OV Frontend is OK 2026-01-15 10:05:41 -08:00
zhanmyz 081b52667b Executing a single CONT operator is OK 2026-01-15 10:05:41 -08:00
zhanmyz afb8594194 add tmp source code files 2026-01-15 10:05:41 -08:00
zhanmyz 57582fda39 add implementation of CPY when the output tensor is non-contiguous 2026-01-15 10:05:41 -08:00
zhanmyz 8484769981 add implementation of MUL_MAT, CPY, CONT of GGML ops using OV ops 2026-01-15 10:05:41 -08:00
zhanmyz cb2729bc4a Move CPY from GGML OV Backend to OV Frontend 2026-01-15 10:05:41 -08:00
zhanmyz 2b04bd43be Add MUL_MAT,CPY,CONT as operators implemented in OpenVINO for GGML backend 2026-01-15 10:05:41 -08:00
zhanmyz 0f7d07de7d Add support for RMS_NORM OP 2026-01-15 10:05:41 -08:00
yumengbo 2353c73f53 Support ROPE op. 2026-01-15 10:05:41 -08:00
yumengbo 8aba03bac6 Support Softmax op 2026-01-15 10:05:41 -08:00
yumengbo d218c61e6d Support Softmax op 2026-01-15 10:05:41 -08:00
yumengbo 590f587b27 Add support for UNARY SILU op. Fix PyTorch impl bugs. 2026-01-15 10:05:41 -08:00
yumengbo b100f89bad Change to an implementation following the PyTorch frontend 2026-01-15 10:05:41 -08:00
yumengbo e95f29cbc0 Fix issue for output memory copy of infer request 2026-01-15 10:05:41 -08:00
zhanmyz 8c5a609f8d add the rms_norm operator implemented using OpenVINO to the GGML backend of llama.cpp 2026-01-15 10:05:41 -08:00
zhanmyz 80c330a469 Update build.md and add operation mapping (GGML to OpenVINO) 2026-01-15 10:05:41 -08:00
zhanmyz 49804f43fc add GET_ROWS operator of OpenVINO to GGML of llama.cpp 2026-01-15 10:05:41 -08:00
yumengbo 5b46dc23be Change output for infer request to set output tensor. Support scale, view op. 2026-01-15 10:05:41 -08:00
yumengbo 31bd816426 Add GGML_OV_FRONTEND option. Add readme. 2026-01-15 10:05:41 -08:00
yumengbo 9b7b63d12c Convert subgraphs with add, sub, mul, div ops to an OV model and run inference on an OpenVINO device 2026-01-15 10:05:41 -08:00
yumengbo 34e826ac14 Implement GgmlOvDecoder. Add dump functions. 2026-01-15 10:05:41 -08:00
yumengbo 171c4681f4 Add PoC of integration of openvino frontend. Main changes: ggml-ov-frontend-utils, GraphIterator, Decoder 2026-01-15 10:05:41 -08:00
zhanmyz ee31dc1c1b add function to get available OpenVINO ops 2026-01-15 10:05:41 -08:00
zhanmyz 77d68146a8 add OpenVINO frontend conversion process steps 2026-01-15 10:05:41 -08:00
zhanmyz 0a81aa19f7 Add compile options 2026-01-15 10:05:40 -08:00
zhanmyz adc2c70f44 Add OpenVINO MUL operator to GGML of Llama.cpp. 2026-01-15 10:05:40 -08:00
zhanmyz faa4a7de76 Solve the issue of abnormal model output caused by using OpenVINO ADD operator 2026-01-15 10:05:40 -08:00
zhanmyz 9b9d51dddf * Configure the device (default CPU) that OpenVINO uses to compile the model
* Add OpenVINO ADD operator to Llama.cpp. The output is somewhat abnormal and needs further debugging.
2026-01-15 10:05:40 -08:00
zhanmyz 5294402b50 add OpenVINO as an optional backend for Llama.cpp ggml 2026-01-15 10:05:40 -08:00
Yanglei Zou fe5720e684 Add ggml-openvino base files 2026-01-15 10:05:40 -08:00
Georgi Gerganov be8e3d9515
context : do not reserve scheduler for warmups (#18867) 2026-01-15 19:35:57 +02:00
ddh0 13f1e4a9ca
llama : add adaptive-p sampler (#17927)
* initial commit for branch

* simplify constants

* add params to `struct common_params_sampling`, add reference to PR

* explicitly clamp `min_target` and `max_target` to `[0.0, 1.0]`

* add args, rename `queue_size` -> `window_size`

* improved comments

* minor

* remove old unused code from algorithm

* minor

* add power law case to `common_sampler_init`, add sampler name mappings

* clarify behaviour when `window_size = 0`

* add missing enums

* remove `target_range` param, make `target == 1` no-op, cleanup code

* oops, straggler

* add missing parameters in `server-task.cpp`

* copy from author

ref:
https://gist.github.com/MrJackSpade/9be99c7efbba7b95a41377e123b7b069

* remove old debug log, style nit

* fix compiler warning, add commented-out logging per token

* re-write + change parameters + simplify

* oops forgot args.cpp

* fix leftover `window_size`

* add missing values to `common_params_sampling::print()`

* with logging

* does this fix it?

* no, but does this?

* update default decay

* optimize

* fix bad merge

my git skills are lacking

* silence `missing initializer for member`

* update default decay to 0.9

* fix logging

* format (double)

* add power law to the new `samplers` vector

* log sampler init values

* improve logging messages in llama_sampler_power_law

* remove extraneous logging

* simplify target computation

last commit with debug logging!

* remove debug logging, explicitly clamp params at init

* add `use_power_law` flag + logic, minor cleanup

* update `power-law` -> `adaptive-p`

* fix cold start EMA (see the sketch after this entry)

- `ctx->weighted_sum` is now initialized and reset to `target / (1.0f - clamped_decay)`
- `ctx->total_weight` is now initialized and reset to `1.0f / (1.0f - clamped_decay)`

this fixes a "cold start" problem with the moving average

* update `SHARPNESS` constant to `10.0f`

* minor style fixes

no functional changes

* minor style fixes cont.

* update `llama_sampler_adaptive_p_i` for backend sampling (ref: #17004)

* separate into `apply` + `accept` functions

* `pending_token_idx`: switch from `llama_token` to `int32`

functionally identical (`llama.h` has `typedef int32_t llama_token;`),
but it's more correct now

* don't transform logits <= -1e9f

* fix masking in backend top-p, min-p

* address review comments

* typo in comments `RND` -> `RNG`

* add docs

* add recommended values in completion docs

* address PR feedback

* remove trailing whitespace (for CI `editorconfig`)

* add adaptive-p to `common_sampler_types_from_chars`
2026-01-15 19:16:29 +02:00
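A minimal standalone sketch of the cold-start fix above (hypothetical `ema` struct; in the sampler this state lives in the context as `ctx->weighted_sum` / `ctx->total_weight`): seeding both accumulators with their steady-state limits makes the moving average start exactly at `target` instead of being biased by its first few samples.

```cpp
#include <cstdio>

struct ema {
    float decay;
    float weighted_sum;
    float total_weight;

    // total_weight converges to 1/(1 - decay), so initializing both
    // accumulators at their limit values gives avg() == target from the
    // very first call -- the "cold start" fix.
    ema(float decay, float target)
        : decay(decay),
          weighted_sum(target / (1.0f - decay)),
          total_weight(1.0f / (1.0f - decay)) {}

    void accept(float x) {
        weighted_sum = decay * weighted_sum + x;
        total_weight = decay * total_weight + 1.0f;
    }

    float avg() const { return weighted_sum / total_weight; }
};

int main() {
    ema e(/*decay =*/0.9f, /*target =*/0.5f);
    printf("before any sample: %.3f\n", e.avg()); // exactly 0.500
    e.accept(1.0f);
    printf("after one sample:  %.3f\n", e.avg()); // 0.550, no cold-start jump
    return 0;
}
```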
Xuan-Son Nguyen a04c2b06a3
server: improve slots scheduling for n_cmpl (#18789)
* server : make sure child tasks are scheduled to launch with the parent

* fix

* add comment pointing to this PR

* fix

* clean up

* more debug messages

* add pop_deferred_task with specific ID version

* improve the logic

* simple approach

* no double move

* correct return type of launch_slots_with_parent_task
2026-01-15 17:10:28 +01:00
Georgi Gerganov 39173bcacb
context : reserve new scheduler when graph topology changes (#18547)
* context : reserve new scheduler when graph topology changes

* cont : fix

* cont : fix reserve

* cont : reserve only when changes occur + timing (see the sketch after this entry)

* context : add comments

* llama : reserve on sampler changes

* common : allow null common_sampler

* server : task declares needs (embd, logits, sampling)

* server : do not init sampler if not needed

* llama : fix need_reserve when unsetting a sampler

* server : consolidate slot reset/clear logic
2026-01-15 16:39:17 +02:00
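The "reserve only when changes occur" idea above is essentially cache invalidation keyed on the graph topology; a hedged sketch of the pattern (hypothetical types and names, not the actual llama.cpp scheduler API):

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>

struct graph_desc { // stand-in for whatever determines the graph topology
    int32_t n_nodes;
    int32_t n_outputs;
    bool    has_sampler;
};

struct scheduler {
    void reserve(const graph_desc &) { /* allocate worst-case buffers */ }
};

class context {
    scheduler sched;
    size_t    last_key = 0; // fingerprint of the last reserved topology

    static size_t key_of(const graph_desc & g) {
        // Cheap fingerprint over the fields that change the topology.
        size_t h = std::hash<int32_t>{}(g.n_nodes);
        h = h * 31 + std::hash<int32_t>{}(g.n_outputs);
        h = h * 31 + std::hash<bool>{}(g.has_sampler);
        return h;
    }

public:
    void process(const graph_desc & g) {
        const size_t key = key_of(g);
        if (key != last_key) { // re-reserve only when the topology changed
            sched.reserve(g);
            last_key = key;
        }
        // ... build and compute the graph ...
    }
};

int main() {
    context ctx;
    ctx.process({128, 1, false}); // first call: reserve
    ctx.process({128, 1, false}); // same topology: skip
    ctx.process({128, 1, true});  // sampler set: topology changed, re-reserve
    return 0;
}
```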
Johannes Gäßler 5c662d21a3
CUDA: fix alignment on register spill for FA (#18815) 2026-01-15 15:14:50 +01:00
shalinib-ibm 8cc0ba957b
ggml-cpu: optimize ggml_vec_dot_bf16 for Power9 (#18837) 2026-01-15 17:31:18 +08:00
Xuan-Son Nguyen a7e6ddb8bd
lora: make sure the model keeps track of associated adapters (#18490)
* lora: make sure the model keeps track of associated adapters

* deprecate llama_adapter_lora_free

* minor : std::unordered_set over std::set (see the sketch after this entry)

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-01-15 10:24:28 +01:00
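A hedged sketch of the ownership change above (hypothetical minimal types, not the actual llama.cpp structs): the model remembers every adapter created against it, so adapters are released together with the model and a standalone free call becomes redundant, which is why `llama_adapter_lora_free` can be deprecated.

```cpp
#include <unordered_set>

struct lora_adapter { /* ... adapter weights ... */ };

struct model {
    // unordered_set: O(1) insert/erase, no need for the ordering std::set keeps
    std::unordered_set<lora_adapter *> adapters;

    lora_adapter * adapter_init() {
        auto * a = new lora_adapter();
        adapters.insert(a); // the model tracks every adapter it hands out
        return a;
    }

    ~model() {
        for (auto * a : adapters) delete a; // adapters die with the model
    }
};

int main() {
    model m;
    lora_adapter * a = m.adapter_init();
    (void) a; // no explicit free: ~model() releases it
    return 0;
}
```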
Sigbjørn Skjæret 2a13180100
model-loader : support bool array sliding window pattern (#18850) 2026-01-15 10:12:46 +01:00
Adrien Gallouët ec997b4f2b
tests : download models only when running ctest (#18843)
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2026-01-15 09:47:29 +01:00
Max Krasnyansky cff777f226
hexagon: support for OP_CPY, host buffers now optional, hvx-utils refactoring and optimizations (#18822)
* hexagon: disable repack buffers if host buffers are disabled, improved handling of env vars

* hexagon: add support for OP_CPY fp16/fp32 -> fp16/fp32

Factored out all hvx_copy functions into the hvx-copy.h header and reduced code duplication.
Updated the HTP ops infra to support OP_CPY

* hexagon: cleanup and refactor hex/hvx/htp headers and helper libs

hex is basically all scalar/core platform stuff (L2, DMA, basic utils)
hvx is all hvx-related utils, helpers, etc.
htp is higher-level stuff like Ops, etc.

hvx-utils library got a nice round of cleanup and refactoring to reduce duplication

use hvx_vec_store_a where possible

* hexagon: refactor HVX sigmoid functions to hvx-sigmoid.h

Moved sigmoid and tanh vector functions from hvx-utils.h to a new header
hvx-sigmoid.h. Implemented aligned and unaligned variants for sigmoid
array processing using a macro pattern similar to hvx-copy.h (see the
sketch after this entry). Updated act-ops.c to use the new aligned
variant hvx_sigmoid_f32_aa. Removed unused hvx-sigmoid.c.

* hexagon: factor out hvx-sqrt.h

* hexagon: minor update to hvx-utils.h

* hexagon: remove spurious log

* hexagon: factor out and optimize hvx_add/sub/mul

* hexagon: remove _opt variants of add/sub/mul as they are simply the fully aligned versions

* hexagon: refactor reduction functions to hvx-reduce.h

Moved `hvx_self_max_f32` and `hvx_self_sum_f32` from `hvx-utils.h`/`.c` to `hvx-reduce.h`.
Renamed them to `hvx_reduce_max_f32` and `hvx_reduce_sum_f32`.
Added aligned (`_a`) and unaligned (`_u`) variants and used macros to unify logic.
Updated `softmax-ops.c` to use the new functions.

* hexagon: refactor the rest of arithmetic functions to hvx-arith.h

Moved `hvx_min_scalar_f32` and `hvx_clamp_scalar_f32` from `hvx-utils.c/h` to `hvx-arith.h`. Implemented aligned/unaligned variants (`_aa`, `_au`, etc.) and used macros to reduce code duplication. Updated these functions to use the `dst, src, ..., n` argument order and updated the call sites in `act-ops.c`. `hvx_sum_of_squares_f32` remains in `hvx-utils.c` as requested.

* hexagon: refactor hvx_sum_of_squares_f32

- Modify `hvx_sum_of_squares_f32` in `ggml/src/ggml-hexagon/htp/hvx-reduce.h` to use `dst, src` signature.
- Implement `_a` (aligned) and `_u` (unaligned) variants for `hvx_sum_of_squares_f32`.
- Update `hvx_reduce_loop_body` macro to support both returning and storing results via `finalize_op`.
- Update existing reduction functions in `hvx-reduce.h` to use the updated macro.
- Update `rms_norm_htp_f32` in `ggml/src/ggml-hexagon/htp/unary-ops.c` to match the new signature.

* hexagon: use hvx_splat instead of memset

* hexagon: consistent use of f32/f16 in all function names to match the rest of GGML

* hexagon: fix hvx_copy_f16_f32 on v75 and older

* hexagon: update readme to include GGML_HEXAGON_EXPERIMENTAL

* scripts: update snapdragon/adb scripts to enable host param
2026-01-14 21:46:12 -08:00
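A hedged sketch of the aligned/unaligned macro pattern used in the refactors above (hypothetical names; the real headers use HVX intrinsics, here the vector loads/stores are modeled with `memcpy`): one macro emits the loop body with the load/store operations as parameters, so the `_aa` (aligned) and `_uu` (unaligned) variants share all the logic.

```cpp
#include <cstddef>
#include <cstring>

#define VEC_BYTES 128 // HVX vector width

// Stand-ins: on real hardware the _A forms would be aligned vector ops.
#define VEC_LOAD_A(dst, src)  std::memcpy(dst, src, VEC_BYTES)
#define VEC_LOAD_U(dst, src)  std::memcpy(dst, src, VEC_BYTES)
#define VEC_STORE_A(dst, src) std::memcpy(dst, src, VEC_BYTES)
#define VEC_STORE_U(dst, src) std::memcpy(dst, src, VEC_BYTES)

// One macro emits the whole loop; the load/store ops are parameters.
// (Tail handling for n not a multiple of the vector width is omitted.)
#define HVX_SCALE_F32_BODY(load_op, store_op)                        \
    const size_t elems = VEC_BYTES / sizeof(float);                  \
    const size_t nvec  = n / elems;                                  \
    for (size_t i = 0; i < nvec; ++i) {                              \
        float v[VEC_BYTES / sizeof(float)];                          \
        load_op(v, src + i * elems);                                 \
        for (size_t j = 0; j < elems; ++j) v[j] *= s;                \
        store_op(dst + i * elems, v);                                \
    }

static void hvx_scale_f32_aa(float * dst, const float * src, float s, size_t n) {
    HVX_SCALE_F32_BODY(VEC_LOAD_A, VEC_STORE_A)
}

static void hvx_scale_f32_uu(float * dst, const float * src, float s, size_t n) {
    HVX_SCALE_F32_BODY(VEC_LOAD_U, VEC_STORE_U)
}

int main() {
    float src[64], dst[64];
    for (int i = 0; i < 64; ++i) src[i] = (float) i;
    hvx_scale_f32_aa(dst, src, 2.0f, 64); // 64 floats = 2 vectors
    hvx_scale_f32_uu(dst, dst, 0.5f, 64); // scale back down
    return dst[3] == 3.0f ? 0 : 1;
}
```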
Oliver Simons 36f0132464
CUDA: Factor out and re-use `block_reduce` function (#18785)
* CUDA: Refactor and expose two_stage_warp_reduce_* function

* Use `two_stage_warp_reduce` also in softmax kernel, move smem out of it

Moving smem out of the `__device__` function into the `__global__` function
allows for explicit smem reuse, as neither the compiler nor the CUDA runtime
seems to free it afterwards (`cudaFuncSetAttribute` fails when not accounting
for it once for each call to two_stage_warp_reduce). See the sketch after
this entry.

* Update ggml/src/ggml-cuda/common.cuh

Co-authored-by: Aman Gupta <amangupta052@gmail.com>

* Use two_stage_warp_reduce in group_norm_f32

* Use two_stage_warp_reduce in rms_norm_f32

* Fix smem calculation which expects bytes

* Make `two_stage_warp_reduce` accept all values warp_reduce accepts

Also integrate it into norm_f32 function

* Use two_stage_warp_reduce in l2_norm_f32

* Use type traits for block reduction for better legibility

Also address other requests by @am17an, such as variable renaming

* Make norm tests cover all cuda paths

* Mark columns % WARP_SIZE != 0 as supported for RMS_NORM_BACK

Unit-tests passed locally, let's see if they pass in the CI as well

* Use `enum class` for `block_reduce_method`

This is more type-safe than plain enum

* Rename variables as suggested in code review by @am17an

* Rename two_stage_warp_reduce -> block_reduce

* Fix trailing whitespace in common.cuh

* Make condition of static_assert type-dependent

This delays evaluation until the template is actually instantiated.
Otherwise, some compilers may evaluate the assert when parsing the
template, resulting in build errors as observed here:

https://github.com/ggml-org/llama.cpp/actions/runs/20960323123/job/60235530068?pr=18785

* Inline definitions

---------

Co-authored-by: Aman Gupta <amangupta052@gmail.com>
2026-01-15 10:44:54 +08:00
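A minimal CUDA C++ sketch of the two-stage block reduction described above (hypothetical names, not the actual `block_reduce` in `common.cuh`). The point of the refactor: the shared memory is declared by the `__global__` kernel and passed into the `__device__` helper, so the same buffer can be reused explicitly across calls.

```cpp
#include <cstdio>

__device__ float warp_reduce_sum(float v) {
    // Stage 1: butterfly reduction within one warp, no smem needed.
    for (int offset = 16; offset > 0; offset >>= 1) {
        v += __shfl_xor_sync(0xffffffff, v, offset);
    }
    return v;
}

// The caller owns the shared memory, so two reductions in one kernel can
// reuse a single buffer instead of each helper declaring its own.
__device__ float block_reduce_sum(float v, float * smem) {
    const int lane = threadIdx.x % 32;
    const int warp = threadIdx.x / 32;

    v = warp_reduce_sum(v);              // per-warp partial sums
    if (lane == 0) smem[warp] = v;       // one partial per warp
    __syncthreads();

    const int n_warps = (blockDim.x + 31) / 32;
    v = lane < n_warps ? smem[lane] : 0.0f;
    return warp_reduce_sum(v);           // stage 2: reduce the partials
}

__global__ void sum_rows(const float * x, float * out, int ncols) {
    __shared__ float smem[32];           // declared here, reusable by design
    float v = 0.0f;
    for (int i = threadIdx.x; i < ncols; i += blockDim.x) {
        v += x[blockIdx.x * ncols + i];
    }
    v = block_reduce_sum(v, smem);
    if (threadIdx.x == 0) out[blockIdx.x] = v;
}

int main() {
    const int ncols = 256;
    float hx[ncols], hsum = 0.0f;
    for (int i = 0; i < ncols; ++i) hx[i] = 1.0f;

    float *dx, *dout;
    cudaMalloc(&dx, ncols * sizeof(float));
    cudaMalloc(&dout, sizeof(float));
    cudaMemcpy(dx, hx, ncols * sizeof(float), cudaMemcpyHostToDevice);
    sum_rows<<<1, 128>>>(dx, dout, ncols);
    cudaMemcpy(&hsum, dout, sizeof(float), cudaMemcpyDeviceToHost);
    printf("sum = %.1f\n", hsum); // 256.0
    cudaFree(dx); cudaFree(dout);
    return 0;
}
```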
Piotr Wilkin (ilintar) d98b548120
Restore clip's cb() to its rightful glory - extract common debugging elements in llama (#17914)
* Extract common debugging functions; plug eval-callback and mtmd's MTMD_DEBUG_GRAPH into the same functionality

* Move to common

* Remove unneeded header

* Unlink from common

* chore: update webui build output

* Cleanup; properly pass params to mtmd without depending on common; refactor debug.cpp to use common debug code.

* Revert change to webapp

* Post-merge adjust

* Apply suggestions from code review

Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>

* Apply code review changes

* Remove changes to server-context

* Remove mtmd.h include

* Remove utility functions from header

* Apply suggestions from code review

Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>

* Rename functions

* Update tools/mtmd/clip.cpp

Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>

* Update tools/mtmd/clip.cpp

Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>

* Update tools/mtmd/clip.cpp

Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>

---------

Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>
2026-01-14 20:29:35 +01:00