* Add power management utilities to NPU device context and update DCVS settings
* Update DCVS settings in power_utils to use v3 API and enhance power management
* wip
* Enhance dequantization functions by adding load_dequant_table support and updating signatures for improved performance
* use lut
* wip
* fix test failure
* wip
* Refactor load_qual_block_generic to improve block handling and optimize vector operations
* Enhance load_dual_block_generic and load_qual_block_generic to accept a mask parameter for improved block handling
* Refactor flash_attn_impl to optimize mask l2 prefetch
* wip
* wip
* wip
* wip
* add log
* link against shared libraries instead of static ones
* fix swiglu
* wip
* refactor expf_fix to handle overflow for different data types
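  The overflow handling referred to here can be illustrated with a scalar sketch: clamp the argument before calling `exp` so the result saturates instead of overflowing. This is only an illustration with assumed float thresholds; the real `expf_fix` operates on HVX vectors in the hexagon backend and its exact cutoffs are not shown here.

  ```cpp
  #include <cmath>
  #include <cstdio>
  #include <limits>

  // Scalar sketch of an overflow-safe exp: clamp the argument so exp() saturates
  // instead of overflowing/underflowing float. Thresholds are approximate
  // (ln(FLT_MAX) ~ 88.7, ln(FLT_MIN) ~ -87.3); the real expf_fix works on HVX vectors.
  static float expf_clamped(float x) {
      constexpr float kMaxArg =  88.0f;
      constexpr float kMinArg = -87.0f;
      if (x > kMaxArg) return std::numeric_limits<float>::max();
      if (x < kMinArg) return 0.0f;
      return std::exp(x);
  }

  int main() {
      std::printf("%g %g %g\n", expf_clamped(1.0f), expf_clamped(1000.0f), expf_clamped(-1000.0f));
  }
  ```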
* enhance is_glu_op_supported to validate shapes for multiple sources
* wip
* refactor logging macros to use hexagon namespace and improve formatting
* fix printf format error
* wip
* refactor: update static_assert messages for block size validation and add HVX_VectorPred_x3 type alias
* rename
* feat: enhance fa with mask
* wip
* wip
* refactor: replace instances of Q6_V_vzero() with kZeroV for consistency
* wip
* wip
* wip
* fix: improve address alignment check in HVX_Vector handling
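  A hedged sketch of the kind of alignment check this refers to: HVX vectors are 128 bytes wide, so a pointer is only usable for aligned vector loads when its address is a multiple of the vector size. The stub type below stands in for `HVX_Vector` so the example compiles off-target; the actual check in the backend may differ.

  ```cpp
  #include <cstdint>
  #include <cstdio>

  // Stand-in for the 128-byte HVX_Vector type so this sketch builds off-target.
  struct alignas(128) hvx_vector_stub { uint8_t b[128]; };

  // True when the address itself is vector-aligned (assumption: the improved check
  // tests the pointer value, not just the byte count).
  static bool is_vector_aligned(const void * p) {
      return (reinterpret_cast<uintptr_t>(p) & (sizeof(hvx_vector_stub) - 1)) == 0;
  }

  int main() {
      alignas(128) static uint8_t buf[256];
      std::printf("%d %d\n", is_vector_aligned(buf), is_vector_aligned(buf + 1));
  }
  ```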
* refactor: streamline vector dot product implementations for improved readability
* refactor: q4k add hvx intrinsic impl
* refactor: enhance dequantize_row_q4_K for clarity and performance
* refactor: optimize scale mask usage in dequantization functions for improved performance
* refactor: optimize dequantize_row_q4_K for intrinsic usage and performance improvements
* refactor: move GLU operation implementation into separated file
* sync after swiglu
* wip
* wip
* wip
* feat: increase prc main thread stack size
* fix: replace hardcoded stack size with NPU_THREAD_STACK_SIZE constant
* wip
* feat: add optimized vector operations for exponential and division with overflow handling
* wip
* feat: refactor exponential function to handle overflow and underflow with improved logic
* wip
* wip
* feat: add vector loading and scaling functions for improved performance in block processing
* wip
* feat: optimize block loading by refactoring scale index handling for improved performance
* use Q6_Vb_vlut32_VbVbR_nomatch instead
* feat: enhance scale loading by adding static assertion and restructuring block handling
* wip
* feat: refactor vec_dot_product_mixed_impl for improved clarity and performance
* wip
* feat: simplify vector loading functions and improve alignment handling
* wip
* feat: enhance scale loading mask with quantization block size validation
* wip
* feat: implement make_scale_load_mask function and refactor vector handling in vec_ops
* feat: enhance load_dual_block_generic to include scale indices for improved vector loading
* revert q8 dequant
* wip
* feat: optimize dequantization functions by removing unnecessary masking and updating lookup methods
* wip
* wip
* add qurt_mutex
* Add DMA transfer class and integrate into thread pool
* Enhance DMA transfer functionality by adding support for multiple descriptors and initiating transfers in parallel
* fix dma crash
* fix failed unit tests
* wip
* use alignas
* Improve DMA transfer error handling and update descriptor completion check
* Fix VTCM cache size calculation in element-wise operations
* Add cache clean operations before DMA transfers in element-wise operations
* reduce cache clean operations
* Refactor DMA transfer functions to support 1D operations and rename for clarity
* Enhance DMA transfer functionality by adding 2D submission support and improving descriptor initialization
* Update read buffer method to support forced invalidation and remove unnecessary invalidation calls in element-wise operations
* wip
* Improve DMA transfer handling in mul_mat_gemv_impl by replacing memcpy with initiate_dma_row_transfer and adding wait_for_dma logic
* fix 2d dma
* feat: add DMA plane cache
* rename
* wip
* use memcpy for debug
* fix cache plane calc
* refactor: remove debug logging from mul_mat_impl and optimize cache handling
* rename
* fix 2d dma type
* refactor: enhance DMA transfer handling in mul_mat_gemv_impl and wait functions
* refactor: optimize DMA transfer handling in mul_mat_gemv_impl and wait functions
* wip
* wip
* move op impl into sub dir
* add log
* fix: correct pointer usage in mul_mat_gemv_impl for next plane access
* fix: improve DMA transfer error handling in mul_mat_impl and mul_mat_gemv_impl
* fix: fix crash by using the entire row bytes
* wip
* wip
* fix: prevent parallelization for scalar src1 in is_mul_mat_supported
* fix: add dimension checks for 2D DMA transfers and fallback to 1D if necessary
* wip
* fix: enable thread barrier for matrix multiplication operations
* feat: add synchronization checks for tensor operations and update related functions
* wip
* fix: remove invalidation flag from get_read_buffer calls in element-wise and matrix multiplication operations
* Revert "fix: remove invalidation flag from get_read_buffer calls in element-wise and matrix multiplication operations"
This reverts commit af3441e67e706b2e5122369dc160353796867dd3.
* wip
* wip
* add comment
* fix: improve DMA transfer handling in mul_mat_gemv_impl for quantized source tensors
* add log
* try fix mulmat gemv
* wip
* fix: enhance DMA transfer handling in mul_mat_gemv_impl for quantized source tensors
* fix: optimize cache offset calculation and remove redundant swap in mul_mat_gemv_impl
* fix: refactor DMA transfer handling in mul_mat_gemv_impl for improved clarity and maintainability
* wip
* wip
* wip
* fix: enhance mul_mat_impl for improved cache handling and clarity
* fix: refactor tensor unflattening and DMA transfer initialization for improved clarity and type safety
* fix: improve cache handling of quant
* wip
* fix: improve cache handling in mul_mat_impl and mul_mat_gemv_impl for better memory efficiency
* rename
* add load_hexa_block_generic
* wip
* extract dequant block into separated function
* refactor: enhance dequantization functions with table parameter
* fix load_dual_block_generic
* refactor: rename dequantization functions for clarity and enhance block handling
* refactor: simplify dequantization logic by consolidating block handling and removing unused parameters
* wip
* wip
* feat: Add granite-docling conversion using trillion pretokenizer
Branch: gabe-l-hart/GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Add granite-docling vocab pre enum
Branch: gabe-l-hart/GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix: Use granite-docling pre
Branch: gabe-l-hart/GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Add clip_is_idefics3
Branch: gabe-l-hart/GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Allow multi-token boundary sequences for image templating
Branch: gabe-l-hart/GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Add tiling support for idefics3 in clip.cpp
This should likely be moved into llava_uhd::get_slice_instructions, but for
now this avoids disrupting the logic there.
Branch: gabe-l-hart/GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Partial support for full templating for idefics3 in mtmd
There are still errors encoding some of the image chunks, but the token
sequence now matches transformers _almost_ perfectly, except for the double
newline before the global image, which shows up as two consecutive newline
tokens instead of a single double-newline token. I think this is happening
because the blocks are tokenized separately and then concatenated.
Branch: gabe-l-hart/GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
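A self-contained toy example of the effect described above (tokenizing blocks separately and concatenating vs. tokenizing the joined text); the tokenizer here is a mock, not the mtmd/llama tokenizer.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Toy tokenizer: "\n\n" is a single token when seen in one pass, but tokenizing
// two blocks separately and concatenating yields two "\n" tokens instead.
static std::vector<int> toy_tokenize(const std::string & s) {
    std::vector<int> out;
    for (size_t i = 0; i < s.size(); ) {
        if (s.compare(i, 2, "\n\n") == 0) { out.push_back(2); i += 2; }  // id 2 = "\n\n"
        else if (s[i] == '\n')            { out.push_back(1); i += 1; }  // id 1 = "\n"
        else                              { out.push_back(0); i += 1; }  // id 0 = other
    }
    return out;
}

int main() {
    auto joint = toy_tokenize("a\n\nb");    // -> {0, 2, 0}
    auto a     = toy_tokenize("a\n");
    auto b     = toy_tokenize("\nb");
    a.insert(a.end(), b.begin(), b.end());  // -> {0, 1, 1, 0}
    std::printf("joint %zu tokens, concatenated %zu tokens\n", joint.size(), a.size());
}
```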
* feat: Fully working image preprocessing for idefics3 w/ resize and slicing
Branch: gabe-l-hart/GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Parse the preprocessor config's longest side and add it to the mmproj hparams
Branch: GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix: Use the longest side instead of size * scale_factor
For Granite Docling, these come out to the same value, but that was just a
coincidence.
Branch: GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix: Allow batch encoding and remove clip_is_idefics3
Branch: GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* refactor: Remove unnecessary conditionals for empty token vectors
Branch: GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* refactor: Use image_manipulation util
Branch: GraniteDocling
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* add test model
---------
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
* rpc : add support for multiple devices
Allow rpc-server to expose multiple devices from a single endpoint.
Change RPC protocol to include device identifier where needed.
closes: #15210
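As a hedged illustration of "include device identifier where needed": a request that used to address the endpoint's single implicit device now carries a device index. The struct and field names below are hypothetical and do not reflect the actual llama.cpp RPC wire format.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical request message: the added `device` field selects one of the
// devices exposed by the rpc-server endpoint. Layout is illustrative only.
struct rpc_msg_buffer_alloc_req {
    uint32_t device;   // index of the device on this endpoint
    uint64_t size;     // requested buffer size in bytes
};

static std::vector<uint8_t> serialize(const rpc_msg_buffer_alloc_req & req) {
    std::vector<uint8_t> out(sizeof(req));
    std::memcpy(out.data(), &req, sizeof(req));  // fixed-layout POD, toy example only
    return out;
}

int main() {
    rpc_msg_buffer_alloc_req req = { /*device=*/1, /*size=*/1u << 20 };
    return serialize(req).size() == sizeof(req) ? 0 : 1;
}
```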
* fixes
* use ggml_backend_reg_t
* address review comments
* fix llama-bench backend report
* address review comments, change device naming
* fix cmd order
* vulkan (DRAFT): split shader generation by GLSL source file, to improve incremental build times
* support dep-files so shaders are recompiled if their included files change
* rename shader files which are used as "headers" to use .glsl extension
* move glslc extension detection shaders to separate folders
* the above is to prevent them from getting glob'd with the actual compute shaders that need to be compiled
* vulkan : only write embedded shader .hpp/.cpp when they change
* avoid recompiling ggml-vulkan.cpp when editing shaders
* pass single --source argument instead of --input-dir & --filter to shader gen
* check for source file match earlier
* fix hang in vulkan-shaders-gen when there are compilation errors
* early out did not decrement compile_count
* clean up
* fix glslc integer dot product test
* unconditionally write the embedded shader cpp output
* replace output filepath in generated dep-files to match output in CMakeLists
---------
Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
* feat: added a dedicated Magistral chat format that preserves [THINK] spans, parses reasoning before tool calls
* feat: new flow in the chat template test suite for Magistral
* initial commit for branch 3
* generalize `swa_checkpoint` to `ctx_checkpoint`
this extends `llama-server`'s SWA checkpointing logic to include
hybrid/recurrent models such as Jamba, Granite
* oops
* disable debug prints
* keep backwards compat with `--swa-checkpoints`
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* update prompt re-processing message
* fix off-by-one error per GG
* keep `seq_rm` log per GG
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* server : fix checkpoint logic to support recurrent caches
* server : cleanup and fixes
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* vulkan: Replace uses of maxMemoryAllocationSize and VK_WHOLE_SIZE
Replace maxMemoryAllocationSize check with maxBufferSize when creating buffers.
The maxMemoryAllocationSize limit is a "soft" limit and allocations can succeed
beyond that limit. This allows > 4GB buffers to be allocated on some
implementations (e.g. NVIDIA) and tensors this large can be used for im2col
and mul_mat.
For temporary buffers (prealloc_x/y/etc) check against maxStorageBufferRange.
I'm not sure this check is ideal, but we always use these buffers as a single
full size binding and the limit may be smaller than maxMemoryAllocationSize
or maxBufferSize, so I think this is reasonable.
Replace descriptor range uses of VK_WHOLE_SIZE with a manually computed range.
The maxStorageBufferRange may be smaller than the maxBufferSize or
maxMemoryAllocationSize (and the Vulkan spec warns about this in a note) and
it's invalid usage if VK_WHOLE_SIZE computes a range larger than
maxStorageBufferRange.
With this change, it should be possible to generate videos using wan networks
in stable-diffusion.cpp.
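A sketch of the limit queries described above, assuming the maintenance3/maintenance4 properties are available; the real ggml-vulkan code is more involved (extension checks, environment overrides, per-buffer-type decisions).

```cpp
#include <vulkan/vulkan.h>

// Prefer maxBufferSize (maintenance4) over the "soft" maxMemoryAllocationSize
// when sizing device buffers, and cap single-binding temporary buffers by
// maxStorageBufferRange. Simplified relative to the actual backend.
struct buffer_limits {
    VkDeviceSize max_buffer_size;
    VkDeviceSize max_storage_range;
};

static buffer_limits query_buffer_limits(VkPhysicalDevice dev) {
    VkPhysicalDeviceMaintenance4Properties maint4 = {};
    maint4.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MAINTENANCE_4_PROPERTIES;

    VkPhysicalDeviceMaintenance3Properties maint3 = {};
    maint3.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MAINTENANCE_3_PROPERTIES;
    maint3.pNext = &maint4;

    VkPhysicalDeviceProperties2 props2 = {};
    props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
    props2.pNext = &maint3;
    vkGetPhysicalDeviceProperties2(dev, &props2);

    buffer_limits lim;
    // maintenance4 may be unsupported (maxBufferSize stays 0); fall back to the
    // soft maxMemoryAllocationSize limit in that case.
    lim.max_buffer_size   = maint4.maxBufferSize ? maint4.maxBufferSize
                                                 : maint3.maxMemoryAllocationSize;
    lim.max_storage_range = props2.properties.limits.maxStorageBufferRange;
    return lim;
}
```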
* vulkan: Add env var GGML_VK_FORCE_MAX_BUFFER_SIZE and use stoull
When computing sinks, the cm1 shader was looping r from 0 to Br rather than
to rows_per_thread. I must have copied this from the scalar path (where it is
correct), and somehow it wasn't causing failures on current drivers.
* feat: Capture model name only after first token (streaming) or completed request (non-streaming)
* chore: update webui build output
* chore: update webui build output
* fix: Include just the currently active message branches instead of all in chat completions request
* chore: Build webui static output
* chore: Formatting
* chore: update webui build output
* do not use more threads than physically available
* ensure n_threads > 0
Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
---------
Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
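The two thread-count fixes above amount to a clamp like the sketch below; note that `hardware_concurrency()` reports logical processors and may return 0, whereas the actual change counts physically available cores.

```cpp
#include <algorithm>
#include <cstdio>
#include <thread>

// Never request more threads than the machine reports, and never end up with
// zero threads. hardware_concurrency() can itself return 0, hence the final max.
static unsigned clamp_n_threads(unsigned requested) {
    unsigned hw = std::thread::hardware_concurrency();  // may be 0 if unknown
    unsigned n  = hw ? std::min(requested, hw) : requested;
    return std::max(n, 1u);
}

int main() {
    std::printf("%u %u\n", clamp_n_threads(0), clamp_n_threads(1024));
}
```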
* First attempt
* No permute during convert (fixes qk tensors), proper norm application.
* RoPE = NeoX
* Coherence!
* Migrate xielu params from tensors to hyperparameters
* Simple CUDA kernel
* Revert stupid LLM refactorings
* Chat template support
* configchecker / flake8 errors
* Reorder unary.cu
* I do conclude that LLMs are, in fact, stupid.
* Fix after merge
* Final newline
* Make xIELU an UNARY_OP
* Final newline
* Correctly account for parameter shift
* Argh.
* Update ggml/src/ggml-cpu/unary-ops.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Refactor: remove unused methods, inline and factorize softplus, add const modifiers
* Revert CUDA changes, implement xIELU as a separate OP
* Pesky newline
* Add float2half / half2float for F16 inputs/outputs
* CUDA variants, attempt 2
* Actually, attempt 3
* Update ggml/src/ggml-cuda/unary.cu
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* Missing convert header
* Proper formula and reference for xIELU in the comments.
* Modify unary-ops.cpp to add the functor-based logic besides the template system to retain optimizations
* Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Add tensor mappings for Apertus to global list instead
* Fix lazy on scalars
* Update ggml/src/ggml-cuda/unary.cu
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* Add comment about the constraints on positive/negative alpha
* Change `softplus` to `ggml_softplus`
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* update oneapi to 2025.2, use deep-learning-essentials to replace base-tool
* update to 2025.2, use deep-learning-essentials to replace base toolkit
* add missed dll
* add deep learning essentials
* add sycl-ls
---------
Co-authored-by: Zhang Jianyu <zhang.jianyu@outlook.com>
* HIP: Disable ROCWMMA fatt on CDNA when compiled against ROCWMMA 2.0.0
rocwmma 2.0.0 includes a bug in the code faking fp16 accumulation on CDNA
* CUDA: Fix volta condition in ggml_cuda_should_use_wmma_fattn
* Fix to use hidden_size_per_head
* Fix num heads
* Fix array
* Fix loading weights
* Support old GGUF converted by the previous version of llama.cpp
* Update src/llama-model.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Move shared parameter definitions to the outside of loop
* Do not calculate n_embd_head_k/v as n_embd / n_head
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* CI: Properly install rocwmma for hip builds
on windows we now install rocwmma from ubuntu packages
* CI: update linux rocm docker build to use rocm 7.0
* common: introduce http.h for httplib-based client
This change moves cpp-httplib based URL parsing and client setup into
a new header `common/http.h`, and integrates it in `arg.cpp` and `run.cpp`.
It is an iteration towards removing libcurl, while intentionally
minimizing changes to existing code to guarantee the same behavior when
`LLAMA_CURL` is used.
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
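A minimal sketch of the kind of helper a httplib-based `common/http.h` can expose, assuming the vendored cpp-httplib header; the real helper names and signatures in `common/http.h` are not shown here, this only illustrates the approach.

```cpp
#include <cstdio>
#include <string>

#include "httplib.h"  // cpp-httplib (assumed vendored header)

// Split a URL into base ("scheme://host[:port]") and path, then issue a GET.
static bool http_get(const std::string & url, std::string & body) {
    size_t scheme_end = url.find("://");
    if (scheme_end == std::string::npos) return false;
    size_t path_start = url.find('/', scheme_end + 3);
    std::string base = path_start == std::string::npos ? url : url.substr(0, path_start);
    std::string path = path_start == std::string::npos ? "/" : url.substr(path_start);

    httplib::Client cli(base);
    auto res = cli.Get(path);
    if (!res || res->status != 200) return false;
    body = res->body;
    return true;
}

int main() {
    std::string body;
    if (http_get("http://example.com/index.html", body)) {
        std::printf("%zu bytes\n", body.size());
    }
}
```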
* tools : add missing WIN32_LEAN_AND_MEAN
Signed-off-by: Adrien Gallouët <adrien@gallouet.fr>
---------
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
Signed-off-by: Adrien Gallouët <adrien@gallouet.fr>
* feat: Add a setting to include model name used to generate the message
* feat: UI improvements
* feat: Save model info along with the database message entry creation
* chore: Build webui static output
* Make a few GLM tensors not required
layer.nextn.shared_head_head and layer.nextn.embed_tokens are both excluded from GLM 4.6, resulting in the model not loading after conversion/quantization. This marks those tensors as not required, which makes loading work.
* Update llama-model.cpp
layer.nextn.shared_head_norm also not required in case of future models
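The "not required" pattern above can be illustrated with a self-contained mock: a missing required tensor aborts loading, while an optional one simply comes back null and loading continues. In llama.cpp this is handled by the loader's not-required flag on tensor creation, not by this toy lookup.

```cpp
#include <map>
#include <stdexcept>
#include <string>

// Toy tensor table: required lookups throw on a missing tensor, optional ones
// return nullptr so the caller can skip the tensor (e.g. GLM 4.6 nextn heads).
struct tensor_table {
    std::map<std::string, int> tensors;  // name -> dummy handle

    const int * get(const std::string & name, bool required) const {
        auto it = tensors.find(name);
        if (it != tensors.end()) return &it->second;
        if (required) throw std::runtime_error("missing tensor: " + name);
        return nullptr;
    }
};

int main() {
    tensor_table tbl;
    tbl.tensors["token_embd.weight"] = 0;
    const int * optional = tbl.get("blk.0.nextn.shared_head_head.weight", /*required=*/false);
    return optional == nullptr ? 0 : 1;  // absent, but loading continues
}
```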
* Work on rope
* Simplify inplace operation generation and combine mul/add generation
* Work on rope variants
* implement neox rope
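For reference, a scalar sketch of NeoX-style RoPE: dimension i is paired with i + n_dims/2 (instead of adjacent pairs), and each pair is rotated by pos * base^(-2i/n_dims). The actual shader implementation differs; this only shows the math.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Scalar reference of NeoX-style rotary position embedding over a full vector.
static void rope_neox_ref(std::vector<float> & x, int pos, float theta_base = 10000.0f) {
    const int n_dims = (int) x.size();
    const int half   = n_dims / 2;
    for (int i = 0; i < half; ++i) {
        const float theta = pos * std::pow(theta_base, -2.0f * i / n_dims);
        const float c = std::cos(theta), s = std::sin(theta);
        const float x0 = x[i], x1 = x[i + half];
        x[i]        = x0 * c - x1 * s;
        x[i + half] = x0 * s + x1 * c;
    }
}

int main() {
    std::vector<float> v = {1, 0, 0, 1};
    rope_neox_ref(v, /*pos=*/3);
    std::printf("%f %f %f %f\n", v[0], v[1], v[2], v[3]);
}
```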
* rope complete
* Add sub,div,glu operators
* implement scale op
* Update cpy shader to handle cont/more types
* formatting
* Update test vars printing for rope,rms_norm
* Avoid ROPE hardcoded constants
* Add TODO to change ROPE constants to enum
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* fix TODO comment
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>