* feat: add mixed precision dot product implementation and function declaration
* feat: implement mixed precision vector dot product and conversion functions
* fix: update data type handling in matrix multiplication implementation
* fix: adjust row count handling in matrix multiplication implementation for accurate slicing
* fix: optimize matrix multiplication implementation by unrolling the loop
* update performance tracking for matrix multiplication implementation
* add fetching
* wip
* fix: support F16 * F32 multiplication in is_mul_mat_supported function
* fix: improve src0 fetching logic in vec_dot_product_mixed_impl for better alignment handling
* fix test failure for row width 67
* try to fix failing test
* fix: rename aligned_address to align_down for clarity in vector alignment handling
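The renamed helper is the usual power-of-two round-down. A minimal sketch, assuming a power-of-two alignment such as the 128-byte HVX vector size (illustrative, not the exact backend code):
```c
#include <stddef.h>
#include <stdint.h>

// Round an address down to the nearest multiple of `alignment`.
// `alignment` must be a power of two (e.g. the 128-byte HVX vector size).
static inline const uint8_t * align_down(const uint8_t * addr, size_t alignment) {
    return (const uint8_t *) ((uintptr_t) addr & ~(uintptr_t) (alignment - 1));
}
```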
* wip
* qnn fix: update device capabilities for quantized types in qnn-lib to improve compatibility
* fix test failure at width == 193
* fix: replace zero vector initialization with previous vector in mixed dot product implementation
* wip
* fix: improve handling of last vector in mixed dot product implementation
* wip
* wip
* wip
* wip
* Enhance mul_mat_f32 function to support quantized types and improve static assertions
* rename
* Refactor dequantization functions to use npu_device_fp16_t and improve type handling
* Optimize dequantization in dequantize_row_q8_0 by replacing qf32 multiplication with qf16
* Optimize dequantization in dequantize_row_q4_0 by replacing qf32 multiplication with qf16
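Both rows follow the same scalar pattern: one fp16 scale per block times the quantized values. A scalar reference for Q8_0 (block layout mirrors ggml's; the commits above keep the d*q multiply in qf16 on the NPU instead of widening everything to f32 first):
```c
#include <stdint.h>

#define QK8_0 32

// Mirrors ggml's block_q8_0 layout: one fp16 scale, 32 int8 quants.
typedef struct {
    uint16_t d;          // scale (fp16 bits)
    int8_t   qs[QK8_0];  // quantized values
} block_q8_0_ref;

extern float ggml_fp16_to_fp32(uint16_t h); // provided by ggml

static void dequantize_row_q8_0_ref(const block_q8_0_ref * x, float * y, int64_t k) {
    const int64_t nb = k / QK8_0;
    for (int64_t i = 0; i < nb; ++i) {
        const float d = ggml_fp16_to_fp32(x[i].d);
        for (int j = 0; j < QK8_0; ++j) {
            y[i*QK8_0 + j] = d * x[i].qs[j];
        }
    }
}
```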
* Add hvx_vsf_convert_vhf function for improved vector conversion
* add perf logs
* Refactor dequantize_row_q4_0 for alignment
* Update logging in supports_op_impl and supports_op to use ggml_op_desc for better clarity
* Add support for ROPE operation in NPU capabilities and related functions
* Implement ROPE operation in tensor and op_rope, including cache initialization and correction dimension calculations
* enable ROPE by adding operation validation
* add support for the case where freq is null
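The usual fallback when the optional frequency-factors tensor is absent is a factor of 1.0f per dimension. A hedged sketch following the common ggml RoPE pattern (names illustrative):
```c
// When freq_factors is NULL every rotated dimension uses a factor of 1.0f;
// otherwise the factor for pair i0 divides the base angle.
static float rope_theta(float theta_base, const float * freq_factors, int i0) {
    const float ff = freq_factors ? freq_factors[i0/2] : 1.0f;
    return theta_base / ff;
}
```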
* wip
* Refactor rope_f32 to improve indexing by introducing total_planes calculation
* reformat
* Refactor rope_f32 to optimize data access patterns by introducing row and plane pointers
* Add performance tracking to rope_f32 function for enhanced profiling
* Refactor rope_f32 to use a templated implementation
* Refactor rope_impl to replace loop with memcpy for improved performance
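The loop-to-memcpy change applies to the tail of each row: RoPE only rotates the first n_dims elements, so the remainder can be copied in one call. A sketch of the idea (not the exact kernel code):
```c
#include <string.h>

// Elements past n_dims are not rotated by RoPE; a single memcpy replaces
// the per-element pass-through loop.
static void copy_rope_tail(float * dst, const float * src, int ne0, int n_dims) {
    if (ne0 > n_dims) {
        memcpy(dst + n_dims, src + n_dims, (size_t) (ne0 - n_dims) * sizeof(float));
    }
}
```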
* Refactor mul_mat_impl to support quantization as a template parameter
* wip
* wip
* Refactor rope_impl to optimize plane indexing in the processing loop
* Add aligned vector dot product implementation for mixed precision types
* wip
* Enhance matrix multiplication for F32 and F16 types with alignment checks
* Optimize vec_dot_product_mix_aligned_impl for improved performance with additional vector sums
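"Additional vector sums" is the standard multiple-accumulator idiom: several independent partial sums break the dependency chain of a single running total and hide FMA latency. A scalar analogue of what the aligned HVX path does with vector registers:
```c
// Four independent accumulators, combined once at the end; the HVX version
// does the same with vector registers plus a final reduction.
static float dot_unrolled(const float * a, const float * b, int n) {
    float s0 = 0.f, s1 = 0.f, s2 = 0.f, s3 = 0.f;
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i+0]*b[i+0];
        s1 += a[i+1]*b[i+1];
        s2 += a[i+2]*b[i+2];
        s3 += a[i+3]*b[i+3];
    }
    for (; i < n; ++i) {
        s0 += a[i]*b[i]; // scalar tail
    }
    return (s0 + s1) + (s2 + s3);
}
```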
* Add alignment checks for matrix multiplication and vector dot products
* Refactor matrix multiplication to use function pointers for improved readability and maintainability
* Fix alignment check in is_dot_product_aligned to ensure correct vector size handling
* Remove unused f16_to_f32_table parameter from quantization and dequantization functions
* wip
* Add L2 fetch for src1 plane rows in matrix multiplication implementation
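The L2 fetch follows the usual Hexagon prefetch idiom: l2fetch takes a start address plus a 64-bit control word packing direction, stride, width, and height. A sketch with the field order used in the SDK samples (an assumption; verify against the target SDK):
```c
#include <stdint.h>

// Prefetch `height` rows of `width` bytes, `stride` bytes apart, into L2.
// Field layout (dir:stride:width:height, 16 bits each, MSB first) follows
// the Hexagon SDK sample code -- treat it as an assumption.
static inline void l2fetch_rows(const void * p, uint32_t stride,
                                uint32_t width, uint32_t height) {
    const uint64_t control = ((uint64_t) 1      << 48) |  // direction: forward
                             ((uint64_t) stride << 32) |
                             ((uint64_t) width  << 16) |
                             ((uint64_t) height);
    __asm__ __volatile__ (" l2fetch(%0, %1) " : : "r"(p), "r"(control));
}
```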
* wip
* Refactor hvx_vsf_convert_vhf to accept an additional parameter for flexibility in vector multiplication
* Refactor vec_dot_product_mix_aligned_impl to improve variable naming for clarity
* Refactor load_dual_block_generic and dequantize_row_q4_0 to improve performance
* Refactor vector operation functions to improve clarity and consistency in variable usage
* wip
* wip
* Refactor dequantize_row_q4_0_impl for improved clarity and performance in vector operations
* wip
* Update load_dual_block_generic to use intrinsics
* Refactor load_dual_block_generic and load_qual_block_generic for improved performance and clarity
* wip
* wip
* Optimize dequantize_row_q8_0 for improved performance by unrolling the for loop
* wip
* wip
* fix typo
* ggml : add ggml_scale_bias
* ggml_vec_mad1_f32
* add more simd
* add CUDA
* sycl
* vulkan
* cann (placeholder)
* opencl
* will this fix cpu?
* fix cuda
* suggestions from coderabbit
* fix cann compile error
* vDSP_vsmsa
* rm __ARM_FEATURE_SVE
* use memcpy for op params
* make code look more consistent
* use scalar for __ARM_FEATURE_SVE
* add x param to ggml_vec_mad1_f32
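After the x-param change the scalar form of the helper is a fused multiply-add with a constant addend; a reference sketch with the signature implied by the commits above (the SIMD/vDSP variants compute the same thing):
```c
// y[i] = x[i]*s + b -- scalar fallback; the SIMD paths mirror this.
static void ggml_vec_mad1_f32(const int n, float * y, const float * x,
                              const float s, const float b) {
    for (int i = 0; i < n; ++i) {
        y[i] = x[i]*s + b;
    }
}
```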
* vulkan: allow FA split_k with smaller KV values
* vulkan: spread split_k_reduce work across more threads
k_num can get rather large. Use the whole workgroup to reduce the M/L values.
Launch a thread for each element in the HSV dimension of the output. Helps a
lot for large HSV (like deepseek).
The fused operation was grabbing the epsilon value from the wrong place.
Add an env var to disable fusion.
Add some missing checks for supported shapes/types.
Handle fused rms_norm+mul in check_results.
* vulkan: Handle updated FA dim2/3 definition
Pack mask boolean and n_head_log2 into a single dword to keep the push
constant block under the 128B limit.
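A sketch of that packing (the bit layout here is illustrative, not necessarily the shader's exact encoding):
```c
#include <stdint.h>

// Host side: the mask flag rides in the top bit, n_head_log2 in the rest,
// so a single push-constant dword carries both values.
static uint32_t pack_mask_n_head_log2(int has_mask, uint32_t n_head_log2) {
    return (n_head_log2 & 0x7FFFFFFFu) | (has_mask ? 0x80000000u : 0u);
}

// Shader side (expressed in C for illustration): unpack the same dword.
static void unpack_mask_n_head_log2(uint32_t packed, int * has_mask,
                                    uint32_t * n_head_log2) {
    *has_mask    = (packed & 0x80000000u) != 0;
    *n_head_log2 =  packed & 0x7FFFFFFFu;
}
```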
* handle null mask for gqa
* allow gqa with dim3>1
* kv-cache : use ggml_set_rows
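ggml_set_rows scatters source rows into a destination at given row indices, which is what lets the cache write new tokens into arbitrary slots from inside the graph. A hedged usage sketch (argument order assumed):
```c
#include "ggml.h"

// Scatter the rows of `src` into `dst` at the positions given by the
// I64 index tensor `idx` (one destination index per source row).
static struct ggml_tensor * build_cache_write(
        struct ggml_context * ctx,
        struct ggml_tensor  * dst,   // e.g. a view of the K cache
        struct ggml_tensor  * src,   // newly computed K rows
        struct ggml_tensor  * idx) { // destination slot indices (I64)
    return ggml_set_rows(ctx, dst, src, idx);
}
```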
* graph : separate k and v indices
* cont : remove redundant ifs
* kv-cache : improve find_slot impl
* kv-cache : bounds-check when accessing slot_info indices
* kv-cache : add comments
* ggml : add TODOs for adding GGML_OP_SET_ROWS support in the backends
* llama : initial Mamba-2 support
* ggml : SIMD ggml_ssm_scan for Mamba-2
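For orientation, the recurrence that ggml_ssm_scan vectorizes is the selective-scan update below, shown as a scalar reference for a single token with simplified shapes (names illustrative; dt is assumed to already have softplus applied):
```c
#include <math.h>

// One time step: h[d][n] = exp(dt[d]*A[d][n])*h[d][n] + dt[d]*B[n]*x[d],
// then y[d] = sum_n C[n]*h[d][n]. (Some variants also add D[d]*x[d];
// see the later commit that moves the multiply by D out of the op.)
static void ssm_scan_step(int d_inner, int d_state,
                          float * h,        // [d_inner * d_state], updated in place
                          const float * A,  // [d_inner * d_state]
                          const float * B,  // [d_state]
                          const float * C,  // [d_state]
                          const float * dt, // [d_inner]
                          const float * x,  // [d_inner]
                          float * y) {      // [d_inner]
    for (int d = 0; d < d_inner; ++d) {
        float sum = 0.0f;
        for (int n = 0; n < d_state; ++n) {
            const int i = d*d_state + n;
            h[i] = expf(dt[d]*A[i])*h[i] + dt[d]*B[n]*x[d];
            sum += C[n]*h[i];
        }
        y[d] = sum;
    }
}
```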
* ggml : improve ggml_mul speed when masking recurrent states
* llama : support running Mamba-Codestral-7B-v0.1
* llama : fix Mamba-2 conv state saving
* ggml : make the ggml_mul fast broadcast path more consistently formatted
* llama : remove unused variable
* llama : add missing break
* convert_hf : prefer SentencePiece tokenizer for Mamba-2 when present
The tokenizer.json of Mamba-Codestral-7B-v0.1 otherwise requires
workarounds to work correctly.
* llama : avoid redundant state copy for Mamba 1 and 2
* metal : attempt to adapt SSM_SCAN for Mamba-2
* metal : fix SSM_SCAN pipeline scope
* metal : use log and exp instead of log1pf and expf in SSM_SCAN
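The substitution is just the softplus identity log1p(exp(x)) == log(1 + exp(x)), useful because Metal's math library does not provide log1p:
```c
#include <math.h>

// softplus(x) = log1p(exp(x)); the SSM_SCAN kernel uses the equivalent
// log(1 + exp(x)) form since log1p is unavailable in MSL.
static float softplus_ref(float x) {
    return logf(1.0f + expf(x));
}
```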
* metal : remove unused arguments for SSM_SCAN
The max index is 31, so trimming the arguments is necessary.
* metal : add back n_seqs to SSM_SCAN args
Whoops, this is needed for the offset in the concatenated output.
* metal : fix SSM_SCAN state head offset
* metal : fix wrong number of tokens per sequence in SSM_SCAN
* ggml : remove unused fast broadcast path in GGML_MUL
This was initially added because states were masked with ggml_mul,
but this is no longer done and so this "optimisation" is no longer
necessary, or at least not worth the additional code complexity.
* ggml : avoid multiply by D in GGML_OP_SSM_SCAN
This makes the weight buft detection in src/llama.cpp simpler.
* convert : transpose Mamba-2 A, D and reshape SSM_NORM
This breaks existing conversions of Mamba-2 models
to avoid some reshapes.
Not sure if it's a good idea,
but it makes the graph slightly cleaner.
* llama : more appropriate SSM_SCAN and SSM_CONV buft support checks
* convert : fix flake8 lint
* metal : fix confusion between ; and ,
* metal : add missing args for nb references in ssm_scan_f32_group
* metal : single-user mamba2 inference works
* kv-cache : remove const_cast when setting inputs for s_copy
And also fix multi-user inference for recurrent models
by using cell_id instead of i as the kv cell index
when populating s_copy.
* convert : avoid AutoConfig for Mamba and Mamba2 hparams
* kv-cache : allow context shift for recurrent models
* graph : fix recurrent state copies when avoiding copies
Works, but using lambda functions might not be that clean.
* ggml : fix mamba2 ssm scan when compiled with SVE
* ggml-cpu : reorder SVE FMA for consistency with other SIMD arches
* cuda : implement ssm scan for Mamba2
There is still room for improvement, but it works!
* cuda : adapt Mamba1 ssm scan to shape changes from Mamba2
* mamba : fix mismatched new and delete size for llm_build_mamba
Subclasses of llm_graph_context cannot have extra fields,
because the called destructor is not the one from the subclass.
This otherwise would cause problems when running Mamba-(1|2) inference
when compiled with -DGGML_SANITIZE_ADDRESS=ON.
* cuda : graceful fallback for Mamba-1 models with weird embd size
* ggml : add version function to get lib version
This commit adds a function `ggml_version()` to the ggml library that
returns the version of the library as a string.
The motivation for this is that it can be useful to
programmatically check the version of the ggml library being used.
Usage:
```c
printf("GGML version: %s\n", ggml_version());
```
Output:
```console
GGML version: 0.0.2219
```
* ggml : add ggml_commit()
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* CUDA: add softmax broadcast
* Pass by const ref
* Review: Use blockDims for indexing, remove designated initializers
* Add TODO for noncontiguous input/output
* Add a callback that will be called just before abort. This allows apps without a console to display a message to the user and save data if needed.
* Return previous callback to allow callback chaining
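A sketch of the chaining pattern these two commits describe; the callback type and setter name here are hypothetical stand-ins, not the exact ggml API:
```c
#include <stdio.h>

// Hypothetical names for illustration -- see ggml.h for the real API.
typedef void (*abort_callback_t)(const char * msg);
extern abort_callback_t set_abort_callback(abort_callback_t cb);

static abort_callback_t prev_cb;

static void my_abort_handler(const char * msg) {
    fprintf(stderr, "about to abort: %s\n", msg); // a GUI app would show a dialog
    // ... save application data here ...
    if (prev_cb) {
        prev_cb(msg); // chain to the previously installed handler
    }
}

void install_abort_handler(void) {
    prev_cb = set_abort_callback(my_abort_handler); // setter returns the old callback
}
```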
* style fixes
---------
Co-authored-by: Diego Devesa <slarengh@gmail.com>
* add "align corners" mode for bilinear upscale, and allow downscaling
* add ggml_interpolate, deprecate ggml_upscale_ext, pass in align-corners as bit-flag
* test-backend-ops: replace ggml_upscale_ext with ggml_interpolate, add test cases for downscale and align-corners
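The two modes differ only in how a destination index maps to a source coordinate; this is standard bilinear sampling math, not the exact kernel code:
```c
// align_corners maps the first/last output samples exactly onto the source
// corners; the default maps pixel centers. Both work for up- and downscaling.
static float src_coord(int dst_x, int dst_w, int src_w, int align_corners) {
    if (align_corners && dst_w > 1) {
        return (float) dst_x * (src_w - 1) / (float) (dst_w - 1);
    }
    return ((float) dst_x + 0.5f) * src_w / (float) dst_w - 0.5f;
}
```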
This commit renames the variable `best_mad` to `best_error` in the
`make_qkx2_quants` function.
The motivation for this is that the name `best_mad` can be somewhat
confusing if mean absolute deviation (MAD) is not in use.
* Conv2D: Add CPU version
* Half decent
* Tiled approach for F32
* remove file
* Fix tests
* Support F16 operations
* add assert about size
* Review: further formatting fixes, add assert and use CPU version of fp32->fp16