* tests: add end-to-end tests per model architecture
* fixup for rebase
* fix use-after-free in llama-model-loader.cpp
* fix CI
* fix WebGPU
* fix CI
* disable CI for macOS-latest-cmake-arm64
* use expert_weights_scale only if != 0.0f
* comments
* hexagon: refactor set/get/sum-rows ops to use local context
* hexagon: refactor ROPE and Softmax Ops to use local context
Improves performance a bit by precomputing things and saving them in the context.
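As a rough illustration of that pattern (hypothetical struct and function names, not the actual ggml-hexagon definitions), a local op context bundles the precomputed values so the per-row loop only reads them:

```c
#include <stddef.h>

/* Hypothetical local op context: everything the row loop needs, computed once up front. */
struct op_ctx {
    size_t ne0;   /* elements per row        */
    float  scale; /* precomputed op constant */
};

static void scale_rows(const struct op_ctx *ctx, const float *src, float *dst, int n_rows) {
    for (int r = 0; r < n_rows; r++) {
        const float *s = src + (size_t) r * ctx->ne0;
        float       *d = dst + (size_t) r * ctx->ne0;
        for (size_t i = 0; i < ctx->ne0; i++) {
            d[i] = s[i] * ctx->scale; /* constants come from the context, not recomputed per row */
        }
    }
}
```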
* hexagon: refactor activation ops to use local context struct
* hexagon: refactor unary ops to use local context struct and DMA/VTCM
* hexagon: use aligned hvx_scale function
* hexagon: remove unused fields from op_context
* hexagon: rewrite ROPE to use DMA and VTCM scratchpad
* hex-rope: keep N rows in scratchpad (instead of just two)
* hex-rope: introduce rowidx cache
* hex-rope: remove unused fields
* hex-rope: rewrite dma prefetch logic to allow for multi-row fetch/compute
also removes the need for fastdiv.
* hex-rope: minor formatting
* hex-rope: use indices and unroll the loops
* hex-rope: more updates to cleanup rope-block handling
* hexagon: cleanup supported type/dims checks
* hexagon: all reduce funcs replicated across lanes
There is no need to explicitly replicate the first value.
* snapdragon: update adb and windows scripts to use ubatch-size 256
The updated Ops support handles larger ubatches.
This option was introduced as a workaround because cpp-httplib could not build on visionOS. Now that cpp-httplib has been fixed and compiles on all platforms, the option can be removed, which simplifies many things.
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
* hexagon: disable repack buffers if host buffers are disabled, improved handling of env vars
* hexagon: add support for OP_CPY fp16/fp32 -> fp16/fp32
Factored out all hvx_copy functions into the hvx-copy.h header and reduced code duplication.
Update HTP ops infra to support OP_CPY
* hexagon: cleanup and refactor hex/hvx/htp headers and helper libs
hex is basically all scalar/core platform code (L2, DMA, basic utils)
hvx is all HVX-related utils, helpers, etc.
htp is higher-level code such as Ops, etc.
The hvx-utils library got a nice round of cleanup and refactoring to reduce duplication.
hvx_vec_store_a is now used where possible.
* hexagon: refactor HVX sigmoid functions to hvx-sigmoid.h
Moved sigmoid and tanh vector functions from hvx-utils.h to a new header
hvx-sigmoid.h. Implemented aligned and unaligned variants for sigmoid
array processing using a macro pattern similar to hvx-copy.h. Updated
act-ops.c to use the new aligned variant hvx_sigmoid_f32_aa. Removed
unused hvx-sigmoid.c.
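A minimal sketch of that macro pattern, using scalar stand-ins instead of real HVX vectors (the helper names and `_sketch` functions below are illustrative, not the actual hvx-sigmoid.h contents):

```c
#include <math.h>
#include <stddef.h>

/* One loop body, stamped out with different load/store helpers per alignment variant. */
#define HVX_SIGMOID_LOOP(name, loadf, storef)                                        \
    static void name(float * restrict dst, const float * restrict src, size_t n) {   \
        for (size_t i = 0; i < n; i++) {                                             \
            float x = loadf(src + i);                                                \
            storef(dst + i, 1.0f / (1.0f + expf(-x)));                               \
        }                                                                             \
    }

static inline float load_a (const float *p)    { return *p; } /* "aligned" load stand-in    */
static inline void  store_a(float *p, float v) { *p = v;    } /* "aligned" store stand-in   */
static inline float load_u (const float *p)    { return *p; } /* "unaligned" load stand-in  */
static inline void  store_u(float *p, float v) { *p = v;    } /* "unaligned" store stand-in */

/* Two variants from the same body: dst/src both aligned (_aa) and both unaligned (_uu). */
HVX_SIGMOID_LOOP(hvx_sigmoid_f32_aa_sketch, load_a, store_a)
HVX_SIGMOID_LOOP(hvx_sigmoid_f32_uu_sketch, load_u, store_u)
```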
* hexagon: factor out hvx-sqrt.h
* hexagon: minor update to hvx-utils.h
* hexagon: remove spurious log
* hexagon: factor out and optimize hvx_add/sub/mul
* hexagon: remove _opt variants of add/sub/mul as they are simply the fully aligned versions
* hexagon: refactor reduction functions to hvx-reduce.h
Moved `hvx_self_max_f32` and `hvx_self_sum_f32` from `hvx-utils.h`/`.c` to `hvx-reduce.h`.
Renamed them to `hvx_reduce_max_f32` and `hvx_reduce_sum_f32`.
Added aligned (`_a`) and unaligned (`_u`) variants and used macros to unify logic.
Updated `softmax-ops.c` to use the new functions.
* hexagon: refactor the rest of arithmetic functions to hvx-arith.h
Moved `hvx_min_scalar_f32` and `hvx_clamp_scalar_f32` from `hvx-utils.c/h` to `hvx-arith.h`. Implemented aligned/unaligned variants (`_aa`, `_au`, etc.) and used macros to reduce code duplication. Updated these functions to use the `dst, src, ..., n` argument order and updated the call sites in `act-ops.c`. `hvx_sum_of_squares_f32` remains in `hvx-utils.c` as requested.
* hexagon: refactor hvx_sum_of_squares_f32
- Modify `hvx_sum_of_squares_f32` in `ggml/src/ggml-hexagon/htp/hvx-reduce.h` to use `dst, src` signature.
- Implement `_a` (aligned) and `_u` (unaligned) variants for `hvx_sum_of_squares_f32`.
- Update `hvx_reduce_loop_body` macro to support both returning and storing results via `finalize_op`.
- Update existing reduction functions in `hvx-reduce.h` to use the updated macro.
- Update `rms_norm_htp_f32` in `ggml/src/ggml-hexagon/htp/unary-ops.c` to match the new signature.
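A rough sketch of the pluggable-finalize idea (illustrative C only, not the actual hvx-reduce.h code): the same loop body can either return the reduction or store it through `dst`:

```c
#include <stddef.h>

#define REDUCE_LOOP_BODY(acc_init, acc_op, finalize_op) \
    do {                                                \
        float acc = (acc_init);                         \
        for (size_t i = 0; i < n; i++) {                \
            acc = acc_op(acc, src[i]);                  \
        }                                               \
        finalize_op(acc);                               \
    } while (0)

#define ACC_SUM(a, b)      ((a) + (b))
#define ACC_SSQ(a, b)      ((a) + (b) * (b))
#define FINALIZE_RETURN(a) return (a)
#define FINALIZE_STORE(a)  (*dst = (a))

/* "return the result" flavor */
static float hvx_reduce_sum_f32_sketch(const float *src, size_t n) {
    REDUCE_LOOP_BODY(0.0f, ACC_SUM, FINALIZE_RETURN);
}

/* "store into dst" flavor, matching the dst, src signature */
static void hvx_sum_of_squares_f32_sketch(float *dst, const float *src, size_t n) {
    REDUCE_LOOP_BODY(0.0f, ACC_SSQ, FINALIZE_STORE);
}
```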
* hexagon: use hvx_splat instead of memset
* hexagon: consistent use of f32/f16 in all function names to match the rest of GGML
* hexagon: fix hvx_copy_f16_f32 on v75 and older
* hexagon: update readme to include GGML_HEXAGON_EXPERIMENTAL
* scripts: update snapdragon/adb scripts to enable host param
This commit introduces a mechanism to embed all licenses directly
into the compiled binaries.
This eliminates the need to distribute separate LICENSE files alongside
the executable, making the binaries self-contained and simplifying
deployment.
* hexagon: improve fp16 matmul and add fp32/fp16 flash-attention
* hexagon: add support for set-rows fp32 -> fp16 with i32/i64 row-idx
* hexagon: add support for SCALE fp32
* hexagon: replace scalar fp32 -> fp16 copy with HVX
* hexagon: optimize flash_attn_ext with aligned VTCM buffers and DMA
- Implements double-buffered DMA prefetching for K, V, and Mask tensors.
- Ensures K and V rows in VTCM are padded to 128 bytes to support aligned HVX operations.
- Correctly synchronizes DMA transfers to prevent race conditions.
- Uses `FLASH_ATTN_BLOCK_SIZE` of 128 for efficient chunking.
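Schematically, the double buffering works as sketched below. `dma_start`/`dma_wait`, `process_k_blocks`, and the buffer layout are hypothetical stand-ins for the real Hexagon user-DMA helpers; in the actual kernel each K/V row is padded to 128 bytes before being staged into VTCM.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define FLASH_ATTN_BLOCK_SIZE 128                            /* rows per block, as in the commit */
#define ALIGN128(x) (((x) + (size_t) 127) & ~(size_t) 127)   /* how a row size is padded to 128 B */

typedef struct { void *dst; const void *src; size_t len; } dma_job;
static void dma_start(dma_job *j) { memcpy(j->dst, j->src, j->len); } /* stand-in for async user-DMA */
static void dma_wait (dma_job *j) { (void) j; }                       /* stand-in for completion wait */

static void process_k_blocks(const uint8_t *k_ddr, uint8_t *vtcm, size_t blk_bytes, int n_blocks) {
    dma_job job[2];
    job[0] = (dma_job) { vtcm, k_ddr, blk_bytes };
    dma_start(&job[0]);                                /* prefetch block 0 into buffer 0 */
    for (int b = 0; b < n_blocks; b++) {
        int cur = b & 1, nxt = cur ^ 1;
        if (b + 1 < n_blocks) {                        /* kick off the next transfer before computing */
            job[nxt] = (dma_job) { vtcm + (size_t) nxt * blk_bytes,
                                   k_ddr + (size_t) (b + 1) * blk_bytes, blk_bytes };
            dma_start(&job[nxt]);
        }
        dma_wait(&job[cur]);                           /* make sure the current block has landed in VTCM */
        /* ... aligned HVX compute on vtcm + cur * blk_bytes ... */
    }
}
```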
* hexagon: use aligned mad_f16
* hexagon: flash_attn: more aligned ops
* hexagon: optimize scale_f32 hvx helpers
* hexagon: unroll fa loops
* hexagon: remove unused set-rows log
* hexagon: flash_attn_ext add support for DMAing Q
- Update `op_flash_attn_ext` to include Q row size in scratchpad allocation.
- Pad Q row size to 128 bytes for alignment.
- Implement DMA transfer for Q tensor in `flash_attn_ext_f16_thread`.
- Update dot product computations to use VTCM-buffered Q data.
* hexagon: fix handling of NaNs in HVX dot products
* hexagon: cleanup spad allocation in flash-attn
* hexagon: improve fp16/fp32 matmul
- Introduced `vec_dot_f16_f16` and `vec_dot_f16_f16_rx2` kernels using efficient HVX dot product intrinsics.
- Added `quantize_fp32_f16` to copy/convert weights from DDR to VTCM
- Updated `op_matmul` to use the optimized path when VTCM capacity allows and broadcasting requirements are compatible.
- Implemented fallback logic to the original implementation for complex broadcasting scenarios.
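A hedged sketch of the dispatch decision (the names and the capacity check below are assumptions for illustration, not the actual `op_matmul` code):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Would the fp16 copy of the src0 weights fit in the available VTCM scratch? */
static bool matmul_fits_vtcm(size_t n_rows, size_t n_elems_per_row, size_t vtcm_free_bytes) {
    return n_rows * n_elems_per_row * sizeof(uint16_t) <= vtcm_free_bytes;
}

static void matmul_dispatch_sketch(size_t n_rows, size_t n_elems_per_row,
                                   size_t vtcm_free_bytes, bool simple_broadcast) {
    if (simple_broadcast && matmul_fits_vtcm(n_rows, n_elems_per_row, vtcm_free_bytes)) {
        /* fast path: convert weights fp32 -> fp16 into VTCM, then run the vec_dot_f16_f16(_rx2) kernels */
    } else {
        /* fallback: the original matmul implementation handles the complex broadcasting cases */
    }
}
```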
* hexagon: fix HVX_ARCH check
* hexagon: matmul cleanup and fp16 fixes
Use aligned vec_dot_f16 for 2d matmuls and unaligned version for 4d.
* hexagon: fix fp16 x fp16 matmuls and some minor refactoring
* hexagon: add support for GET_ROWS f32 -> f32
Also optimize SET_ROWS threading a bit when we have just a few rows to process.
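The threading tweak amounts to capping the worker count by the row count, roughly as follows (hypothetical helper name):

```c
/* Never spin up more workers than there are rows to write. */
static inline int set_rows_n_workers(int n_threads, int n_rows) {
    return n_rows < n_threads ? n_rows : n_threads;
}
```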
* hexagon: optimize set-rows threading
* hexagon: update adb/run-bench.sh to properly support experimental and verbose options
* hexagon: flash_attn: use aligned vectors for dot products
* refactor: replace ggml_hexagon_mul_mat with template-based binary operation for improved flexibility
* refactor: replace ggml_hexagon_mul_mat_id with template-based binary operation for improved flexibility
* refactor: initialize buffer types and streamline dspqueue_buffers_init calls for clarity
* add comment
* refactor: remove redundant buffer checks in hexagon supported operations
* wip
* add missing include to fix weak symbol warning
* add ggml_hexagon_op_generic
* refactor: simplify tensor operation initialization and buffer management in hexagon implementation
* refactor: streamline hexagon operation initialization and buffer management
* refactor: update function signatures and streamline request handling in hexagon operations
* wip
* ggml-hexagon: clean up code formatting and improve unary operation handling
* wip
* rename
* fix: add support for permuted F16 tensors and enhance quantization checks in matrix operations
* hexagon: fix merge conflicts
* hexagon: minor cleanup for buffer support checks
* hexagon: factor out op_desc and the overall op logging
* hexagon: further simplify and cleanup op dispatch logic
* snapdragon: update adb scripts to use llama-cli and llama-completion
* fix pipeline failure
---------
Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>
* feat: add run_mtmd script for hexagon
* fix: fix issue in fp16xfp32 mm
* fix: remove opt_experiment for fp16xfp32 mm
* fix: ggml-hexagon: matmul fp16xfp32 support non-contiguous src0
* fix: fix syntax check for run-mtmd.sh for cli
* Faster tensors (#8)
Add fast matrix and matrix/vector multiplication.
* Use map for shader replacements instead of pair of strings
* Wasm (#9)
* webgpu : fix build on emscripten
* more debugging stuff
* test-backend-ops: force single thread on wasm
* fix single-thread case for init_tensor_uniform
* use jspi
* add pthread
* test: remember to set n_thread for cpu backend
* Add buffer label and enable dawn-specific toggles to turn off some checks
* Intermediate state
* Fast working f16/f32 vec4
* Working float fast mul mat
* Clean up naming of mul_mat to match logical model, start work on q mul_mat
* Setup for subgroup matrix mat mul
* Basic working subgroup matrix
* Working subgroup matrix tiling
* Handle weirder sg matrix sizes (but still a multiple of the sg matrix size)
* Working start to gemv
* working f16 accumulation with shared memory staging
* Print out available subgroup matrix configurations
* Vectorize dst stores for sg matrix shader
* Gemv working scalar
* Minor set_rows optimization (#4)
* updated optimization, fixed errors
* non-vectorized version now dispatches one thread per element
* Simplify
* Change logic for set_rows pipelines
---------
Co-authored-by: Neha Abbas <nehaabbas@macbookpro.lan>
Co-authored-by: Neha Abbas <nehaabbas@ReeseLevines-MacBook-Pro.local>
Co-authored-by: Reese Levine <reeselevine1@gmail.com>
* Comment on dawn toggles
* Working subgroup matrix code for (semi)generic sizes
* Remove some comments
* Cleanup code
* Update dawn version and move to portable subgroup size
* Try to fix new dawn release
* Update subgroup size comment
* Only check for subgroup matrix configs if they are supported
* Add toggles for subgroup matrix/f16 support on nvidia+vulkan
* Make row/col naming consistent
* Refactor shared memory loading
* Move sg matrix stores to correct file
* Working q4_0
* Formatting
* Work with emscripten builds
* Fix test-backend-ops emscripten for f16/quantized types
* Use emscripten memory64 to support get_memory
* Add build flags and try ci
---------
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
* Remove extra whitespace
* Move wasm single-thread logic out of test-backend-ops for cpu backend
* Disable multiple threads for emscripten single-thread builds in ggml_graph_plan
* Fix .gitignore
* Add memory64 option and remove unneeded macros for setting threads to 1
---------
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>