This commit adds support for MFMA instructions to MMQ. CDNA1/GFX908, CDNA2/GFX90a, and CDNA3/GFX942 are supported by the MFMA-enabled code path added by this commit. The code path and stream-k are only enabled on CDNA3 for now, since on the other devices they do not consistently outperform BLAS.
BLAS is currently only consistently outperformed on CDNA3, due to issues in the AMD-provided BLAS libraries.
This commit also makes MMQ more aware of different warp sizes; as a side effect, on GCN GPUs it improves the performance of all quant formats except q4_0 and q4_1, which regress slightly.
* feat: Add s_off as a parameter in the args struct
This may not be necessary, but it more closely mirrors the CUDA kernel
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* perf: Parallelize mamba2 SSM_SCAN metal kernel over d_state
This is a first attempt at optimizing the metal kernel. The changes here
are:
- Launch the kernel with a thread group of size d_state
- Use simd groups and shared memory to do the summation for the y
computation
When tested with the G4 tiny preview, this shows roughly a 3x speedup on
prefill and a 15% speedup on decode.
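For illustration, a CUDA-style sketch of the reduction pattern described above (this is not the actual Metal kernel; the kernel and variable names and the one-block-per-row layout are assumptions). Each thread holds one state element's contribution, a warp-level sum plays the role of simd_sum, and per-warp partials are combined through shared memory with a second warp-level sum:

```cuda
#include <cuda_runtime.h>

// Warp-level sum, the CUDA analogue of Metal's simd_sum.
__device__ float warp_sum(float v) {
    for (int offset = warpSize/2; offset > 0; offset >>= 1) {
        v += __shfl_down_sync(0xffffffff, v, offset);
    }
    return v;
}

// One block per output row, blockDim.x == d_state (mirrors "thread group of size d_state").
// Launch with one float of dynamic shared memory per warp.
__global__ void ssm_scan_y_reduce(const float * state, const float * C, float * y, int d_state) {
    extern __shared__ float warp_partials[]; // one slot per warp

    const int tid     = threadIdx.x;
    const int lane    = tid % warpSize;
    const int warp_id = tid / warpSize;

    // Each thread contributes one product state[i]*C[i] to the y summation.
    float partial = state[blockIdx.x*d_state + tid] * C[blockIdx.x*d_state + tid];

    // Stage 1: sum within each warp/simdgroup.
    partial = warp_sum(partial);
    if (lane == 0) {
        warp_partials[warp_id] = partial;
    }
    __syncthreads();

    // Stage 2: the first warp sums the per-warp partials
    // (a secondary simd_sum instead of a for loop).
    if (warp_id == 0) {
        const int n_warps = (blockDim.x + warpSize - 1)/warpSize;
        float v = lane < n_warps ? warp_partials[lane] : 0.0f;
        v = warp_sum(v);
        if (lane == 0) {
            y[blockIdx.x] = v;
        }
    }
}
```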
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix: Update logic to correctly do the multi-layer parallel sum
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix: Correctly size the shared memory buffer and assert expected size relationships
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* refactor: Compute block offsets once rather than once per token
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Use local variable for state recursion
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Use a secondary simd_sum instead of a for loop
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Add assertion and comment about relationship between simd size and num simd groups
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Parallelize over d_state for mamba-1
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Parallel sum in SSM_CONV
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* Revert "feat: Parallel sum in SSM_CONV"
After discussion with @compilade, the amount of parallelism available here is
not worth the added complexity or the overhead of the parallel for.
https://github.com/ggml-org/llama.cpp/pull/14743#discussion_r2223395357
This reverts commit 16bc059660.
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* refactor: Simplify shared memory sizing
Branch: GraniteFourPerf
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-Authored-By: Georgi Gerganov <ggerganov@gmail.com>
---------
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Neither "g" nor "x" are valid portPos specifiers per the official
[graphviz documents](https://graphviz.org/docs/attr-types/portPos/):
> If a compass point is used, it must have the form "n","ne","e","se","s","sw","w","nw","c","_".
I tested locally and confirmed that it falls back to the default portPos
specifier when an invalid one is given. As a consequence, the associated code
can be removed.
* musa: apply mublas API changes
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
* musa: update musa version to 4.2.0
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
* musa: restore MUSA graph settings in CMakeLists.txt
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
* musa: disable mudnnMemcpyAsync by default
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
* musa: switch back to non-mudnn images
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
* minor changes
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
* musa: restore rc in docker image tag
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
---------
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
* CMake config: Create target only once
Fix error on repeated find_package(ggml).
For simplicity, check only for the top-level ggml::ggml.
* CMake config: Add CUDA link libs
* CMake config: Add OpenCL link libs
* CMake config: Use canonical find_dependency
Use set and append to control link lib variables.
Apply more $<LINK_ONLY...>.
* CMake config: Wire OpenMP dependency
This commit removes the inclusion of `<cstdlib>`.
The motivation for this change is that this source file does not appear to use
any functions from that header, and the comment about `qsort` is a little
misleading/confusing.
* weight format to nz for 310p
* remove quant weight format to nz
* clean code
* fix
* make the conditions for converting weights to NZ format consistent
* clean code
* rename
* Refactor vector operations in vec_op_impl and vec_dot_product_impl for improved clarity and performance
* wip
* Enhance vector copy functions for improved performance and clarity in vec_ops.hpp
* wip
* wip
* wip
* Optimize vector dot product implementations for enhanced performance and efficiency
* Enhance flash attention implementation and type traits for improved vector operations and alignment checks
# Conflicts:
# ggml/src/ggml-qnn/npu/device/type_traits.cpp
* remove align
* wip
* Enhance vector dot product implementation for improved performance by adding parallel processing for multiple vector pairs
* Revert "Enhance vector dot product implementation for improved performance by adding parallel processing for multiple vector pairs"
This reverts commit 78cc24ed2285002ca29d6189fa61ba4ce24f8d16.
* Enhance flash attention implementation with type checks for tensor data types and improved constexpr usage
* wip
* opt mask calc
* Revert "opt mask calc"
This reverts commit bb1840876692a11511d5ab7828b8a707402e30b9.
* wip
* opt mul mat caching logic to add dst cache
* Revert "opt mul mat caching logic to add dst cache"
This reverts commit ab442fa9f763b3873c929936e4cb739cb1c83850.
* wip
* Refactor matrix multiplication implementation to include vector conversion and performance tracking
* wip
* wip
* wip
* create vec_ops.inl for more aggressive compiler inlining
* wip
* refactor vector dot product implementations for improved readability and performance
* refactor vector conversion functions to use HVX_Vector_Dual for improved clarity and consistency
* wip
* wip
* wip
* implement row size caching logic and enhance type traits for F32 support
* refactor matrix multiplication functions to improve caching logic and simplify tensor alignment handling
* add vector zeroing functions for F32 and F16 types to optimize memory initialization
* Revert "add vector zeroing functions for F32 and F16 types to optimize memory initialization"
This reverts commit e374326dc74d049e6603e393ade418d9ef2b83f3.
* wip
* refactor alignment checks in dot product function to handle null pointers
* wip
* refactor load_block_generic and related functions for improved alignment handling
* wip
* refactor flash attention implementation and introduce type-erased dot function for improved type handling
* refactor dot product implementations for improved loop handling and clarity
* refactor thread_pool constructor to pre-allocate VTCM cache for each thread
* Revert "refactor thread_pool constructor to pre-allocate VTCM cache for each thread"
This reverts commit 00cdd3fa88d909feef44ddaa42095274b7627685.
* wip
* opt interfaces for tensor cleanup
* refactor mul_mat_impl to use aligned size for src0 row calculation
* refactor: update dequantized_row_size logic and add size alignment checks for tensors
* wip
* wip
* refactor: replace raw pointer initialization with invalid handle constants for better clarity
* wip
* ggml/ggml-vulkan/test-backend-ops: adds CONV_2D for Vulkan
* ggml-vulkan: adds f32 scalar shader to compute 2D convolution directly
with GEMM (no need for im2col)
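To illustrate the "GEMM without im2col" idea from the commit above, here is a hedged scalar reference (plain host code, not the Vulkan shader; the layout, names, single batch, stride 1, and no dilation are all simplifying assumptions). The GEMM's B matrix is never materialized: its elements are recomputed from the input on the fly.

```cuda
// Virtual GEMM view of CONV_2D:
//   dst[Cout x (OH*OW)] = kernel[Cout x (Cin*KH*KW)] * im2col[(Cin*KH*KW) x (OH*OW)]
// where the im2col element is computed on demand instead of being stored.
void conv2d_implicit_gemm(const float * src,    // [Cin, IH, IW]
                          const float * kernel, // [Cout, Cin, KH, KW]
                          float * dst,          // [Cout, OH, OW]
                          int Cin, int IH, int IW,
                          int Cout, int KH, int KW,
                          int pad) {
    const int OH = IH + 2*pad - KH + 1;
    const int OW = IW + 2*pad - KW + 1;
    const int K  = Cin * KH * KW; // GEMM inner dimension

    for (int oc = 0; oc < Cout; ++oc) {            // GEMM row
        for (int col = 0; col < OH * OW; ++col) {  // GEMM column = output pixel
            const int oy = col / OW;
            const int ox = col % OW;
            float acc = 0.0f;
            for (int k = 0; k < K; ++k) {          // walk the virtual im2col column
                const int ic = k / (KH * KW);
                const int ky = (k / KW) % KH;
                const int kx = k % KW;
                const int iy = oy + ky - pad;
                const int ix = ox + kx - pad;
                if (iy >= 0 && iy < IH && ix >= 0 && ix < IW) {
                    acc += kernel[((oc*Cin + ic)*KH + ky)*KW + kx] * src[(ic*IH + iy)*IW + ix];
                }
            }
            dst[(oc*OH + oy)*OW + ox] = acc;
        }
    }
}
```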
* test-backend-ops: adds test_case_ref to check the validity/performance of ops
against reference implementations having different graphs, adds tests
* Performance fixes: minimized branch divergence, used collectives to
eliminate redundant calculation, removed macros.
* Kernel shared memory size check
* Updates test-backend-ops to support graphs for performance
measurement.
* Apple/Win32 compile errors fixed
* Subgroup size used to determine tile size -> fixes llvmpipe errors.
* Collectives disabled by default.
* Intel support is disabled as the performance is poor.
* Conv2d enabled for Intel with collectives disabled; disabled for Apple
* test-backend-ops modifications are reverted
* Trailing spaces and missing override fixed.
* Triggering pipeline relaunch.
* Code formatted with .clang-format.
* Fix Gemma3n not executed as CUDA_GRAPH on NVGPUs
Gemma3n uses matrix-matrix addition as part of its input processing, wrongly
triggering CUDA_GRAPH disablement on NVGPUs even when a batch size of 1 is
used.
* Exclude `project_per_layer_input` by matching node names
This ensures that all other graphs which don't exhibit this pattern do
not have their behavior changed.
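A hedged sketch of the node-name exemption described above (not the exact llama.cpp code; the helper name and the precise disable condition are assumptions):

```cuda
#include <cstring>
#include "ggml.h"

// Sketch: when scanning the graph to decide whether CUDA graph capture is safe,
// a batched GGML_OP_ADD would normally disable graphs, but the Gemma3n
// "project_per_layer_input" addition is exempted by matching the node name.
static bool node_disables_cuda_graph(const ggml_tensor * node) {
    const bool is_batched_add = node->op == GGML_OP_ADD &&
                                node->src[1] != nullptr && node->src[1]->ne[1] > 1;
    if (!is_batched_add) {
        return false;
    }
    // Exempt the Gemma3n per-layer input projection by name so that all other
    // graphs keep their existing behavior.
    return strcmp(node->name, "project_per_layer_input") != 0;
}
```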
* Revert unnecessary formatting changes
* Minimal setup of webgpu backend with dawn. Just prints out the adapter and segfaults
* Initialize webgpu device
* Making progress on setting up the backend
* Finish more boilerplate/utility functions
* Organize file and work on alloc buffer
* Add webgpu_context to prepare for actually running some shaders
* Work on memset and add shader loading
* Work on memset polyfill
* Implement set_tensor as webgpu WriteBuffer, remove host_buffer stubs since webgpu doesn't support it
* Implement get_tensor and buffer_clear
* Finish rest of setup
* Start work on compute graph
* Basic mat mul working
* Work on emscripten build
* Basic WebGPU backend instructions
* Use EMSCRIPTEN flag
* Work on passing ci, implement 4d tensor multiplication
* Pass thread safety test
* Implement permuting for mul_mat and cpy
* minor cleanups
* Address feedback
* Remove division by type size in cpy op
* Fix formatting and add github action workflows for vulkan and metal (m-series) webgpu backends
* Fix name
* Fix macos dawn prefix path
Remove unnecessary templates from class definition and packing functions
Reduce deeply nested conditionals and if-else switching in the mnpack function
Replace repetitive code with inline functions in packing functions
2 ~ 7% improvement in Q8 models
15 ~ 50% improvement in Q4 models
Signed-off-by: Shalini Salomi Bodapati <Shalini.Salomi.Bodapati@ibm.com>
* CUDA: add set rows for f32 and f16
* Review: change kernel params, use strides from host
* Use 1-d kernel
* Review: use int64_t for blockDim.x, rename nb->s for clarity
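For context, a minimal 1-D CUDA sketch of the SET_ROWS idea (f32 only, contiguous rows, element strides rather than the byte strides the real kernel takes from the host; all names are assumptions):

```cuda
#include <cuda_runtime.h>
#include <cstdint>

// Element i of the 1-D launch copies one value of src into dst, at the
// destination row selected by row_ids for the corresponding source row.
__global__ void set_rows_f32(const float * src, const int64_t * row_ids, float * dst,
                             int64_t n_rows, int64_t row_size) {
    const int64_t i = blockIdx.x * (int64_t) blockDim.x + threadIdx.x;
    if (i >= n_rows * row_size) {
        return;
    }
    const int64_t src_row = i / row_size;  // which source row this element belongs to
    const int64_t col     = i % row_size;  // position within the row
    const int64_t dst_row = row_ids[src_row];

    dst[dst_row * row_size + col] = src[src_row * row_size + col];
}
```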
* vulkan: support SET_ROWS
Add variants of the copy_to_quant shader that do the SET_ROWS operation.
Change these shaders to spread the work across the workgroup.
The memory access pattern is probably not great (one thread per quant block),
but should be fine for now.
* vulkan: optimize set_rows
Larger workgroups for non-quant types.
Set "norepeat" (there is manual repeat logic).
Use fastmod.
* vulkan: allow unclamped loads in coopmat2 mul_mat_id shader
* vulkan: increase coopmat2 mul_mat_id tile size
* vulkan: optimize mat_mul_id row_ids search to batch loads, and port to coopmat1 path
* vulkan: use smaller FA row size when head size is large. applies to both scalar and CM2 paths (CM1 isn't used due to shared memory limits)
* feat: add mixed precision dot product implementation and function declaration
* feat: implement mixed precision vector dot product and conversion functions
* fix: update data type handling in matrix multiplication implementation
* fix: adjust row count handling in matrix multiplication implementation for accurate slicing
* fix: optimize matrix multiplication implementation by unroll loop
* update performance tracking for matrix multiplication implementation
* add fetching
* wip
* fix: support F16 * F32 multiplication in is_mul_mat_supported function
* fix: improve src0 fetching logic in vec_dot_product_mixed_impl for better alignment handling
* fix test failure for row width 67
* try fix failed test
* fix: rename aligned_address to align_down for clarity in vector alignment handling
* wip
* qnn fix: update device capabilities for quantized types in qnn-lib to improve compatibility
* fix test failure at width == 193
* fix: replace zero vector initialization with previous vector in mixed dot product implementation
* wip
* fix: improve handling of last vector in mixed dot product implementation
* wip
* wip
* wip
* wip
* Enhance mul_mat_f32 function to support quantized types and improve static assertions
* rename
* Refactor dequantization functions to use npu_device_fp16_t and improve type handling
* Optimize dequantization in dequantize_row_q8_0 by replacing qf32 multiplication with qf16
* Optimize dequantization in dequantize_row_q4_0 by replacing qf32 multiplication with qf16
* Add hvx_vsf_convert_vhf function for improved vector conversion
* add perf logs
* Refactor dequantize_row_q4_0 for alignment
* Update logging in supports_op_impl and supports_op to use ggml_op_desc for better clarity
* Add support for ROPE operation in NPU capabilities and related functions
* Implement ROPE operation in tensor and op_rope, including cache initialization and correction dimension calculations
* enable ROPE by adding operation validation
* add support to freq is null case
* wip
* Refactor rope_f32 to improve indexing by introducing total_planes calculation
* reformat
* Refactor rope_f32 to optimize data access patterns by introducing row and plane pointers
* Add performance tracking to rope_f32 function for enhanced profiling
* Refactor rope_f32 to use a templated implementation
* Refactor rope_impl to replace loop with memcpy for improved performance
* Refactor mul_mat_impl to support quantization as a template parameter
* wip
* wip
* Refactor rope_impl to optimize plane indexing in the processing loop
* Add aligned vector dot product implementation for mixed precision types
* wip
* Enhance matrix multiplication for F32 and F16 types with alignment checks
* Optimize vec_dot_product_mix_aligned_impl for improved performance with additional vector sums
* Add alignment checks for matrix multiplication and vector dot products
* Refactor matrix multiplication to use function pointers for improved readability and maintainability
* Fix alignment check in is_dot_product_aligned to ensure correct vector size handling
* Remove unused f16_to_f32_table parameter from quantization and dequantization functions
* wip
* Add L2 fetch for src1 plane rows in matrix multiplication implementation
* wip
* Refactor hvx_vsf_convert_vhf to accept an additional parameter for flexibility in vector multiplication
* Refactor vec_dot_product_mix_aligned_impl to improve variable naming for clarity
* Refactor load_dual_block_generic and dequantize_row_q4_0 to improve performance
* Refactor vector operation functions to improve clarity and consistency in variable usage
* wip
* wip
* Refactor dequantize_row_q4_0_impl for improved clarity and performance in vector operations
* wip
* Update load_dual_block_generic to use intrinsics
* Refactor load_dual_block_generic and load_qual_block_generic for improved performance and clarity
* wip
* wip
* Optimize dequantize_row_q8_0 for improved performance by unrolling for loop
* wip
* wip
* fix typo
* ggml : add ggml_scale_bias
* ggml_vec_mad1_f32
* add more simd
* add CUDA
* sycl
* vulkan
* cann (placeholder)
* opencl
* will this fix cpu?
* fix cuda
* suggestions from coderabbit
* fix cann compile error
* vDSP_vsmsa
* rm __ARM_FEATURE_SVE
* use memcpy for op params
* make code look more consistent
* use scalar for __ARM_FEATURE_SVE
* add x param to ggml_vec_mad1_f32
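For reference, the scalar core that ggml_scale_bias / ggml_vec_mad1_f32 boil down to is y[i] = x[i]*s + b, which is also the operation vDSP_vsmsa provides on Apple platforms. The exact ggml signature may differ slightly, so this is just a sketch:

```cuda
// Hedged sketch of the scalar fallback behind ggml_scale_bias / ggml_vec_mad1_f32:
// y[i] = x[i]*s + b. The SIMD variants added in the commits above vectorize this loop.
static void vec_mad1_f32(const int n, float * y, const float * x, const float s, const float b) {
    for (int i = 0; i < n; ++i) {
        y[i] = x[i]*s + b;
    }
}
```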
* vulkan: allow FA split_k with smaller KV values
* vulkan: spread split_k_reduce work across more threads
k_num can get rather large. Use the whole workgroup to reduce the M/L values.
Launch a thread for each element in the HSV dimension of the output. Helps a
lot for large HSV (like deepseek).
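A hedged scalar sketch of what the split_k reduce has to do per output row (names and the "partial outputs are unnormalized" convention are assumptions; the shader spreads the per-element loop across the workgroup, one thread per HSV element):

```cuda
#include <cmath>
#include <vector>

// Each of the k_num splits produced a partial (unnormalized) output O_k plus its
// row max M_k and softmax sum L_k; the reduce rescales everything to a common max
// and normalizes once at the end.
void fa_split_k_reduce_row(const std::vector<std::vector<float>> & O, // k_num x hsv partial outputs
                           const std::vector<float> & M,              // per-split row max
                           const std::vector<float> & L,              // per-split softmax sum
                           std::vector<float> & out) {                // final output, size hsv
    const size_t k_num = O.size();

    float m = -INFINITY;
    for (size_t k = 0; k < k_num; ++k) {
        m = std::fmax(m, M[k]);
    }
    float l = 0.0f;
    for (size_t k = 0; k < k_num; ++k) {
        l += std::exp(M[k] - m)*L[k];
    }
    for (size_t d = 0; d < out.size(); ++d) { // in the shader: one thread per element d
        float acc = 0.0f;
        for (size_t k = 0; k < k_num; ++k) {
            acc += std::exp(M[k] - m)*O[k][d];
        }
        out[d] = acc/l;
    }
}
```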
The fused operation was grabbing the epsilon value from the wrong place.
Add an env var to disable fusion.
Add some missing checks for supported shapes/types.
Handle fused rms_norm+mul in check_results.
* vulkan: Handle updated FA dim2/3 definition
Pack mask boolean and n_head_log2 into a single dword to keep the push
constant block under the 128B limit.
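A hedged sketch of the packing described above (the bit layout is an assumption; the shader unpacks with the matching shift and mask):

```cuda
#include <cstdint>

// Pack the mask flag and n_head_log2 into one dword on the host side so the
// push constant block stays under the 128-byte limit.
static uint32_t pack_mask_n_head_log2(bool has_mask, uint32_t n_head_log2) {
    return (n_head_log2 << 1) | (has_mask ? 1u : 0u);
}

static void unpack_mask_n_head_log2(uint32_t packed, bool & has_mask, uint32_t & n_head_log2) {
    has_mask    = (packed & 1u) != 0;
    n_head_log2 = packed >> 1;
}
```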
* handle null mask for gqa
* allow gqa with dim3>1
* kv-cache : use ggml_set_rows
ggml-ci
* graph : separate k and v indices
ggml-ci
* cont : remove redundant ifs
ggml-ci
* kv-cache : improve find_slot impl
* kv-cache : bounds-check when accessing slot_info indices
* kv-cache : add comments
ggml-ci
* ggml : add TODOs for adding GGML_OP_SET_ROWS support in the backends
ggml-ci
* llama : initial Mamba-2 support
* ggml : SIMD ggml_ssm_scan for Mamba-2
* ggml : improve ggml_mul speed when masking recurrent states
* llama : support running Mamba-Codestral-7B-v0.1
* llama : fix Mamba-2 conv state saving
* ggml : make the ggml_mul fast broadcast path more consistently formatted
* llama : remove unused variable
* llama : add missing break
* convert_hf : prefer SentencePiece tokenizer for Mamba-2 when present
The tokenizer.json of Mamba-Codestral-7B-v0.1 otherwise requires
workarounds to work correctly.
* llama : avoid redundant state copy for Mamba 1 and 2
* metal : attempt to adapt SSM_SCAN for Mamba-2
* metal : fix SSM_SCAN pipeline scope
* metal : use log and exp instead of log1pf and expf in SSM_SCAN
* metal : remove unused arguments for SSM_SCAN
The max index is 31, so trimming the arguments is necessary.
* metal : add back n_seqs to SSM_SCAN args
Whoops, this is needed for the offset in the concatenated output.
* metal : fix SSM_SCAN state head offset
* metal : fix wrong number of tokens per sequence in SSM_SCAN
* ggml : remove unused fast broadcast path in GGML_MUL
This was initially added because states were masked with ggml_mul,
but this is no longer done and so this "optimisation" is no longer
necessary, or at least not worth the additional code complexity.
* ggml : avoid multiply by D in GGML_OP_SSM_SCAN
This makes the weight buft detection in src/llama.cpp simpler.
* convert : transpose Mamba-2 A, D and reshape SSM_NORM
This breaks existing conversions of Mamba-2 models
to avoid some reshapes.
Not sure if it's a good idea,
but it makes the graph slightly cleaner.
* llama : more appropriate SSM_SCAN and SSM_CONV buft support checks
* convert : fix flake8 lint
* metal : fix confusion between ; and ,
* metal : add missing args for nb references in ssm_scan_f32_group
* metal : single-user mamba2 inference works
* kv-cache : remove const_cast when setting inputs for s_copy
And also fix multi-user inference for recurrent models
by using cell_id instead of i as the kv cell index
when populating s_copy.
* convert : avoid AutoConfig for Mamba and Mamba2 hparams
* kv-cache : allow context shift for recurrent models
* graph : fix recurrent state copies when avoiding copies
Works, but using lambda functions might not be that clean.
* ggml : fix mamba2 ssm scan when compiled with SVE
* ggml-cpu : reorder SVE FMA for consistency with other SIMD arches
* cuda : implement ssm scan for Mamba2
There is still room for improvement, but it works!
* cuda : adapt Mamba1 ssm scan to shape changes from Mamba2
* mamba : fix mismatched new and delete size for llm_build_mamba
Subclasses of llm_graph_context cannot have extra fields,
because the called destructor is not the one from the subclass.
This would otherwise cause problems when running Mamba-(1|2) inference
compiled with -DGGML_SANITIZE_ADDRESS=ON.
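A hedged illustration of the underlying C++ issue (not the actual llama.cpp types): if a subclass adds fields but the object is destroyed through the base type whose destructor is not virtual, the delete runs with the base size even though the derived size was allocated, which AddressSanitizer reports as a new/delete size mismatch.

```cuda
#include <cstdio>

struct base_ctx {
    int n = 0;
    ~base_ctx() = default; // not virtual: deleting a derived object via base_ctx* is UB
};

struct mamba_ctx : base_ctx {
    float extra[16] = {}; // extra fields enlarge the allocation
};

int main() {
    // base_ctx * p = new mamba_ctx(); // allocates sizeof(mamba_ctx) bytes
    // delete p;                       // would free using sizeof(base_ctx), not sizeof(mamba_ctx)
    std::printf("sizeof(base_ctx)=%zu, sizeof(mamba_ctx)=%zu\n",
                sizeof(base_ctx), sizeof(mamba_ctx));
    return 0;
}
```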
* cuda : graceful fallback for Mamba-1 models with weird embd size