* rename
* Refactor vector operations in vec_op_impl and vec_dot_product_impl for improved clarity and performance
* wip
* Enhance vector copy functions for improved performance and clarity in vec_ops.hpp
* wip
* wip
* wip
* Optimize vector dot product implementations for enhanced performance and efficiency
* Enhance flash attention implementation and type traits for improved vector operations and alignment checks
# Conflicts:
# ggml/src/ggml-qnn/npu/device/type_traits.cpp
* remove align
* wip
* Enhance vector dot product implementation for improved performance by adding parallel processing for multiple vector pairs
* Revert "Enhance vector dot product implementation for improved performance by adding parallel processing for multiple vector pairs"
This reverts commit 78cc24ed2285002ca29d6189fa61ba4ce24f8d16.
* Enhance flash attention implementation with type checks for tensor data types and improved constexpr usage
* wip
* opt mask calc
* Revert "opt mask calc"
This reverts commit bb1840876692a11511d5ab7828b8a707402e30b9.
* wip
* opt mul mat caching logic to add dst cache
* Revert "opt mul mat caching logic to add dst cache"
This reverts commit ab442fa9f763b3873c929936e4cb739cb1c83850.
* wip
* Refactor matrix multiplication implementation to include vector conversion and performance tracking
* wip
* wip
* wip
* create vec_ops.inl for more aggressive compiler inlining
* wip
* refactor vector dot product implementations for improved readability and performance
* refactor vector conversion functions to use HVX_Vector_Dual for improved clarity and consistency
* wip
* wip
* wip
* implement row size caching logic and enhance type traits for F32 support
* refactor matrix multiplication functions to improve caching logic and simplify tensor alignment handling
* add vector zeroing functions for F32 and F16 types to optimize memory initialization
* Revert "add vector zeroing functions for F32 and F16 types to optimize memory initialization"
This reverts commit e374326dc74d049e6603e393ade418d9ef2b83f3.
* wip
* refactor alignment checks in dot product function to handle null pointers
* wip
* refactor load_block_generic and related functions for improved alignment handling
* wip
* refactor flash attention implementation and introduce type-erased dot function for improved type handling
* refactor dot product implementations for improved loop handling and clarity
* refactor thread_pool constructor to pre-allocate VTCM cache for each thread
* Revert "refactor thread_pool constructor to pre-allocate VTCM cache for each thread"
This reverts commit 00cdd3fa88d909feef44ddaa42095274b7627685.
* wip
* opt interfaces for tensor cleanup
* refactor mul_mat_impl to use aligned size for src0 row calculation
* refactor: update dequantized_row_size logic and add size alignment checks for tensors
* wip
* wip
* refactor: replace raw pointer initialization with invalid handle constants for better clarity
* wip
* feat: add mixed precision dot product implementation and function declaration
* feat: implement mixed precision vector dot product and conversion functions
* fix: update data type handling in matrix multiplication implementation
* fix: adjust row count handling in matrix multiplication implementation for accurate slicing
* fix: optimize matrix multiplication implementation by unroll loop
* update performance tracking for matrix multiplication implementation
* add fetching
* wip
* fix: support F16 * F32 multiplication in is_mul_mat_supported function
* fix: improve src0 fetching logic in vec_dot_product_mixed_impl for better alignment handling
* fix test failure for row width 67
* try fix failed test
* fix: rename aligned_address to align_down for clarity in vector alignment handling
* wip
* qnn fix: update device capabilities for quantized types in qnn-lib to improve compatibility
* fix test failure at width == 193
* fix: replace zero vector initialization with previous vector in mixed dot product implementation
* wip
* fix: improve handling of last vector in mixed dot product implementation
* wip
* wip
* wip
* wip
* Enhance mul_mat_f32 function to support quantized types and improve static assertions
* rename
* Refactor dequantization functions to use npu_device_fp16_t and improve type handling
* Optimize dequantization in dequantize_row_q8_0 by replacing qf32 multiplication with qf16
* Optimize dequantization in dequantize_row_q4_0 by replacing qf32 multiplication with qf16
* Add hvx_vsf_convert_vhf function for improved vector conversion
* add perf logs
* Refactor dequantize_row_q4_0 for alignment
* Update logging in supports_op_impl and supports_op to use ggml_op_desc for better clarity
* Add support for ROPE operation in NPU capabilities and related functions
* Implement ROPE operation in tensor and op_rope, including cache initialization and correction dimension calculations
* enable ROPE by adding operation validation
* add support for the case where freq is null
* wip
* Refactor rope_f32 to improve indexing by introducing total_planes calculation
* reformat
* Refactor rope_f32 to optimize data access patterns by introducing row and plane pointers
* Add performance tracking to rope_f32 function for enhanced profiling
* Refactor rope_f32 to use a templated implementation
* Refactor rope_impl to replace loop with memcpy for improved performance
* Refactor mul_mat_impl to support quantization as a template parameter
* wip
* wip
* Refactor rope_impl to optimize plane indexing in the processing loop
* Add aligned vector dot product implementation for mixed precision types
* wip
* Enhance matrix multiplication for F32 and F16 types with alignment checks
* Optimize vec_dot_product_mix_aligned_impl for improved performance with additional vector sums
* Add alignment checks for matrix multiplication and vector dot products
* Refactor matrix multiplication to use function pointers for improved readability and maintainability
* Fix alignment check in is_dot_product_aligned to ensure correct vector size handling
* Remove unused f16_to_f32_table parameter from quantization and dequantization functions
* wip
* Add L2 fetch for src1 plane rows in matrix multiplication implementation
* wip
* Refactor hvx_vsf_convert_vhf to accept an additional parameter for flexibility in vector multiplication
* Refactor vec_dot_product_mix_aligned_impl to improve variable naming for clarity
* Refactor load_dual_block_generic and dequantize_row_q4_0 to improve performance
* Refactor vector operation functions to improve clarity and consistency in variable usage
* wip
* wip
* Refactor dequantize_row_q4_0_impl for improved clarity and performance in vector operations
* wip
* Update load_dual_block_generic to use intrinsics
* Refactor load_dual_block_generic and load_qual_block_generic for improved performance and clarity
* wip
* wip
* Optimize dequantize_row_q8_0 for improved performance by unrolling for loop
* wip
* wip
* fix typo
* add qurt_thread
* add thread pool
* add thread_pool obj at device ctx
* wip
* small refactoring to fit the thread pool structure
* set start/end threads for add
* init thread pool
* fix thread creation
* split complete and pending signals
* opt mulmat
* wip
* 2 threads
* back to 4 threads
* use barrier
* remove some unnecessary package
* add multi thread support for mul mat
* wip
* use qurt_barrier_t instead of qurt_signal_t
* wip
* wip
* add log
* split qnn cmake config
* create function to calculate the start and end
* wip
* fix comment
* fix comment
* fix comment
* wip
* fix typo
* move op key generation function to kOpCaps
* fix op desc print
* try fix rms_norm
* Revert "try fix rms_norm"
This reverts commit 33b296098012909cb482fc29b52b28098dc971cd.
* add quantization type support by converting them to float
* enable quantization tensor for mulmat in gpu/npu
* fix asan error
* add log and assert
* insert output convert operator after mulmat
* add log
* fix some error in running
* disable permute again
* add log
* add error function
* Revert "add error function"
This reverts commit f92ff47798ac8053fb776c55efbb1a98469c7af1.
* add log
* more log
* disable convert op in graph
* wip
* add f16 config for graph
* set f16 precision for f16 graph
* fix override data type
* add comment
* add config flag to enable quantize type
* add log
* more quantized type for cpu and gpu backend
* enable all quant types for cpu and gpu backend
* rename
* wip
* add log
* remove unused functions
* skip permute
* remove get_qnn_op_input_param_count
* fallback to generic_get_op_desc if no op_desc
* revert 'skip permute'
* Revert "revert 'skip permute'"
This reverts commit 5761e31fd23c69c4cabf6fd9fac1a0d3e5a74968.
* wip
* add log
* print qnn tensor type
* add log
* limit the max size of tensor
* add log
* fix tensor size limiter
* small improvement on tensor info printer
* disable sqrt and div to pass test-backend-ops for 8 gen 2
* remove debug log in release build
* add log
* skip permute in src
* wip
* disable reshape
* skip mul at decoder start
* wip
* add log
* add qnn_scoped_timer
* add perf tracker in graph
* add cmake options GGML_QNN_ENABLE_PERFORMANCE_TRACKING
* fix flag name
* use milliseconds
* wip
* fix comment string
* add file for profiler
* change qnn-cpu to GGML_BACKEND_DEVICE_TYPE_ACCEL, so that we can run tests on cpu
* wip
* profiler: refactoring
* wip
* add implement for print_profile_events
* set-up profiler for graph
* set profiler to graph execute
* pretty print events
* unified log print prefix
* print event count
* enable optrace
* print duration at event end
* wip
* add more detailed soc information
* wip
* move device caps array into qnn-lib.cpp
* remove lib_name in device_context
* move get_graph_key_from_cgraph to graph.cpp
* add override type for tensor key
* use override_type instead of original data type for graph key
* append op type to tensor name to fix error in qwen
* remove todo
* wip
* debug
* disable reshape
* make sure single-node ops have the same type
* fix warning at the logger
* Revert "disable reshape"
This reverts commit 5aeca4ba9bec6db3f047f9da803df20f9f6612b3.
* fix warning
* wip
* add todo for graph key generate
* rename some file to meet upstream guideline
* remove local .clang-format
* extend supported/unsupported counter to all ops
* append device name to log
* port to ggml logger
* fix warning after adapt to ggml logger
* append \n to all log
* use cast op instead of convert
* Revert "use cast op instead of convert"
This reverts commit e662fc2dfee41719aaf7bc9d75e03e8d0f7ded0f.
* fix op that needs same shape
* opt kQnnOpsTable
* refresh params name field when getting op config
* opt npu log print
* remove unused functions
* move qnn_instance function implementation into cpp
* wip
* wip
* move dl-related functions into a separate file
* use cast op for gpu
* Revert "use cast op for gpu"
This reverts commit 05df7362a15c022d05940d682e84cf480a082c6a.
* Reapply "use cast op for gpu"
This reverts commit 2520e5922a216faceb6d7efcde23dafe6947a4b3.
* fix compiling error in win
* fix align_alloc in win
* fix compiling error
* add get sys free/total mem for win
* wip
* suppress warning in win
* add missing chrono header
* set the correct qnn lib name for windows
* add flag to control cpu backend
* wip
* wip
* Revert "Reapply "use cast op for gpu""
This reverts commit f56519c374a7d46faac706cf214de48ff5fc5139.
* fix compiling error for linux build
* fix cdsprpc dynamic library name
* wip
* skip rpc load fail
* fix page_align_alloc
* suppress some warning in gcc
* wip
* reuse align_to function
* more log
* add log and fix warning
* wip
* fix asan errors and memory leaks
* fix the get_io_tensors_from_graph
* improve comment
* print GGML_QNN_DEFAULT_LIB_SEARCH_PATH
* revert some unused changes
* move library search path setter into qnn module
* fix android library loading
* skip qnn_device_get_platform_info for npu emulator
* more log
* split graph implementation into cpp file
* rename: ggml_qnn_graph -> qnn_graph
* add input/output tensor to graph
* fix assert
* wip
* add _ggml_tensor field in qnn tensor
* add comments
* add set_data_buffer with raw memory buffer
* use set_data_buffer
* op param buffer use qnn_buffer_ptr
* add qnn_mem_buffer_slice
* use qnn_buffer_ptr as tensor buffer
* use new set_data_buffer to reduce copy
* ggml_qnn_op_config: add function to set input/output tensor before init node
* remove ggml_qnn_connectable_op_config and use ggml_qnn_single_op_config instead
* wip
* add initialize_op_nodes without tensor params
* wip
* add op caps table
* merge kGgmlOpToQnnOp and kOpCaps tables
* wip
* add cache parameter to create_tensors
* add init_from_ggml_graph
* disable gelu for all backend
* wip
* move op index calc to op config module
* use the ggml_tensor as parameter of build_graph
* add log
* use create_operation_from_op_tensor in old build_graph function
* remove unused constructors
* fix parameter count
* remove unused member func/var
* make init_from_ggml_graph a class member: build_graph_from_ggml_graph
* move graph finalize into member function `finalize()`
* get graph key from ggml op tensor directly
* append output type
* reduce tensor key length
* add function to generate key from ggml_cgraph
* simplify graph cache insert and delete
* remove template param at get_qnn_graph_from_cache
* wip
* merge kQnnUnaryOpsTable and kQnnBinaryOpsTable
* refactor device_supports_op
* add log
* wip
* use framework function to check same shape
* wip
* extract some logic into a separate function
* wip
* add execution function that runs graph
* add function to create qnn graph from ggml_cgraph with cache
* execute graph directly
* return null graph key for empty graph
* add more qualcomm chipset enums
* add cap for reshape
* disable some ops
* try to skip GGML_OP_VIEW
* more log for view tensor
* append param tensor into intermediate tensor key
* use 'ordered' set
* fix warning in release
* wip
* remove unused functions
* wip
* init from last devices
* move init into constructor
* wip
* add static assert to device table
* make kDeviceCaps as constexpr
* get free memory and total memory
* add optimize flag for qnn backend
* redo: add convert nodes
This reverts commit 8448acd5ebf8fe86ab9d25313b64a15c811ef96e.
* align clang format with cann
* rename binary_op -> general_op
because there are some ops that will only take 1 param
* Revert "rename binary_op -> general_op"
This reverts commit 5be63b1a0dc4614457785367dade62158fe46214.
* wip
* add GGML_OP_PERMUTE
* add GGML_OP_VIEW and GGML_OP_GET_ROWS
* wip
* Revert "wip"
This reverts commit 772462ca6cfa01ea31bde725c2da60076ad9385f.
* ggml_qnn_op_config now manages the construction of ggml_qnn_tensor
* wip
* add interface ggml_qnn_op_config
* add ggml_qnn_list_op_config
* add create_tensor and move tensor bind to execute
* wip
* rename: ggml_qnn_list_op_config -> ggml_qnn_matmul_op_config
* add tensortype to allow native tensor
* remove ggml_tensor param at ggml_qnn_tensor::create_tensor
* postpone the tensor id allocation to add_node
* add ggml_qnn_op_config_base
* trivial change to reduce the params of a function
* split bind_tensors into bind_input_tensors and bind_output_tensors
* implement ggml_qnn_single_op_config::create_tensors
next will set the parameter of transpose
* tensor: add bind buffer
* add parameter tensor type
* implement add_tensor_param
* set qnn_instance only at constructor
* set transpose tensor param
* move create_op_constructor into op-config module
* create QNN_OP_MAT_MUL from ggml_qnn_matmul_op_config
* try fix crash
* fix compiling error at older ndk (r23c)
* fix crash
* fix parameter tensor name
* update tensor dimension assignment and add TODO
* fix mat_mul graph creating
* fix MUL_MAT_256x16x10x1_256x1x10x1_16x1x10x1
* append type to graph cache key
* wip
* fix supported op
* update comment
* disable op other than add and mat_mul
* add convert op to adapt multi input/output format
* disable f16 for cpu backend according to official doc
https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/cpu_backend.html#supported-operations
* add supported data types flags in each backend
* remove unused functions
* append output type to graph key
* fix gpu backend by disable the different data type op
* fix cpu backend support ops
* fix duplicated tensor name
* append op name
* suppress warning
* remove unused code