* Add Q8_0 OpenCL kernel
Co-authored-by: yunjie <yunjie@qti.qualcomm.com>
* opencl: fix build for non-adreno
* opencl: refactor q8_0
* opencl: enforce subgroup size of 64 for adreno for q8_0
* For the A750 and older generations, the subgroup size can be 64 or 128.
This kernel assumes a subgroup size of 64.
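A minimal sketch of how the 64-wide requirement can be expressed, assuming the `cl_qcom_reqd_sub_group_size` extension is available; the macro and kernel names are placeholders, not the ones in the tree:

```c
// Illustrative only: pin the kernel to Adreno's 64-wide ("half") subgroups
// when the Qualcomm extension is present, otherwise compile without the
// attribute so non-Adreno builds still work.
#if defined(cl_qcom_reqd_sub_group_size)
#pragma OPENCL EXTENSION cl_qcom_reqd_sub_group_size : enable
#define REQD_SUBGROUP_SIZE_64 __attribute__((qcom_reqd_sub_group_size("half")))
#else
#define REQD_SUBGROUP_SIZE_64
#endif

REQD_SUBGROUP_SIZE_64
kernel void mul_mv_q8_0_f32_example(global const uchar * src0,
                                    global const float * src1,
                                    global float       * dst) {
    // body elided; the reduction assumes get_sub_group_size() == 64
}
```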
* opencl: suppress warning when adreno kernels are disabled
---------
Co-authored-by: yunjie <yunjie@qti.qualcomm.com>
Co-authored-by: Li He <lih@qti.qualcomm.com>
* opencl: flatten `q6_K` and add `kernel_mul_mv_q6_K_f32_flat`
* opencl: clean up
* opencl: refactor q6_K mv - put loop body in `block_q_6_K_dot_y_flat`
* opencl: tweak the workgroup size a bit
* opencl: output 4 values per subgroup for `kernel_mul_mv_q6_K_f32_flat`
* opencl: proper alignment for q6_K
* opencl: boundary handling for flattened q6_K mv
* opencl: rename q6_K mv kernel file
* opencl: put flattened q6_K mv in its own file
* opencl: use lower k in file name
* opencl: use K in variable names
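For context, a sketch of the q6_K super-block that gets flattened (field names follow ggml, `QK_K` is 256, and the fp16 super-block scale is shown as raw bits). Flattening here roughly means splitting this array-of-structs into separate contiguous `ql`/`qh`/`scales`/`d` buffers so each can be loaded with its natural alignment:

```c
#include <stdint.h>

// ggml's q6_K super-block: 256 6-bit quants stored as a low-4-bit array plus
// a high-2-bit array, 16 signed 8-bit sub-block scales, and one fp16 scale.
#define QK_K 256
typedef struct {
    uint8_t  ql[QK_K / 2];       // lower 4 bits of the quants
    uint8_t  qh[QK_K / 4];       // upper 2 bits of the quants
    int8_t   scales[QK_K / 16];  // 8-bit sub-block scales
    uint16_t d;                  // super-block scale (fp16 bit pattern)
} block_q6_K;
```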
* opencl: add mm_q8_0_f32
* opencl: fix data loading for incomplete tile
* opencl: use q8_0 mm for larger matrices
* opencl: add some tests to cover the path
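For reference, the q8_0 block that `mm_q8_0_f32` consumes, plus a scalar view of its dequantization (the fp16 scale is passed as an already-converted float for brevity; names are illustrative):

```c
#include <stdint.h>

// ggml's q8_0 block: one fp16 scale followed by 32 signed 8-bit quants.
#define QK8_0 32
typedef struct {
    uint16_t d;          // scale (fp16 bit pattern in ggml)
    int8_t   qs[QK8_0];  // quants
} block_q8_0;

// Scalar dequantization: y[i] = d * qs[i]. The mm kernel does the same math
// tile by tile, padding its loads for incomplete tiles at the matrix edge.
static inline void dequantize_q8_0_ref(const int8_t * qs, float d, float * y) {
    for (int i = 0; i < QK8_0; ++i) {
        y[i] = d * qs[i];
    }
}
```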
* implement set_rows with i32 index
* template fix
* test quantized path
warnings--
* Apply suggestions from code review
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* forgotten name change
* deduplicate cuda/sycl and test-fix
* indent++
* vulkan: support set_rows with i32 index type (#16162)
* disable i32 index for webgpu for now
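A hypothetical scalar reference for the F32 case of the operation being extended here, with a 32-bit index tensor (names and layout are illustrative; the real op also covers the quantized destination types tested above):

```c
#include <stdint.h>

// Row i of src is copied into row idx[i] of dst; idx may now be int32_t
// in addition to the original int64_t index type.
void set_rows_i32_ref(float * dst, const float * src, const int32_t * idx,
                      int64_t n_rows, int64_t n_cols) {
    for (int64_t i = 0; i < n_rows; ++i) {
        float *       d = dst + (int64_t) idx[i] * n_cols;  // destination row
        const float * s = src + i * n_cols;                 // source row
        for (int64_t j = 0; j < n_cols; ++j) {
            d[j] = s[j];
        }
    }
}
```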
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
- flatten mxfp4 and add a bit-wise packed fp4->fp16 conversion function (replaces the LUT)
- MoE kernel optimizations
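One possible bit-wise mapping from a packed fp4 (e2m1) nibble to fp16 bits, shown only to illustrate how the LUT can be replaced; the actual kernel may use a different, vectorized formulation:

```c
#include <stdint.h>

// e2m1 layout: sign | 2-bit exponent (bias 1) | 1-bit mantissa.
// Representable magnitudes: 0, 0.5, 1, 1.5, 2, 3, 4, 6.
static inline uint16_t e2m1_to_fp16_bits(uint8_t v) {
    const uint16_t s = (uint16_t)(v & 0x8) << 12;  // sign -> fp16 bit 15
    const uint16_t e = (v >> 1) & 0x3;             // 2-bit exponent
    const uint16_t m = v & 0x1;                    // 1-bit mantissa

    if (e != 0) {
        // normal value: (1 + m/2) * 2^(e-1); fp16 exponent bias is 15
        return s | (uint16_t)((e - 1 + 15) << 10) | (uint16_t)(m << 9);
    }
    // e == 0: either signed zero or the lone subnormal 0.5
    return m ? (uint16_t)(s | (14u << 10)) : s;
}
```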
---------
Co-authored-by: Li He <lih@qti.qualcomm.com>
* ggml : stop adding an extra dim in timestep embedding
This commit updates the ggml_timestep_embedding function to no longer
add an extra dimension when the specified dimension is odd.
The motivation for this change is that the extra dimension was
unnecessary: the kernels did not expect it, and it resulted in
uninitialized memory for the second-to-last dimension.
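A scalar sketch of the embedding as it behaves after this change; the function and argument names are illustrative (the ggml op works on whole tensors of timesteps):

```c
#include <math.h>

// For one timestep t and an output of `dim` floats: the first half holds
// cosines, the second half sines, and when `dim` is odd the final element
// is zeroed instead of growing the output by an extra column.
void timestep_embedding_ref(float * dst, float t, int dim, int max_period) {
    const int half = dim / 2;
    for (int i = 0; i < half; ++i) {
        const float freq = expf(-logf((float) max_period) * (float) i / (float) half);
        dst[i]        = cosf(t * freq);
        dst[half + i] = sinf(t * freq);
    }
    if (dim % 2 != 0) {
        dst[dim - 1] = 0.0f;  // zero-pad the odd trailing element
    }
}
```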
* ggml-cuda : fix padding in timestep embedding kernel
This commit removes the zeroing out of the last dimension now that we
are not adding the extra padding dimension.
* ggml-metal : fix padding in timestep embedding kernel
This commit fixes the zero padding for odd dimensions in
the timestep embedding kernel.
* ggml-opencl : fix padding in timestep embedding kernel
This commit fixes the zero padding for odd dimensions in
the timestep embedding kernel.
* ggml-sycl : fix padding in timestep embedding kernel
This commit fixes the zero padding for odd dimensions in
the timestep embedding kernel.
* ggml-vulkan : fix padding in timestep embedding kernel
This commit fixes the zero padding for odd dimensions in
the timestep embedding kernel.
* ggml-cpu : fix padding in timestep embedding function
This commit removes the zeroing out of the last dimension now that we
are not adding the extra padding dimension.
* add F16/F16 fa support
* fix kernel init
* use mad instead of fma
* use inline function
* mark FA with sinks as unsupported for now
* add pragma unroll to loops
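On the mad-versus-fma point: in OpenCL C, `fma()` must be computed as a single correctly rounded fused operation, while `mad()` may trade accuracy for speed, which is the motivation for the switch. A small OpenCL C sketch with illustrative names:

```c
// Requires cl_khr_fp16 for half-precision math built-ins.
#pragma OPENCL EXTENSION cl_khr_fp16 : enable

half dot8_f16(const half * q, const half * k) {
    half acc = 0.0h;
    #pragma unroll
    for (int i = 0; i < 8; ++i) {
        acc = mad(q[i], k[i], acc);  // previously: fma(q[i], k[i], acc)
    }
    return acc;
}
```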
* ggml : add ggml_scale_bias
* ggml_vec_mad1_f32
* add more simd
* add CUDA
* sycl
* vulkan
* cann (placeholder)
* opencl
* will this fix cpu?
* fix cuda
* suggestions from coderabbit
* fix cann compile error
* vDSP_vsmsa
* rm __ARM_FEATURE_SVE
* use memcpy for op params
* make code look more consistent
* use scalar for __ARM_FEATURE_SVE
* add x param to ggml_vec_mad1_f32
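A scalar sketch of the helper all of the backend paths above implement, i.e. y[i] = x[i]*s + b, the same affine transform vDSP_vsmsa provides on Apple platforms; the exact ggml signature may differ:

```c
// Illustrative reference for a ggml_vec_mad1_f32-style helper as used by
// ggml_scale_bias: read from x, scale by s, add bias b, write to y.
static inline void vec_mad1_f32_ref(const int n, float * y, const float * x,
                                    const float s, const float b) {
    for (int i = 0; i < n; ++i) {
        y[i] = x[i] * s + b;
    }
}
```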