* Add power management utilities to NPU device context and update DCVS settings
* Update DCVS settings in power_utils to use v3 API and enhance power management
* wip
* Enhance dequantization functions by adding load_dequant_table support and updating signatures for improved performance
* use lut
* wip
* fix test failure
* wip
* Refactor load_qual_block_generic to improve block handling and optimize vector operations
* Enhance load_dual_block_generic and load_qual_block_generic to accept a mask parameter for improved block handling
* Refactor flash_attn_impl to optimize mask l2 prefetch
* wip
* wip
* wip
* wip
* add log
* link against shared libraries instead of static ones
* fix swiglu
* wip
* refactor expf_fix to handle overflow for different data types
* enhance is_glu_op_supported to validate shapes for multiple sources
* wip
* refactor logging macros to use hexagon namespace and improve formatting
* fix printf format error
* wip
* refactor: update static_assert messages for block size validation and add HVX_VectorPred_x3 type alias
* rename
* feat: enhance fa with mask
* wip
* wip
* refactor: replace instances of Q6_V_vzero() with kZeroV for consistency
* wip
* wip
* wip
* fix: improve address alignment check in HVX_Vector handling
* refactor: streamline vector dot product implementations for improved readability
* refactor: q4k add hvx intrinsic impl
* refactor: enhance dequantize_row_q4_K for clarity and performance
* refactor: optimize scale mask usage in dequantization functions for improved performance
* refactor: optimize dequantize_row_q4_K for intrinsic usage and performance improvements
* refactor: move GLU operation implementation into separated file
* sync after swiglu
* wip
* wip
* wip
* feat: increase prc main thread stack size
* fix: replace hardcoded stack size with NPU_THREAD_STACK_SIZE constant
* wip
* feat: add optimized vector operations for exponential and division with overflow handling
* wip
* feat: refactor exponential function to handle overflow and underflow with improved logic
* wip
* wip
* feat: add vector loading and scaling functions for improved performance in block processing
* wip
* feat: optimize block loading by refactoring scale index handling for improved performance
* use Q6_Vb_vlut32_VbVbR_nomatch instead
* feat: enhance scale loading by adding static assertion and restructuring block handling
* wip
* feat: refactor vec_dot_product_mixed_impl for improved clarity and performance
* wip
* feat: simplify vector loading functions and improve alignment handling
* wip
* feat: enhance scale loading mask with quantization block size validation
* wip
* feat: implement make_scale_load_mask function and refactor vector handling in vec_ops
* feat: enhance load_dual_block_generic to include scale indices for improved vector loading
* revert q8 dequant
* wip
* feat: optimize dequantization functions by removing unnecessary masking and updating lookup methods
* wip
* wip
* add qurt_mutex
* Add DMA transfer class and integrate into thread pool
* Enhance DMA transfer functionality by adding support for multiple descriptors and initiating transfers in parallel
* fix dma crash
* fix failed unit tests
* wip
* use alignas
* Improve DMA transfer error handling and update descriptor completion check
* Fix VTCM cache size calculation in element-wise operations
* Add cache clean operations before DMA transfers in element-wise operations
* reduce cache clean operations
* Refactor DMA transfer functions to support 1D operations and rename for clarity
* Enhance DMA transfer functionality by adding 2D submission support and improving descriptor initialization
* Update read buffer method to support forced invalidation and remove unnecessary invalidation calls in element-wise operations
* wip
* Improve DMA transfer handling in mul_mat_gemv_impl by replacing memcpy with initiate_dma_row_transfer and adding wait_for_dma logic
* fix 2d dma
* feat: add DMA plane cache
* rename
* wip
* use memcpy for debug
* fix cache plane calc
* refactor: remove debug logging from mul_mat_impl and optimize cache handling
* rename
* fix 2d dma type
* refactor: enhance DMA transfer handling in mul_mat_gemv_impl and wait functions
* refactor: optimize DMA transfer handling in mul_mat_gemv_impl and wait functions
* wip
* wip
* move op impl into sub dir
* add log
* fix: correct pointer usage in mul_mat_gemv_impl for next plane access
* fix: improve DMA transfer error handling in mul_mat_impl and mul_mat_gemv_impl
* fix: fix crash by using the entire row bytes
* wip
* wip
* fix: prevent parallelization for scalar src1 in is_mul_mat_supported
* fix: add dimension checks for 2D DMA transfers and fallback to 1D if necessary
* wip
* fix: enable thread barrier for matrix multiplication operations
* feat: add synchronization checks for tensor operations and update related functions
* wip
* fix: remove invalidation flag from get_read_buffer calls in element-wise and matrix multiplication operations
* Revert "fix: remove invalidation flag from get_read_buffer calls in element-wise and matrix multiplication operations"
This reverts commit af3441e67e706b2e5122369dc160353796867dd3.
* wip
* wip
* add comment
* fix: improve DMA transfer handling in mul_mat_gemv_impl for quantized source tensors
* add log
* try fix mulmat gemv
* wip
* fix: enhance DMA transfer handling in mul_mat_gemv_impl for quantized source tensors
* fix: optimize cache offset calculation and remove redundant swap in mul_mat_gemv_impl
* fix: refactor DMA transfer handling in mul_mat_gemv_impl for improved clarity and maintainability
* wip
* wip
* wip
* fix: enhance mul_mat_impl for improved cache handling and clarity
* fix: refactor tensor unflattening and DMA transfer initialization for improved clarity and type safety
* fix: improve cache handling of quant
* wip
* fix: improve cache handling in mul_mat_impl and mul_mat_gemv_impl for better memory efficiency
* rename
* add load_hexa_block_generic
* wip
* extract dequant block into separated function
* refactor: enhance dequantization functions with table parameter
* fix load_dual_block_generic
* refactor: rename dequantization functions for clarity and enhance block handling
* refactor: simplify dequantization logic by consolidating block handling and removing unused parameters
* wip
* wip
* feat: add make_qs_load_mask function and update load_dual_block_generic to use qs_indices
* fix load_dual_block_generic
* refactor: update load functions to use qs_indices for improved block loading
* wip
* fix: update loop indices and boundary checks to use size_t for better efficiency
* wip
* update make_scale_load_mask, to make it available for q8
* feat: add vec_dot_product_quant_impl for quantized dot product computation
* refactor: move some quant functions into a dedicated file
* refactor: rename dequantization functions for clarity and consistency
* wip
* feat: enhance vec_dot_product_quant_impl with dual dequantization and improved assertions
* add vec_dot_product_vqf32_q40_f32
* wip
* wip
* wip
* wip
* implement vec_mpy_qf32_qf32_qf32 function and update vec_dot_product_vqf32_q40_f32 to use it
* wip
* add src0_plane_write_cache_offset
* wip
* enhance mul_mat_f32 to handle NPU_DATA_TYPE_Q4_0 for quantized matrix multiplication
* wip
* wip
* update test func
* refactor mul_mat_gemv_quant_impl to use get_nb for row stride and remove unused test function in init_f16_f32_table
* wip
* Add support for 4-block dequantization in vec_quant and update dot product implementation
* Refactor vec_dot_product_quant_impl to improve variable handling and enhance readability
* Refactor vec_dot_product_quant_impl to replace template function with inline vector operations
* use Q6_Vqf32_vmpy_VsfVsf instead of Q6_Vqf32_vmpy_Vqf32Vqf32
* Revert "use Q6_Vqf32_vmpy_VsfVsf instead of Q6_Vqf32_vmpy_Vqf32Vqf32"
This reverts commit 54839166fddbe40a0392adee5863c59070ccdbe4.
* wip
* improve log print in graph
* Refactor batched_row_dot to accept additional arguments and remove batched_row_dot_with_table
* Refactor synchronization functions to include previous operation and NE type parameters
* Refactor synchronization checks in several operations
* Update synchronization checks to include NPU_OP_COUNT in required conditions
* Add performance tracking to buffer management functions
* add memset
* add log
* fix: update backend device type from ACCEL to IGPU
* fix comment
* rpc : add support for multiple devices
Allow rpc-server to expose multiple devices from a single endpoint.
Change RPC protocol to include device identifier where needed.
closes: #15210
* fixes
* use ggml_backend_reg_t
* address review comments
* fix llama-bench backend report
* address review comments, change device naming
* fix cmd order
* vulkan (DRAFT): split shader generation by GLSL source file, to improve incremental build times
* support dep-files so shaders are recompiled if their included files change
* rename shader files which are used as "headers" to use .glsl extension
* move glslc extension detection shaders to separate folders
* the above is to prevent them from getting glob'd with the actual compute shaders that need to be compiled
* vulkan : only write embedded shader .hpp/.cpp when they change
* avoid recompiling ggml-vulkan.cpp when editing shaders
* pass single --source argument instead of --input-dir & --filter to shader gen
* check for source file match earlier
* fix hang in vulkan-shaders-gen when there are compilation errors
* early out did not decrement compile_count
* clean up
* fix glslc integer dot product test
* unconditionally write the embedded shader cpp output
* replace output filepath in generated dep-files to match output in CMakeLists
---------
Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
* vulkan: Replace uses of maxMemoryAllocationSize and VK_WHOLE_SIZE
Replace maxMemoryAllocationSize check with maxBufferSize when creating buffers.
The maxMemoryAllocationSize limit is a "soft" limit and allocations can succeed
beyond that limit. This allows > 4GB buffers to be allocated on some
implementations (e.g. NVIDIA) and tensors this large can be used for im2col
and mul_mat.
For temporary buffers (prealloc_x/y/etc) check against maxStorageBufferRange.
I'm not sure this check is ideal, but we always use these buffers as a single
full size binding and the limit may be smaller than maxMemoryAllocationSize
or maxBufferSize, so I think this is reasonable.
Replace descriptor range uses of VK_WHOLE_SIZE with a manually computed range.
The maxStorageBufferRange may be smaller than the maxBufferSize or
maxMemoryAllocationSize (and the Vulkan spec warns about this in a note) and
it's invalid usage if VK_WHOLE_SIZE computes a range larger than
maxStorageBufferRange.
With this change, it should be possible to generate videos using wan networks
in stable-diffusion.cpp.
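The limit interplay described above can be sketched as plain logic: never bind a descriptor with VK_WHOLE_SIZE, but clamp the remaining bytes of the buffer against maxStorageBufferRange, since that limit may be smaller than maxBufferSize or maxMemoryAllocationSize. This is an illustrative sketch with hypothetical limit values; the real code queries them via VkPhysicalDeviceLimits and VkPhysicalDeviceMaintenance4Properties.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Hypothetical device limits; the real code queries these from Vulkan.
struct device_limits {
    uint64_t max_buffer_size;          // e.g. VkPhysicalDeviceMaintenance4Properties::maxBufferSize
    uint64_t max_storage_buffer_range; // e.g. VkPhysicalDeviceLimits::maxStorageBufferRange
};

// Compute an explicit descriptor range instead of VK_WHOLE_SIZE:
// the remaining bytes of the buffer, clamped to maxStorageBufferRange.
uint64_t descriptor_range(const device_limits & lim, uint64_t buf_size, uint64_t offset) {
    assert(offset <= buf_size);
    const uint64_t remaining = buf_size - offset;
    return std::min(remaining, lim.max_storage_buffer_range);
}
```

With an 8 GiB buffer and a 2 GiB maxStorageBufferRange, VK_WHOLE_SIZE would imply an invalid 8 GiB range, while this helper clamps it to 2 GiB.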
* vulkan: Add env var GGML_VK_FORCE_MAX_BUFFER_SIZE and use stoull
When computing sinks, the cm1 shader was looping r from 0 to Br rather than
to rows_per_thread. I must have copied this from the scalar path (where it is
correct), and somehow it wasn't causing failures on current drivers.
* First attempt
* No permute during convert (fixes qk tensors), proper norm application.
* RoPE = NeoX
* Coherence!
* Migrate xielu params from tensors to hyperparameters
* Simple CUDA kernel
* Revert stupid LLM refactorings
* Chat template support
* configchecker / flake8 errors
* Reorder unary.cu
* I do conclude that LLMs are, in fact, stupid.
* Fix after merge
* Final newline
* Make xIELU an UNARY_OP
* Final newline
* Correctly account for parameter shift
* Argh.
* Update ggml/src/ggml-cpu/unary-ops.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Refactor: remove unused methods, inline and factorize softplus, add const modifiers
* Revert CUDA changes, implement xIELU as a separate OP
* Pesky newline
* Add float2half / half2float for F16 inputs/outputs
* CUDA variants, attempt 2
* Actually, attempt 3
* Update ggml/src/ggml-cuda/unary.cu
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* Missing convert header
* Proper formula and reference for xIELU in the comments.
* Modify unary-ops.cpp to add the functor-based logic besides the template system to retain optimizations
* Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Add tensor mappings for Apertus to global list instead
* Fix lazy on scalars
* Update ggml/src/ggml-cuda/unary.cu
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* Add comment about the constraints on positive/negative alpha
* Change `softplus` to `ggml_softplus`
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* HIP: Disable ROCWMMA fatt on CDNA when compiled against ROCWMMA 2.0.0
rocwmma 2.0.0 includes a bug in the code faking fp16 accumulation on CDNA
* CUDA: Fix volta condition in ggml_cuda_should_use_wmma_fattn
* Work on rope
* Simplify inplace operation generation and combine mul/add generation
* Work on rope variants
* implement neox rope
* rope complete
* Add sub,div,glu operators
* implement scale op
* Update cpy shader to handle cont/more types
* formatting
* Update test vars printing for rope,rms_norm
* Avoid ROPE hardcoded constants
* Add TODO to change ROPE constants to enum
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* fix TODO comment
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
This commit removes the `-dev` suffix from the version string in
CMakeLists.txt and the release script. The version will now be
formatted simply as `MAJOR.MINOR.PATCH`.
This PR adds additional information to the error message emitted when loading a backend library via ld_load_library() fails. This helps spot why the backend library did not load (missing library, missing dependency, unresolved symbol, etc.).
* vulkan: 64-bit im2col
Add variants of the im2col shaders that use buffer_device_address/buffer_reference,
and use 64-bit address calculations. This is needed for large convolutions used in
stable-diffusion.cpp.
* fix validation error for large im2col
* metal : support mul_mm with src1->type == GGML_TYPE_F16
* metal : support mul_mm_id with src1->type == GGML_TYPE_F16
[no ci]
* metal : mul_mm support ne00 % 32 != 0
* metal : support mul_mm_id with ne00 % 32 != 0
* cont : remove unnecessary unrolls
* cont : simplify data loading
* metal : optimize mul_mm when output bounds checks are not needed
* vulkan: handle mat_mul with A matrix > 4GB
This change splits mat_mul operations with huge A matrix into chunks in the M
dimension. This works well for stable-diffusion use cases where the im2col
matrix has very large M.
Fix the order of setting the stride in mul_mm_cm2 - setting the dimension
clobbers the stride, so stride should be set after.
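The chunking idea above reduces to simple arithmetic: pick how many rows of A fit under the per-buffer limit, then split M into that many-row chunks. A minimal sketch, assuming the chunk size is limited by a byte budget (function and parameter names are illustrative, not the actual ggml-vulkan code):

```cpp
#include <cassert>
#include <cstdint>

// Split an (M x K) A matrix into row chunks so each chunk's bytes stay
// under max_bytes. Assumes max_bytes >= one row (K * elem_size bytes).
uint64_t num_chunks(uint64_t M, uint64_t K, uint64_t elem_size, uint64_t max_bytes) {
    const uint64_t row_bytes      = K * elem_size;
    assert(max_bytes >= row_bytes);
    const uint64_t rows_per_chunk = max_bytes / row_bytes;     // rows that fit in one chunk
    return (M + rows_per_chunk - 1) / rows_per_chunk;          // ceil(M / rows_per_chunk)
}
```

For the stable-diffusion im2col case, M is very large while K is moderate, so the chunk count grows with M while each dispatch stays within the buffer limit.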
* build fixes
The "Clamp" spec constant is already based on whether KV is a multiple of Bc,
so use that to control whether bounds checking is performed. Add bounds checking
to the scalar and coopmat1 paths. Coopmat2 didn't need any changes (the K/V
tensors are already optionally clamped, nothing else needed to be changed).
* CUDA: mul_mat_id for mmf for bs <= 64 for f16 and bs <= 32 for f32
This commit adds mul_mat_id support for ncols_dst >= 16. It does this by
packing ncols_dst tiles into the blockDim.y.
My tests on an RTX 3090 show that this is faster than the cuBLAS fallback
up to bs=64 for f16 and up to bs=32 for f32
* Review: refactor if statement
The dequantize functions are copy/pasted from mul_mm_funcs.comp with very few
changes - add a_offset and divide iqs by 2. It's probably possible to call
these functions from mul_mm_funcs and avoid the duplication, but I didn't go
that far in this change.
* devops: move s390x and ppc64le ci build
we have access to ubuntu-24.04-s390x and ppc64le images now
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: disable ppc64le for now since they have compiler errors
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: stop warnings as errors
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: switch to non-macro flag
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: going the llama macro route
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: add big-endian gguf test models
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: disable ppc64le to test s390x, check test build
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: dup .gguf.inp files for big-endian tests
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: dup .gguf.out files for big-endian too
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: add python setup and endian byteswap
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: poor thing does not have s390x python3
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: add missing rust compiler for s390x
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: try rust actions runner
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* Revert "devops: try rust actions runner"
This reverts commit 3f8db04356033d6c1d7eccc75ca396bc5298250c.
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: try a different path for rust
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: dump home directory and user info
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: install gguf-py only
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: missed relative path
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: remove big-endian files since local swapping is working
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: revert test-tokenizer-0 cmakelists
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* Fix unicode flags conversion from and to uint16_t
Bitfields are allocated in a different order on s390x
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
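The s390x issue above can be made concrete: the order in which a compiler packs bitfields into a word is implementation-defined and differs between little-endian x86 and big-endian s390x, so converting flags to/from uint16_t must use explicit shifts rather than type-punning the bitfield struct. A minimal sketch; the field names are illustrative, not the actual `unicode_cpt_flags` members.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative flag set; the real struct has more fields.
struct cpt_flags {
    bool is_letter;
    bool is_digit;
    bool is_whitespace;
};

// Endian-independent packing: each flag gets a fixed bit position,
// regardless of how the compiler would lay out a bitfield.
uint16_t flags_to_u16(const cpt_flags & f) {
    return (uint16_t)((f.is_letter     ? 1u : 0u)
                    | (f.is_digit      ? 2u : 0u)
                    | (f.is_whitespace ? 4u : 0u));
}

cpt_flags flags_from_u16(uint16_t v) {
    return { (v & 1u) != 0, (v & 2u) != 0, (v & 4u) != 0 };
}
```

The round trip is now identical on both byte orders, which is what the TODO about reimplementing `unicode_cpt_flags` endian-independently is after.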
* Simplify byteswap command
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* Add byteswapping and git-lfs for test-tokenizers-ggml-vocabs
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* Fix endianness detection in vocab loader
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* Disable test-thread-safety on s390x
In this test a model is downloaded,
then immediately loaded to check if more downloads are needed,
and then used for the test.
There is no clean way to separate all those steps
to add byteswapping between them, so just skip this test.
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* Fix q8_0 test in test-quantize-fns
vec_signed uses an unexpected rounding mode.
Explicitly use a different rounding function.
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: add big-endian stories260K
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: add s390x test-eval-callback
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: fix test does not exist
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: fix model not found llama-eval-callback
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* Fix q3_K dot product error in test-quantize-fns on s390x
Array q8bytes had only 4 elements allocated, but 8 elements were accessed.
This led to out-of-bounds writes, later out-of-bounds reads of the
overwritten values, and an incorrect result.
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: re-enable ppc64le for testing
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: activate test-thread-safety for s390x
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: disable ppc64le tests
for some reason it keeps failing the test-thread-safety tests and I do
not have a machine able to reproduce the failures.
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* devops: LLAMA_FATAL_WARNINGS=ON
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* Correct repository URL for s390x for test-thread-safety model
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* Fix fs_get_cache_directory
Ensure it works even if both XDG_CACHE_HOME and HOME are unset.
This might happen in containers.
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
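The fallback chain implied above can be sketched as a pure function: prefer XDG_CACHE_HOME, then HOME/.cache, then a relative default when both are unset (as can happen in containers). The parameters stand in for getenv() results; the actual fs_get_cache_directory helper's paths and naming may differ.

```cpp
#include <cassert>
#include <string>

// Resolve a cache directory from (possibly null) environment values.
// Hypothetical sketch: the subdirectory name and last-resort path are
// illustrative assumptions, not the exact llama.cpp behavior.
std::string cache_dir(const char * xdg_cache_home, const char * home) {
    if (xdg_cache_home != nullptr) {
        return std::string(xdg_cache_home) + "/llama.cpp";
    }
    if (home != nullptr) {
        return std::string(home) + "/.cache/llama.cpp";
    }
    return ".cache/llama.cpp"; // last resort: relative path, never dereference null
}
```

Keeping the function total over null inputs is the point of the fix: no branch dereferences a missing environment variable.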
* Re-enable CI for ppc64le
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* Fortify ggml_rope_impl
Only memcpy data from the sections argument if it's non-NULL.
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
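The hardening above is a null-guarded copy: take data from the optional sections argument only when the caller supplied it. A minimal sketch with an illustrative name and a zero-fill default; this is not the exact ggml_rope_impl signature.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Copy optional per-section parameters; zero the destination when the
// caller passed no sections (the hypothetical safe default assumed here).
void copy_rope_sections(int32_t * dst, const int32_t * sections, size_t n) {
    if (sections != nullptr) {
        memcpy(dst, sections, n * sizeof(int32_t));
    } else {
        memset(dst, 0, n * sizeof(int32_t));
    }
}
```

Without the guard, a null sections pointer reaches memcpy directly, which is undefined behavior even for n == 0 on some interpretations and a crash in practice for n > 0.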
* Add TODO in struct unicode_cpt_flags to reimplement it in endian-independent way
* Update URL for big-endian model
* Update .github/workflows/build.yml
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Update remaining mentions of BE models to ggml-org/models repo
---------
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@linux.ibm.com>
Co-authored-by: Aleksei Nikiforov <103434461+AlekseiNikiforovIBM@users.noreply.github.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>