* Switched web UI to hash-based routing
* Added hash to a previously missed goto() function call
* Removed outdated SPA handling code
* Fixed broken sidebar home link
* vendor : update httplib
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
* common : use cpp-httplib as a cURL alternative for downloads
The existing cURL implementation is intentionally left untouched to
prevent any regressions and to allow for safe, side-by-side testing by
toggling the `LLAMA_CURL` CMake option.
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
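As a rough illustration, a download via cpp-httplib could look like the sketch below (the helper name and flow are hypothetical, not the actual common/ implementation; assumes OpenSSL support is enabled for https:// URLs):

```cpp
// Hypothetical sketch of a cpp-httplib download helper; the real code in
// common/ may differ.
#define CPPHTTPLIB_OPENSSL_SUPPORT // needed for https:// URLs
#include "httplib.h"

#include <fstream>
#include <string>

static bool download_file(const std::string & host, const std::string & path,
                          const std::string & dest) {
    httplib::Client cli(host);     // e.g. "https://huggingface.co"
    cli.set_follow_location(true); // follow redirects, like curl -L

    std::ofstream out(dest, std::ios::binary);
    auto res = cli.Get(path, [&](const char * data, size_t len) {
        out.write(data, (std::streamsize) len); // stream the body to disk
        return out.good();                      // abort the transfer on write error
    });
    return res && res->status == 200 && out.good();
}
```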
* ggml : bump the minimum supported Windows version to Windows 10
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
---------
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
* ci : create git tags for released docker images
When releasing a docker image for build number X, we should also create
the corresponding git tag. This allows users to easily check out the
corresponding source tree for a given docker image.
* Update .github/workflows/docker.yml
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Update .github/workflows/docker.yml
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Apply suggestion from @CISC
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* CUDA: add a fused top-K MoE kernel
This kernel does the following:
1. softmax over the logits per token [n_experts, n_tokens]
2. argmax reduce over the top-k (n_experts_used) logits
3. write weights + ids to global memory
It is intended as a fusion of the softmax->top-k->get_rows pipeline for MoE models (a scalar reference sketch follows this list)
* Refactor into ggml_cuda_should_use_topk_moe
* Review: Use better coalescing pattern, use WARP_SIZE, store logits into registers beforehand
* Review: format + micro-optimizations
* Fix bug: fix tie breakers
* Add optional norm + clean-up code
* Use smem for final write
* Add bounds check
* Use better memory pattern for writeback
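A scalar reference of the fused pipeline described above, as a sketch only: the function name is hypothetical, the real kernel runs in parallel across warps, and breaking ties toward the lower index is one possible convention for the tie-breaker fix mentioned in the list:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <numeric>
#include <vector>

// Scalar sketch of softmax -> top-k -> gather for one token's expert logits.
static void topk_moe_reference(const float * logits, int n_experts, int n_expert_used,
                               float * weights /*[n_expert_used]*/,
                               int32_t * ids   /*[n_expert_used]*/) {
    // 1. numerically stable softmax over the n_experts logits
    const float max_logit = *std::max_element(logits, logits + n_experts);
    std::vector<float> probs(n_experts);
    float sum = 0.0f;
    for (int i = 0; i < n_experts; ++i) {
        probs[i] = std::exp(logits[i] - max_logit);
        sum += probs[i];
    }
    for (int i = 0; i < n_experts; ++i) {
        probs[i] /= sum;
    }

    // 2. select the top-k (n_expert_used) experts; ties break toward the
    //    lower index, matching a left-to-right argmax reduce
    std::vector<int32_t> order(n_experts);
    std::iota(order.begin(), order.end(), 0);
    std::partial_sort(order.begin(), order.begin() + n_expert_used, order.end(),
                      [&](int32_t a, int32_t b) {
                          return probs[a] > probs[b] || (probs[a] == probs[b] && a < b);
                      });

    // 3. write the selected weights + ids out (the "get_rows" step)
    for (int k = 0; k < n_expert_used; ++k) {
        ids[k]     = order[k];
        weights[k] = probs[order[k]];
    }
}
```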
This commit adds support for passing a prompt file to the model
conversion targets/scripts. It also updates logits.cpp to print out
embedding information in the same format as when running the original
embedding model.
The motivation for this is that it allows us to pass files of different
sizes when running the converted models and validating the logits.
This can be particularly important when testing the sliding window
functionality of models where the sequence length needs to exceed a
certain number of tokens to trigger the sliding window logic.
This commit adds support for using an externally started llama-server
instance for the server tests. This can be enabled by setting the
DEBUG_EXTERNAL environment variable.
The motivation for this is to allow debugging of the server itself
when investigating a test failure. Instructions for how to do this are
added to the README.md file in the tests directory.
Use the RPC_DEBUG environment variable to enable debug messages.
Add a helper macro LOG_DBG() which does an early
check of the env var before calling GGML_LOG_DEBUG().
Make sure we log a debug message for every server function.
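A minimal sketch of the pattern, assuming a one-time cached getenv() check (the actual helper in the RPC backend may be structured differently):

```cpp
#include <cstdlib>

// Cache the env check so getenv() is only called once per process.
static bool rpc_debug_enabled() {
    static const bool enabled = std::getenv("RPC_DEBUG") != nullptr;
    return enabled;
}

// Early-out before any formatting work: GGML_LOG_DEBUG (from ggml.h) is
// only reached when RPC_DEBUG is set in the environment.
#define LOG_DBG(...) \
    do { if (rpc_debug_enabled()) GGML_LOG_DEBUG(__VA_ARGS__); } while (0)
```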
* run the x64 ci on regular machines
* set up the same thing for arm
fix test-quantize-perf just like #12306
* try to disable sve
* add another sve run
* ggml : make gallocr respect the backend's max buffer size
* if the graph requires more memory than can fit into a single allocation, split it into multiple backend buffers (see the sketch after this list)
* vulkan: report the actual max allocation size in buffer type interface
* fix missing newline, apple-clang warning
* track size of individual chunks in ggml_dyn_tallocr and raise max chunks.
revert to using suballocation_block_size as the max chunk size for vulkan.
* track (chunk, offset) pairs instead of "global" offsets through gallocr.
* simpler, don't need loops to map between local/global offsets
* touches more code
* fix dyn_tallocr_max_size and initialization
* fix memory leak when buffers are reused due to the same buffer type appearing multiple times
* make vbuffer allocation follow the same logic as backend_buffer did before
* continue to use leftover unallocated space of previous chunks after a new one has been created
* treat free blocks of each chunk as separate list
* they're still allocated together, but start/end of each chunk is tracked, and allocate/free iterate over sub-ranges
* exhaust freed blocks of all chunks before considering their last blocks with unallocated space
* start with 0 chunks/blocks and create chunks as needed
* allow the last chunk to grow beyond max size
* refactor: move adding new free block and new chunk into separate functions
* allocate chunks individually with a separate free-blocks list for each one
* needs a bit more memory/allocations/indirections, but code is simpler
* fix warnings (missing static) & debug checks
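A simplified sketch of the resulting layout (field and function names here are hypothetical; the real ggml_dyn_tallocr is more involved):

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Sketch: each chunk keeps its own free-block list, and allocations are
// addressed as (chunk index, offset) pairs instead of one global offset.
struct free_block {
    size_t offset; // start of the free range within its chunk
    size_t size;   // length of the free range
};

struct chunk {
    size_t                  size;        // bytes reserved for this chunk
    std::vector<free_block> free_blocks; // per-chunk free list
};

struct dyn_tallocr_sketch {
    size_t             max_chunk_size; // backend's max single allocation
    std::vector<chunk> chunks;         // starts empty, created on demand

    std::pair<size_t, size_t> alloc(size_t size) {
        // first exhaust the freed blocks of every existing chunk
        for (size_t c = 0; c < chunks.size(); ++c) {
            for (auto & fb : chunks[c].free_blocks) {
                if (fb.size >= size) {
                    const size_t off = fb.offset;
                    fb.offset += size;
                    fb.size   -= size;
                    return { c, off };
                }
            }
        }
        // no fit anywhere: create a new chunk; a single oversized tensor
        // may force the last chunk beyond max_chunk_size
        chunk ck;
        ck.size = std::max(size, max_chunk_size);
        ck.free_blocks.push_back({ size, ck.size - size });
        chunks.push_back(std::move(ck));
        return { chunks.size() - 1, 0 };
    }
};
```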
* Add power management utilities to NPU device context and update DCVS settings
* Update DCVS settings in power_utils to use v3 API and enhance power management
* wip
* Enhance dequantization functions by adding load_dequant_table support and updating signatures for improved performance
* use lut
* wip
* fix test failure
* wip
* Refactor load_qual_block_generic to improve block handling and optimize vector operations
* Enhance load_dual_block_generic and load_qual_block_generic to accept a mask parameter for improved block handling
* Refactor flash_attn_impl to optimize mask l2 prefetch
* wip
* wip
* wip
* wip
* add log
* link against shared libraries instead of static ones
* fix swiglu
* wip
* refactor expf_fix to handle overflow for different data types (a scalar sketch of the guard follows this list)
* enhance is_glu_op_supported to validate shapes for multiple sources
* wip
* refactor logging macros to use hexagon namespace and improve formatting
* fix printf format error
* wip
* refactor: update static_assert messages for block size validation and add HVX_VectorPred_x3 type alias
* rename
* feat: enhance fa with mask
* wip
* wip
* refactor: replace instances of Q6_V_vzero() with kZeroV for consistency
* wip
* wip
* wip
* fix: improve address alignment check in HVX_Vector handling
* refactor: streamline vector dot product implementations for improved readability
* refactor: q4k add hvx intrinsic impl
* refactor: enhance dequantize_row_q4_K for clarity and performance
* refactor: optimize scale mask usage in dequantization functions for improved performance
* refactor: optimize dequantize_row_q4_K for intrinsic usage and performance improvements
* refactor: move GLU operation implementation into separated file
* sync after swiglu
* wip
* wip
* wip
* feat: increase prc main thread stack size
* fix: replace hardcoded stack size with NPU_THREAD_STACK_SIZE constant
* wip
* feat: add optimized vector operations for exponential and division with overflow handling
* wip
* feat: refactor exponential function to handle overflow and underflow with improved logic
* wip
* wip
* feat: add vector loading and scaling functions for improved performance in block processing
* wip
* feat: optimize block loading by refactoring scale index handling for improved performance
* use Q6_Vb_vlut32_VbVbR_nomatch instead
* feat: enhance scale loading by adding static assertion and restructuring block handling
* wip
* feat: refactor vec_dot_product_mixed_impl for improved clarity and performance
* wip
* feat: simplify vector loading functions and improve alignment handling
* wip
* feat: enhance scale loading mask with quantization block size validation
* wip
* feat: implement make_scale_load_mask function and refactor vector handling in vec_ops
* feat: enhance load_dual_block_generic to include scale indices for improved vector loading
* revert q8 dequant
* wip
* feat: optimize dequantization functions by removing unnecessary masking and updating lookup methods
* wip
* wip
* add qurt_mutex
* Add DMA transfer class and integrate into thread pool
* Enhance DMA transfer functionality by adding support for multiple descriptors and initiating transfers in parallel
* fix dma crash
* fix failed unit tests
* wip
* use alignas
* Improve DMA transfer error handling and update descriptor completion check
* Fix VTCM cache size calculation in element-wise operations
* Add cache clean operations before DMA transfers in element-wise operations
* reduce cache clean operations
* Refactor DMA transfer functions to support 1D operations and rename for clarity
* Enhance DMA transfer functionality by adding 2D submission support and improving descriptor initialization
* Update read buffer method to support forced invalidation and remove unnecessary invalidation calls in element-wise operations
* wip
* Improve DMA transfer handling in mul_mat_gemv_impl by replacing memcpy with initiate_dma_row_transfer and adding wait_for_dma logic
* fix 2d dma
* feat: add DMA plane cache
* rename
* wip
* use memcpy for debug
* fix cache plane calc
* refactor: remove debug logging from mul_mat_impl and optimize cache handling
* rename
* fix 2d dma type
* refactor: enhance DMA transfer handling in mul_mat_gemv_impl and wait functions
* refactor: optimize DMA transfer handling in mul_mat_gemv_impl and wait functions
* wip
* wip
* move op impl into sub dir
* add log
* fix: correct pointer usage in mul_mat_gemv_impl for next plane access
* fix: improve DMA transfer error handling in mul_mat_impl and mul_mat_gemv_impl
* fix: fix crash by using the entire row size in bytes
* wip
* wip
* fix: prevent parallelization for scalar src1 in is_mul_mat_supported
* fix: add dimension checks for 2D DMA transfers and fallback to 1D if necessary
* wip
* fix: enable thread barrier for matrix multiplication operations
* feat: add synchronization checks for tensor operations and update related functions
* wip
* fix: remove invalidation flag from get_read_buffer calls in element-wise and matrix multiplication operations
* Revert "fix: remove invalidation flag from get_read_buffer calls in element-wise and matrix multiplication operations"
This reverts commit af3441e67e706b2e5122369dc160353796867dd3.
* wip
* wip
* add comment
* wip
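For the expf overflow handling mentioned in the expf_fix item above, a plain scalar sketch of the guard; the name expf_guarded is hypothetical and the HVX version operates on whole vectors rather than scalars:

```cpp
#include <cfloat>
#include <cmath>

// Scalar sketch: clamp inputs to the range where expf stays representable
// instead of letting the result hit +inf or flush to zero unpredictably.
// logf(FLT_MAX) is ~88.72 and logf(FLT_MIN) is ~-87.34.
static inline float expf_guarded(float x) {
    if (x >  88.0f) return FLT_MAX; // would overflow to +inf
    if (x < -87.0f) return 0.0f;    // underflows to zero anyway
    return expf(x);
}
```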
This commit adds a leading slash to the paths of root-level files
in the CODEOWNERS file.
The motivation for this is that, without a leading slash, these patterns
might also match files in subdirectories and override the other/additional
owners defined for them.
Refs: https://github.com/ggml-org/llama.cpp/pull/16209#issuecomment-3326434274
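For illustration (hypothetical pattern and owner handle):

```
# unanchored: also matches e.g. docs/Makefile, overriding its owners
Makefile   @root-owner

# anchored to the repository root only
/Makefile  @root-owner
```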
This is a configuration of the hparams in the GraniteHybrid architecture
that devolves to the Granite (or GraniteMoe) architecture (i.e. Granite 3.x).
It may be used for some models in the Granite 4 family with the
GraniteHybrid architecture acting as a superset arch. Rather than support
it directly in the C++ graph, we simply coerce the architecture flag back
to the correct "granite" or "granitemoe" architecture (a sketch of the
idea follows below).
Branch: gabe-l-hart/GraniteNonHybridConversion
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
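A hedged sketch of the coercion idea (hypothetical helper and condition; the actual change lives in the conversion logic rather than the C++ graph):

```cpp
#include <string>

// Sketch: when a GraniteHybrid config contains no hybrid (e.g. SSM) layers,
// map the arch name back to the plain Granite variants.
static std::string coerce_granite_arch(const std::string & arch,
                                       bool has_hybrid_layers, bool is_moe) {
    if (arch == "granitehybrid" && !has_hybrid_layers) {
        return is_moe ? "granitemoe" : "granite";
    }
    return arch;
}
```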
Disable 'performance-enum-size' checking:
Enum 'llama_token_type' uses a larger base type ('unsigned int', size: 4 bytes)
than necessary for its value set, consider using 'std::uint8_t' (1 byte) as the
base type to reduce its size.
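For context, what the check flags (illustrative enums, not the real llama_token_type definition):

```cpp
#include <cstdint>

// performance-enum-size: every value fits in one byte, but the underlying
// type defaults to a 4-byte integer on typical targets...
enum token_type_wide { TYPE_A = 0, TYPE_B = 1 };                  // sizeof == 4

// ...whereas an explicit base type shrinks the enum to a single byte
enum token_type_narrow : std::uint8_t { TYPE_X = 0, TYPE_Y = 1 }; // sizeof == 1
```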
* implement set_rows with i32 index (a scalar reference sketch follows at the end of this message)
* template fix
* test quantized path
warnings--
* Apply suggestions from code review
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* forgotten name change
* deduplicate cuda/sycl and test-fix
* indent++
* vulkan: support set_rows with i32 index type (#16162)
* disable i32 index for webgpu for now
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
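A scalar reference of the op's semantics with a 32-bit index type (sketch only; signature and name are hypothetical):

```cpp
#include <cstdint>
#include <cstring>

// Copy each source row r into destination row idx[r]. The index tensor is
// int32 here; the op previously assumed a 64-bit index type.
static void set_rows_ref_i32(float * dst, const float * src,
                             const int32_t * idx,
                             int64_t n_rows, int64_t n_cols) {
    for (int64_t r = 0; r < n_rows; ++r) {
        std::memcpy(dst + (int64_t) idx[r] * n_cols,
                    src + r * n_cols,
                    n_cols * sizeof(float));
    }
}
```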