The MoE models have a mul_mat_vec with very small m (32, 64, 128) right before
the topk_moe selection. Running multiple rows per workgroup doesn't utilize the
SMs well. I think even for larger m, f32 is so bandwidth-limited that running
multiple rows doesn't help.
* Fix shader to support 2D workgroup mapping to a single subgroup
* Set required_subgroup_size
topk_moe shader requires static WARP_SIZE and actual subgroup size to match
* vulkan: Reduce temporary memory usage for TOP_K
- Compute row size for the temp buffer based on the output of the first pass.
- Update shader addressing math to use the output row size
- Pass the output row size as "ncols_output"; what used to be "ncols_output" is now "k"
For the common case of K=40 and src0=(200000,1,1,1), this reduces the temporary buffer
from about 3.2MB to 500KB.
* vulkan: fix top_k bug when there are ties in the input
I noticed by inspection a bug in the Vulkan top_k shader: if the least value
in the top_k appears multiple times, we could end up writing out those extra
copies rather than some larger values (when the larger values are on
higher-numbered threads).
I rewrote the test verification to handle this case, where the final index set
is not necessarily the same.
* Update tests/test-backend-ops.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Add nosubs|optimize flags to std::regex constructors to prevent
catastrophic backtracking when processing prompts with repeated
identical characters (e.g., 'A' * 10000).
The nosubs flag disables capture-group (sub-expression) matching, significantly
reducing memory usage and backtracking on uniform token sequences.
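For reference, these are syntax-option flags passed to the std::regex constructor; a minimal sketch (the pattern below is illustrative, not the actual tokenizer regex):

```cpp
#include <regex>
#include <string>

int main() {
    // nosubs: marked sub-expressions are treated as non-capturing, so the
    // engine keeps no per-group match state; optimize: favor matching speed
    // over construction speed.
    const std::regex re(
        "[A-Za-z]+|[0-9]+|\\s+",
        std::regex::nosubs | std::regex::optimize);

    const std::string prompt(10000, 'A');
    // With the flags above this scan stays cheap even on long uniform input.
    return std::regex_search(prompt, re) ? 0 : 1;
}
```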
* enabled wmma instructions for most quantizations other than q2_k
* fixed the last q2_k test case failure
* address comments: fix out-of-bounds write for RDNA4, add comments after #endif
* clean up rebase: fix ne error in half2
* fix the EditorConfig CI
* Add support for CUMSUM and TRI for CUDA.
* Minor optimizations.
* Correct warp_prefix_inclusive_sum in float2 variant to return float2 (a generic sketch of the warp prefix sum follows this commit message)
* Optimize TRI
* Whitespace
* Fix strides.
* Implement double loop
* Whitespace
* Fix HIP compilation bugs
* Optimizations + big case performance tests
* Implement using CUB with fallback to custom kernel
* Remove error message.
* Fixes from code review
* Comment out CPU-unsupported F16/BF16 cases to fix CI
* Fine, you win :P
* Fix last cast, use NO_DEVICE_CODE and GGML_UNUSED_VARS
* Vary warp-size based on physical warp size
* Add GGML_UNUSED_VARS in tri as well
* Use constexpr and call prefix_inclusive with warp_size template param
* Update ggml/src/ggml-cuda/cumsum.cu
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* Apply suggestions from code review
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* Change to tid % warp_size
* Fix strides; hardcode mask; add ggml_lane_mask_t
* Missing renames, remove unused get_warp_mask(), explicit calls to ggml_cuda_info()
* Too hasty...
---------
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
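As context for the prefix-sum changes above, a warp-wide inclusive prefix sum with the warp size as a template parameter can be sketched as below. This is a generic illustration under assumed launch parameters (one warp per row), not the actual cumsum.cu kernel, which also handles float2, uses CUB when available, and introduces ggml_lane_mask_t to cover 64-lane warps:

```cuda
#include <cuda_runtime.h>

// Warp-wide inclusive prefix sum; warp_size is a compile-time parameter so the
// lane index can be computed as tid % warp_size. Sketch only: a 32-bit mask is
// hardcoded here, which is why wider (64-lane) warps need a wider mask type.
template <int warp_size>
__device__ float warp_prefix_inclusive_sum(float v) {
    const int lane = threadIdx.x % warp_size;
#pragma unroll
    for (int offset = 1; offset < warp_size; offset <<= 1) {
        const float other = __shfl_up_sync(0xffffffffu, v, offset, warp_size);
        if (lane >= offset) {
            v += other;
        }
    }
    return v;
}

// One warp scans one row, walking it in warp_size-wide chunks and carrying the
// running total forward (illustrative fallback path; CUB would replace this).
__global__ void cumsum_row_f32(const float * src, float * dst, const int ncols) {
    constexpr int warp_size = 32; // assumed; the real code varies this with the physical warp size
    const float * row_src = src + (size_t) blockIdx.x * ncols;
    float       * row_dst = dst + (size_t) blockIdx.x * ncols;

    float carry = 0.0f;
    for (int start = 0; start < ncols; start += warp_size) {
        const int   col = start + (int) threadIdx.x;
        const float x   = col < ncols ? row_src[col] : 0.0f; // all lanes join the shuffle
        const float sum = warp_prefix_inclusive_sum<warp_size>(x);
        if (col < ncols) {
            row_dst[col] = carry + sum;
        }
        carry += __shfl_sync(0xffffffffu, sum, warp_size - 1, warp_size);
    }
}
```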
* feat(wip): Port initial TRI impl from previous work
The kernel does not work and is not optimized, but the
code compiles and runs, so this will be the starting point
now that the core op has been merged.
Branch: ggml-cumsum-tri
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix: Remove argument for constant val override
This was added in the original draft, but later removed. With this, the
kernel now passes tests.
Branch: ggml-cumsum-tri
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Move the ttype conditional to templating to avoid conditional in kernel
Branch: ggml-cumsum-tri
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix: Type fixes
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* feat: Add softplus for metal
Branch: ggml-cumsum-tri
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Add EXPM1 for metal
Branch: ggml-cumsum-tri
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* feat: Add FILL for metal
Branch: ggml-cumsum-tri
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* refactor: Branchless version of tri using _ggml_vec_tri_cmp as a mask (a CUDA-style sketch of the branchless masking idea follows this commit message)
Branch: ggml-cumsum-tri
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix: Remove unused arguments
Branch: ggml-cumsum-tri
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* refactor: Use select instead of branch for softplus non-vec
Branch: ggml-cumsum-tri
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
---------
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
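To illustrate the branchless masking idea referenced above (sketched here in CUDA rather than Metal), the comparison result itself serves as the mask. The exact TRI semantics (which triangle, diagonal handling, constant fill vs copy) are parameterized in the real op; this only shows the pattern:

```cuda
#include <cuda_runtime.h>

// Branchless lower-triangular masking: build a 0/1 factor from the row/column
// comparison instead of branching per element. Sketch only; the real kernels
// parameterize the triangle type and the value written outside the triangle.
__global__ void tri_lower_f32(const float * src, float * dst, const int ncols, const int nrows) {
    const int col = blockIdx.x * blockDim.x + threadIdx.x;
    const int row = blockIdx.y;
    if (col >= ncols || row >= nrows) {
        return;
    }
    const size_t i    = (size_t) row * ncols + col;
    const float  keep = (float) (col <= row); // 1.0f on/below the diagonal, 0.0f above
    dst[i] = keep * src[i];
}
```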
This commit skips the model validation check when the user specifies the
--help option.
The motivation for this is that currently an error is thrown before --help
can be processed. Validation is now skipped if params.usage is set, allowing
the help text to be displayed without requiring --model.
Resolves: https://github.com/ggml-org/llama.cpp/issues/17754
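In rough terms the change amounts to an early return when only the usage text was requested; a minimal, self-contained sketch with a stand-in params struct (field names are illustrative, not the actual common_params diff):

```cpp
#include <stdexcept>
#include <string>

// Stand-in for the relevant fields of the parsed parameters (illustrative).
struct cli_params {
    bool        usage = false; // set when --help / -h is passed
    std::string model_path;
};

// Sketch of the fix: skip model validation entirely when only the usage text
// was requested, so --help no longer trips the missing-model error.
static void validate(const cli_params & params) {
    if (params.usage) {
        return;
    }
    if (params.model_path.empty()) {
        throw std::invalid_argument("error: --model is required");
    }
}
```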
* conversion: use existing local chat_template.jinja file if mistral-format model has one.
* fix --mistral-format mistakenly assuming some <=v7 chat template names are file paths and reading them.
* Update convert_hf_to_gguf.py - change from exists() to is_file()
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
The current approach has several drawbacks. Most notably, when
cross-compiling, invoking the compiler binary directly to query the
machine hardware can behave unexpectedly depending on the toolchain
wrapper (using COMPILER_TARGET, CFLAGS, etc).
As CMake is the official tool to build llama.cpp, I propose relying only
on it to get those variables (`CMAKE_SYSTEM_NAME` and
`CMAKE_SYSTEM_PROCESSOR`).
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
Previously, CMake was forcing `_WIN32_WINNT=0x0A00` for MinGW builds,
which caused "macro redefined" warnings with toolchains that already define the version.
This also removes the `GGML_WIN_VER` variable, as it is no longer needed.
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
* fix convert_hf_to_gguf.py failing with --mistral-format using later mistral-common versions.
* use get_one_valid_tokenizer_file from mistral-common if available and fallback to old logic otherwise.
* use file name instead of file path for get_one_valid_tokenizer_file.
* fix --mistral-format tokenizer file failing for tokenizers in subdirectories.
* move get_one_valid_tokenizer_file import to avoid nested try-except.
* webui: Fix pasteLongTextToFileLen=0 (disable conversion) being overridden
A pasteLongTextToFileLen of zero should disable the conversion, but it was
being overridden with 2500.
* Apply suggestions from code review
* Update webui build
* llama-server: add router multi-model tests (#17704)
Add 4 test cases for model router:
- test_router_unload_model: explicit model unloading
- test_router_models_max_evicts_lru: LRU eviction with --models-max
- test_router_no_models_autoload: --no-models-autoload flag behavior
- test_router_api_key_required: API key authentication
Tests use async model loading with polling and skip gracefully when there are
not enough models available for eviction testing.
utils.py changes:
- Add models_max, models_dir, no_models_autoload attributes to ServerProcess
- Handle JSONDecodeError for non-JSON error responses (fallback to text)
* llama-server: update test models to new HF repos
* add offline
* llama-server: fix router LRU eviction test and add preloading
Fix eviction test: load 2 models first, verify state, then load a 3rd
to trigger eviction. The previous logic loaded all 3 at once,
causing the first model to be evicted before verification could occur.
Add a module fixture to preload models via ServerPreset.load_all()
and mark test presets as offline to use cached models.
* llama-server: fix split model download on Windows
---------
Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>
Some toolchains do not support linking via pragmas such as:
#pragma comment(lib, "crypt32.lib")
so we need to add the library explicitly.
Signed-off-by: Adrien Gallouët <angt@huggingface.co>