llama.cpp/common
pestopoppa 49162df87a feat: add --moe-n-expert flag for MoE expert count override (Hard Mask)
Add the ability to reduce the number of active experts in MoE models at runtime,
providing a significant speedup with minimal quality loss when running with 50%
of the default expert count.

Implementation:
- Add moe_n_expert_override parameter to llama_context_params
- Add --moe-n-expert CLI flag to override n_expert_used
- Implement a "Hard Mask" in build_moe_ffn() that slices the expert tensors
- Use ggml_view_2d/3d + ggml_cont so that computation is actually reduced

Benchmark results (AOCL BLIS 5.0, AMD EPYC 9655):
- Qwen3-Coder-480B-A35B: 2.5 → 3.7 t/s (48% speedup)
- GLM-4.6-355B-A32B: 2.2 → 3.0 t/s (36% speedup)
- Qwen3-Coder-30B-A3B: 26.6 → 33.6 t/s (26% speedup)
- Qwen3-VL-30B-A3B: 32.2 → 38.9 t/s (21% speedup)

Quality: excellent at 50% of the default experts, degraded at 25%, gibberish at 12.5%

Usage: llama-cli -m model.gguf --moe-n-expert 4 -p "prompt"

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-14 13:32:50 +01:00
| File | Last commit | Date |
|---|---|---|
| CMakeLists.txt | server: add presets (config) when using multiple models (#17859) | 2025-12-10 22:18:21 +01:00 |
| arg.cpp | feat: add --moe-n-expert flag for MoE expert count override (Hard Mask) | 2025-12-14 13:32:50 +01:00 |
| arg.h | server: add presets (config) when using multiple models (#17859) | 2025-12-10 22:18:21 +01:00 |
| base64.hpp | llava : expose as a shared library for downstream projects (#3613) | 2023-11-07 00:36:23 +03:00 |
| build-info.cpp.in | cmake: Add ability to pass in LLAMA_BUILD_NUMBER/COMMIT (#14167) | 2025-06-13 10:38:52 +02:00 |
| chat-parser-xml-toolcall.cpp | Fix Kimi-K2 tool-call parsing issues (#17376) | 2025-12-08 14:32:04 +01:00 |
| chat-parser-xml-toolcall.h | Fix Kimi-K2 tool-call parsing issues (#17376) | 2025-12-08 14:32:04 +01:00 |
| chat-parser.cpp | Fix Kimi-K2 tool-call parsing issues (#17376) | 2025-12-08 14:32:04 +01:00 |
| chat-parser.h | common : Generalized XML-style tool-call parsing with streaming support (GLM 4.5/4.6 + MiniMax M2 + SeedOSS + Kimi-K2 + Qwen3-Coder + Apriel-1.5 + Xiaomi-MiMo) (#16932) | 2025-11-18 18:54:15 +01:00 |
| chat-peg-parser.cpp | common : introduce composable PEG parser combinators for chat parsing (#17136) | 2025-12-03 12:45:32 +02:00 |
| chat-peg-parser.h | common : introduce composable PEG parser combinators for chat parsing (#17136) | 2025-12-03 12:45:32 +02:00 |
| chat.cpp | common : add parser for ministral/mistral large 3/devstral 2 (#17713) | 2025-12-09 17:31:04 -06:00 |
| chat.h | chat : reserve memory in compute_diffs and improve naming (#17729) | 2025-12-03 17:22:10 +02:00 |
| common.cpp | feat: add --moe-n-expert flag for MoE expert count override (Hard Mask) | 2025-12-14 13:32:50 +01:00 |
| common.h | feat: add --moe-n-expert flag for MoE expert count override (Hard Mask) | 2025-12-14 13:32:50 +01:00 |
| console.cpp | cli: new CLI experience (#17824) | 2025-12-10 15:28:59 +01:00 |
| console.h | cli: new CLI experience (#17824) | 2025-12-10 15:28:59 +01:00 |
| download.cpp | common : add minimalist multi-thread progress bar (#17602) | 2025-12-12 12:44:35 +01:00 |
| download.h | server: introduce API for serving / loading / unloading multiple models (#17470) | 2025-12-01 19:41:04 +01:00 |
| http.h | common: introduce http.h for httplib-based client (#16373) | 2025-10-01 20:22:18 +03:00 |
| json-partial.cpp | common : Generalized XML-style tool-call parsing with streaming support (GLM 4.5/4.6 + MiniMax M2 + SeedOSS + Kimi-K2 + Qwen3-Coder + Apriel-1.5 + Xiaomi-MiMo) (#16932) | 2025-11-18 18:54:15 +01:00 |
| json-partial.h | sync : vendor (#13901) | 2025-05-30 16:25:45 +03:00 |
| json-schema-to-grammar.cpp | Server: Change Invalid Schema from Server Error (500) to User Error (400) (#17572) | 2025-12-02 17:33:50 +01:00 |
| json-schema-to-grammar.h | common : Generalized XML-style tool-call parsing with streaming support (GLM 4.5/4.6 + MiniMax M2 + SeedOSS + Kimi-K2 + Qwen3-Coder + Apriel-1.5 + Xiaomi-MiMo) (#16932) | 2025-11-18 18:54:15 +01:00 |
| llguidance.cpp | llguidance : set tokenizer slices to default (#13424) | 2025-05-10 17:19:52 +02:00 |
| log.cpp | cli: new CLI experience (#17824) | 2025-12-10 15:28:59 +01:00 |
| log.h | cli: new CLI experience (#17824) | 2025-12-10 15:28:59 +01:00 |
| ngram-cache.cpp | ggml : portability fixes for VS 2017 (#12150) | 2025-03-04 18:53:26 +02:00 |
| ngram-cache.h | llama : use LLAMA_TOKEN_NULL (#11062) | 2025-01-06 10:52:15 +02:00 |
| peg-parser.cpp | common : introduce composable PEG parser combinators for chat parsing (#17136) | 2025-12-03 12:45:32 +02:00 |
| peg-parser.h | common : introduce composable PEG parser combinators for chat parsing (#17136) | 2025-12-03 12:45:32 +02:00 |
| preset.cpp | server: add presets (config) when using multiple models (#17859) | 2025-12-10 22:18:21 +01:00 |
| preset.h | server: add presets (config) when using multiple models (#17859) | 2025-12-10 22:18:21 +01:00 |
| regex-partial.cpp | `common`: add partial regex support (#12808) | 2025-05-14 19:50:57 +01:00 |
| regex-partial.h | `common`: add partial regex support (#12808) | 2025-05-14 19:50:57 +01:00 |
| sampling.cpp | common : more accurate sampling timing (#17382) | 2025-11-20 13:40:10 +02:00 |
| sampling.h | sampling : optimize samplers by reusing bucket sort (#15665) | 2025-08-31 20:41:02 +03:00 |
| speculative.cpp | sampling : optimize samplers by reusing bucket sort (#15665) | 2025-08-31 20:41:02 +03:00 |
| speculative.h | server : implement universal assisted decoding (#12635) | 2025-07-31 14:25:23 +02:00 |
| unicode.cpp | common : introduce composable PEG parser combinators for chat parsing (#17136) | 2025-12-03 12:45:32 +02:00 |
| unicode.h | common : introduce composable PEG parser combinators for chat parsing (#17136) | 2025-12-03 12:45:32 +02:00 |