llama.cpp/gguf-py/gguf
itigges22 6075918309 feat: add MTP (Multi-Token Prediction) support for dense Qwen 3.5
Add native MTP support for the dense Qwen 3.5 architecture (0.8B, 2B, 4B, 9B, 27B).

What works:
- MTP graph builder for dense qwen35 (build_mtp_head in qwen35.cpp)
- MTP tensor loading and registration for QWEN35 arch
- GGUF converter handles MTP tensors (mtp.fc, mtp.layers, mtp.norm, etc.)
- Public API: llama_get_mtp_logits(), llama_model_n_mtp_layers()
- Server auto-detects MTP from GGUF metadata
- Speculative state machine for MTP draft token generation
- PR #20075 applied: recurrent state checkpoint/restore for hybrid models
- M-RoPE position check relaxed for speculative re-evaluation
- Windows os.kill fix for gateway process detection

What needs work:
- The speculative verify loop conflicts with tool-calling requests (the server returns a 400 error)
- The recommended fix: bypass the speculative framework entirely and
  implement MTP acceptance directly in the server generation loop
  (no seq_rm/rollback needed since MTP drafts are produced in-graph)
- MTP attention is skipped (projection + FFN path only) due to an
  inp_out_ids token count mismatch
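The acceptance scheme proposed above can be simulated in a few lines: the main forward pass verifies the draft tokens the MTP head produced on the previous step, and the longest agreeing prefix is committed. The function name and token values here are toy stand-ins, not server code — the point is only that no seq_rm/rollback is needed, since rejected drafts were never committed to the KV cache:

```python
# Toy simulation of in-graph MTP draft acceptance: accept the longest
# prefix of the draft that the main model's own predictions agree with.

def accept_drafts(draft: list[int], verified: list[int]) -> int:
    """Count how many leading draft tokens match the verified tokens."""
    n = 0
    for d, v in zip(draft, verified):
        if d != v:
            break
        n += 1
    return n

# Example: MTP draft from the previous step vs. the main model's output.
draft    = [42, 7, 99]   # tokens proposed by the MTP head
verified = [42, 7, 13]   # tokens the main model actually predicts
n_accept = accept_drafts(draft, verified)
print(n_accept)  # prints 2 -> commit draft[:2] plus verified[2] in one step
```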

Tested on: RTX 5060 8GB, Windows 11, CUDA 13.2
Model: Qwen3.5-9B with MTP tensors (Q4_K_M quantization)
Base: llama.cpp b8388
2026-03-17 16:49:22 -04:00
scripts ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
__init__.py convert-*.py: GGUF Naming Convention Refactor and Metadata Override Refactor (#7499) 2024-07-18 20:40:15 +10:00
constants.py feat: add MTP (Multi-Token Prediction) support for dense Qwen 3.5 2026-03-17 16:49:22 -04:00
gguf.py gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) 2023-11-11 08:04:50 +03:00
gguf_reader.py ggml/gguf : prevent integer overflows (#19856) 2026-02-24 20:17:11 +02:00
gguf_writer.py ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
lazy.py convert : handle compressed-tensors quant method (#17069) 2025-11-09 09:45:50 -05:00
metadata.py chore : correct typos [no ci] (#20041) 2026-03-05 08:50:21 +01:00
py.typed convert : various script cleanups/fixes + merges and special token handling (#2842) 2023-08-30 11:25:50 +03:00
quants.py ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
tensor_mapping.py llama : add support for Nemotron 3 Super (#20411) 2026-03-11 19:27:53 +01:00
utility.py gguf-py : do not align the data start offset (#18291) 2025-12-22 20:25:16 +01:00
vocab.py convert : support latest mistral-common (fix conversion with --mistral-format) (#17712) 2025-12-03 21:15:04 +01:00