llama.cpp/gguf-py/gguf
Latest commit: d34ff7eb5b by Xuan-Son Nguyen (2026-03-17 00:31:14 +01:00)
model: mistral small 4 support (#20649)

Squashed commit message:
* model: mistral small 4 support
* fix test
* fix test (2)
* Apply suggestions from code review
* Update convert_hf_to_gguf.py
* change newline

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
| Name | Last commit message | Date |
|------|---------------------|------|
| scripts | ggml : add NVFP4 quantization type support (#19769) | 2026-03-11 21:02:54 +01:00 |
| __init__.py | convert-*.py: GGUF Naming Convention Refactor and Metadata Override Refactor (#7499) | 2024-07-18 20:40:15 +10:00 |
| constants.py | model: mistral small 4 support (#20649) | 2026-03-17 00:31:14 +01:00 |
| gguf.py | gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) | 2023-11-11 08:04:50 +03:00 |
| gguf_reader.py | ggml/gguf : prevent integer overflows (#19856) | 2026-02-24 20:17:11 +02:00 |
| gguf_writer.py | ggml : add NVFP4 quantization type support (#19769) | 2026-03-11 21:02:54 +01:00 |
| lazy.py | convert : handle compressed-tensors quant method (#17069) | 2025-11-09 09:45:50 -05:00 |
| metadata.py | chore : correct typos [no ci] (#20041) | 2026-03-05 08:50:21 +01:00 |
| py.typed | convert : various script cleanups/fixes + merges and special token handling (#2842) | 2023-08-30 11:25:50 +03:00 |
| quants.py | ggml : add NVFP4 quantization type support (#19769) | 2026-03-11 21:02:54 +01:00 |
| tensor_mapping.py | llama : add support for Nemotron 3 Super (#20411) | 2026-03-11 19:27:53 +01:00 |
| utility.py | gguf-py : do not align the data start offset (#18291) | 2025-12-22 20:25:16 +01:00 |
| vocab.py | convert : support latest mistral-common (fix conversion with --mistral-format) (#17712) | 2025-12-03 21:15:04 +01:00 |
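The modules above (gguf_reader.py, gguf_writer.py, quants.py, etc.) implement reading and writing of the binary GGUF container that llama.cpp models are stored in. As a rough illustration of the fixed header that a reader like gguf_reader.py parses first, here is a minimal stdlib-only sketch (an assumption-level example based on the published GGUF layout, not the package's actual API; `parse_gguf_header` is a hypothetical helper name):

```python
import struct

GGUF_MAGIC = b"GGUF"  # 4-byte magic at offset 0 of every GGUF file

def parse_gguf_header(data: bytes) -> tuple[int, int, int]:
    """Parse the fixed GGUF header: magic, then little-endian
    version (uint32), tensor count (uint64), metadata KV count (uint64)."""
    if data[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", data, 4)
    return version, n_tensors, n_kv

# Build a synthetic 24-byte header: version 3, 2 tensors, 5 metadata KV pairs.
header = GGUF_MAGIC + struct.pack("<IQQ", 3, 2, 5)
print(parse_gguf_header(header))  # → (3, 2, 5)
```

After this header, the real format continues with the metadata key/value pairs and tensor descriptors, which is the part the overflow-hardening commit on gguf_reader.py (#19856) guards when reading untrusted files.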