llama.cpp/gguf-py/gguf
DAN™ fdb17643d3
model : add support for Phi4ForCausalLMV (#20168)
* Add support for Phi4ForCausalLMV.

* Fix Phi-4 vision parity: correct the SigLIP2 patch-kernel export layout and match HF NaFlex resize behavior in mtmd.

* Rename constants + fix tokenizer label.

* Clean-ups.

* Fix GGUF export.

* Set tokenizer.ggml.pre explicitly.

* Use a default vocab name rather than forcing it.

* Clean-ups.

* Fix indent.

* Fix subscriptable error.

* Remove overcomplicated code path.

* Clean-ups.

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2026-03-12 00:25:54 +01:00
scripts ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
__init__.py convert-*.py: GGUF Naming Convention Refactor and Metadata Override Refactor (#7499) 2024-07-18 20:40:15 +10:00
constants.py model : add support for Phi4ForCausalLMV (#20168) 2026-03-12 00:25:54 +01:00
gguf.py gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) 2023-11-11 08:04:50 +03:00
gguf_reader.py ggml/gguf : prevent integer overflows (#19856) 2026-02-24 20:17:11 +02:00
gguf_writer.py ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
lazy.py convert : handle compressed-tensors quant method (#17069) 2025-11-09 09:45:50 -05:00
metadata.py chore : correct typos [no ci] (#20041) 2026-03-05 08:50:21 +01:00
py.typed convert : various script cleanups/fixes + merges and special token handling (#2842) 2023-08-30 11:25:50 +03:00
quants.py ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
tensor_mapping.py llama : add support for Nemotron 3 Super (#20411) 2026-03-11 19:27:53 +01:00
utility.py gguf-py : do not align the data start offset (#18291) 2025-12-22 20:25:16 +01:00
vocab.py convert : support latest mistral-common (fix conversion with --mistral-format) (#17712) 2025-12-03 21:15:04 +01:00
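The modules listed above implement reading and writing the GGUF container format (`gguf_reader.py`, `gguf_writer.py`, with quantization helpers in `quants.py` and key/value constants in `constants.py`). As rough orientation only, and not the library's actual API, here is a minimal sketch of the fixed-size file header those readers and writers handle, assuming GGUF v3 layout (little-endian magic, version, tensor count, metadata KV count):

```python
import struct

# GGUF v3 header layout (assumption for this sketch): 4-byte magic "GGUF",
# uint32 version, uint64 tensor count, uint64 metadata KV count, all
# little-endian. The real gguf-py reader/writer handle far more than this.
GGUF_MAGIC = b"GGUF"
GGUF_VERSION = 3

def pack_header(n_tensors: int, n_kv: int) -> bytes:
    """Serialize a bare GGUF header (no KV pairs or tensor data follow)."""
    return GGUF_MAGIC + struct.pack("<IQQ", GGUF_VERSION, n_tensors, n_kv)

def parse_header(data: bytes) -> tuple[int, int, int]:
    """Return (version, tensor_count, kv_count) from the start of a GGUF stream."""
    if data[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    return struct.unpack_from("<IQQ", data, 4)
```

A round trip illustrates the layout: `parse_header(pack_header(2, 5))` yields `(3, 2, 5)`, and the header itself is 24 bytes (4 + 4 + 8 + 8).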