llama.cpp/gguf-py/gguf
DAN™ fdb17643d3
model : add support for Phi4ForCausalLMV (#20168)
* Add support for Phi4ForCausalLMV.

* Fix Phi-4 vision parity by correcting the SigLIP2 patch-kernel export layout and matching HF NaFlex resize behavior in mtmd.

* Rename constants + fix tokenizer label.

* Clean-ups.

* Fix GGUF export.

* Set tokenizer.ggml.pre explicitly.

* Use the default vocab name rather than forcing it.

* Clean-ups.

* Fix indent.

* Fix subscriptable error.

* Remove overcomplicated code path.

* Clean-ups.

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2026-03-12 00:25:54 +01:00
scripts ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
__init__.py
constants.py model : add support for Phi4ForCausalLMV (#20168) 2026-03-12 00:25:54 +01:00
gguf.py
gguf_reader.py ggml/gguf : prevent integer overflows (#19856) 2026-02-24 20:17:11 +02:00
gguf_writer.py ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
lazy.py convert : handle compressed-tensors quant method (#17069) 2025-11-09 09:45:50 -05:00
metadata.py chore : correct typos [no ci] (#20041) 2026-03-05 08:50:21 +01:00
py.typed
quants.py ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
tensor_mapping.py llama : add support for Nemotron 3 Super (#20411) 2026-03-11 19:27:53 +01:00
utility.py gguf-py : do not align the data start offset (#18291) 2025-12-22 20:25:16 +01:00
vocab.py convert : support latest mistral-common (fix conversion with --mistral-format) (#17712) 2025-12-03 21:15:04 +01:00