llama.cpp/gguf-py/gguf
SmartestWashingMachine 424c579455
convert : support latest mistral-common (fix conversion with --mistral-format) (#17712)
* Fix `convert_hf_to_gguf.py` failing with `--mistral-format` on newer mistral-common versions.

* Use `get_one_valid_tokenizer_file` from mistral-common when available, falling back to the old logic otherwise.

* Pass the file name instead of the file path to `get_one_valid_tokenizer_file`.

* Fix `--mistral-format` failing for tokenizer files located in subdirectories.

* Move the `get_one_valid_tokenizer_file` import to module level to avoid a nested try/except.
2025-12-03 21:15:04 +01:00
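The optional-import-with-fallback pattern the commit describes can be sketched as follows. The helper name `get_one_valid_tokenizer_file` comes from mistral-common; the import path, its signature (a list of candidate file names), and the fallback's preference order are assumptions for illustration, not the actual implementation.

```python
# Sketch: prefer the mistral-common helper when the installed version
# provides it; otherwise fall back to local selection logic.
# Doing the import once at module level avoids a nested try/except
# at every call site (as the commit message notes).
try:
    # Assumed import path; newer mistral-common versions expose this helper.
    from mistral_common.tokens.tokenizers.utils import get_one_valid_tokenizer_file  # type: ignore
except ImportError:
    def get_one_valid_tokenizer_file(tokenizer_files: list[str]) -> str:
        # Fallback for older mistral-common versions: pick the first
        # known tokenizer file name among the candidates. The candidates
        # are bare file names, not paths (per the commit's third bullet).
        # The preference order below is hypothetical.
        preferred = ["tekken.json", "tokenizer.model.v7", "tokenizer.model"]
        for name in preferred:
            if name in tokenizer_files:
                return name
        raise ValueError("no valid tokenizer file among: %r" % (tokenizer_files,))
```

Passing file names rather than full paths keeps the helper working for tokenizers stored in subdirectories: the caller resolves the directory, then hands only the base names to the selector.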
scripts
__init__.py
constants.py
gguf.py
gguf_reader.py
gguf_writer.py
lazy.py
metadata.py
py.typed
quants.py
tensor_mapping.py
utility.py
vocab.py