llama.cpp/gguf-py/gguf
Latest commit: 802cef44bf by compilade (2025-11-09 09:49:40 -05:00)

convert : parse safetensors directly (#15667)

* convert : parse safetensors directly

* gguf-py : order safetensors tensors by name

  Applies to both local and remote safetensors custom parsing.
  This matches the behavior of the official safetensors implementation.

* convert : rename from_safetensors_meta to from_local_tensor

  For consistency with from_remote_tensor.

* convert : fix no-lazy dtypes from direct safetensors
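The top commit has the converter read safetensors files without going through the safetensors library. The on-disk format this relies on is simple: an 8-byte little-endian unsigned integer giving the header length, a JSON header mapping tensor names to dtype, shape, and byte offsets (plus an optional `__metadata__` entry), then the raw tensor data. A minimal sketch of that parsing, with tensors ordered by name as the second bullet describes — `read_safetensors_header` and the sample blob below are illustrative, not the actual gguf-py code:

```python
import json
import struct

def read_safetensors_header(data: bytes) -> dict:
    """Parse the JSON header of a safetensors blob.

    Layout: 8-byte little-endian header length, then a JSON object
    mapping tensor names to {dtype, shape, data_offsets}.
    """
    (header_len,) = struct.unpack_from("<Q", data, 0)
    header = json.loads(data[8:8 + header_len])
    header.pop("__metadata__", None)  # optional free-form metadata, not a tensor
    # Order tensors by name, matching the official safetensors implementation
    return {name: header[name] for name in sorted(header)}

# Build a tiny in-memory safetensors blob with two f32 scalar tensors,
# deliberately listed out of name order in the header
meta = {
    "b.weight": {"dtype": "F32", "shape": [1], "data_offsets": [4, 8]},
    "a.weight": {"dtype": "F32", "shape": [1], "data_offsets": [0, 4]},
}
header_bytes = json.dumps(meta).encode("utf-8")
blob = struct.pack("<Q", len(header_bytes)) + header_bytes + b"\x00" * 8

tensors = read_safetensors_header(blob)
# iteration order is now name-sorted: a.weight before b.weight
```

Because the header carries explicit byte offsets into the data section, a converter can slice out individual tensors lazily instead of materializing the whole file, which is what makes both the local and remote parsing paths mentioned above practical.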
File                Last commit (date)
scripts             gguf-py : add support for endian conversion of BF16 data (#16594) (2025-10-15 22:43:08 +02:00)
__init__.py         convert-*.py: GGUF Naming Convention Refactor and Metadata Override Refactor (#7499) (2024-07-18 20:40:15 +10:00)
constants.py        model : add openPangu-Embedded (#16941) (2025-11-05 10:28:58 +01:00)
gguf.py             gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) (2023-11-11 08:04:50 +03:00)
gguf_reader.py      gguf-py : display the invalid gguf type (#13687) (2025-05-21 16:33:54 +02:00)
gguf_writer.py      model: add support for qwen3vl series (#16780) (2025-10-30 16:19:14 +01:00)
lazy.py             convert : handle compressed-tensors quant method (#17069) (2025-11-09 09:45:50 -05:00)
metadata.py         ggml : model card yaml tab->2xspace (#14819) (2025-07-22 19:29:43 +03:00)
py.typed            convert : various script cleanups/fixes + merges and special token handling (#2842) (2023-08-30 11:25:50 +03:00)
quants.py           gguf-py : add Numpy MXFP4 de/quantization support (#15111) (2025-08-08 17:48:26 -04:00)
tensor_mapping.py   model: add Janus Pro for image understanding (#16906) (2025-11-02 22:08:04 +01:00)
utility.py          convert : parse safetensors directly (#15667) (2025-11-09 09:49:40 -05:00)
vocab.py            convert : Make mistral-common dependency optional (#16738) (2025-10-23 15:54:46 +02:00)