llama.cpp/gguf-py/scripts
Latest commit: b1c922fec7 by teleprint-me — feat: Add a proto sketch for handling mode vocab metadata (2024-05-27 00:06:39 -04:00)
File                     Latest commit                                                    Date
__init__.py              convert : support models with multiple chat templates (#6588)    2024-04-18 14:49:01 +03:00
gguf-convert-endian.py   convert.py : add python logging instead of print() (#6511)       2024-05-03 22:36:41 +03:00
gguf-dump.py             convert-hf : save memory with lazy evaluation (#7075)            2024-05-08 18:16:38 -04:00
gguf-gen-pre.py          chore: Add prototyped CLI options                                2024-05-22 19:59:33 -04:00
gguf-new-metadata.py     gguf-py : fix and simplify quantized shape round-trip (#7483)    2024-05-25 11:11:48 +10:00
gguf-registry.py         feat: Add a proto sketch for handling mode vocab metadata        2024-05-27 00:06:39 -04:00
gguf-set-metadata.py     convert.py : add python logging instead of print() (#6511)       2024-05-03 22:36:41 +03:00
hub-model.py             feat: Add example script for downloading models                  2024-05-25 19:12:34 -04:00
hub-vocab.py             refactor: Add function for building and parsing CLI arguments    2024-05-25 14:41:13 -04:00