llama.cpp/gguf-py/scripts

Latest commit: b3a54291cb by teleprint-me — Merge branch 'huggingface-hub-api' into auto-model-support (2024-05-25 20:28:40 -04:00)
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| __init__.py | convert : support models with multiple chat templates (#6588) | 2024-04-18 14:49:01 +03:00 |
| gguf-convert-endian.py | convert.py : add python logging instead of print() (#6511) | 2024-05-03 22:36:41 +03:00 |
| gguf-dump.py | convert-hf : save memory with lazy evaluation (#7075) | 2024-05-08 18:16:38 -04:00 |
| gguf-gen-pre.py | chore: Add prototyped CLI options | 2024-05-22 19:59:33 -04:00 |
| gguf-new-metadata.py | gguf-py : add special token modification capability (#7166) | 2024-05-09 13:56:00 +03:00 |
| gguf-set-metadata.py | convert.py : add python logging instead of print() (#6511) | 2024-05-03 22:36:41 +03:00 |
| hub-model.py | feat: Add example script for downloading models | 2024-05-25 19:12:34 -04:00 |
| hub-vocab.py | refactor: Add function for building and parsing CLI arguments | 2024-05-25 14:41:13 -04:00 |
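The scripts above (gguf-dump.py, gguf-convert-endian.py, gguf-new-metadata.py, gguf-set-metadata.py) all begin by parsing the fixed-size header of a GGUF file. As a rough orientation — this is a minimal stdlib sketch, not the repo's actual implementation, and the helper name `read_gguf_header` is hypothetical — the GGUF spec defines the header as the magic bytes `GGUF`, a uint32 version, a uint64 tensor count, and a uint64 metadata key/value count, all little-endian:

```python
import struct

GGUF_MAGIC = b"GGUF"  # first four bytes of every GGUF file

def read_gguf_header(data: bytes) -> dict:
    # Hypothetical helper: unpack the fixed-size GGUF header.
    # "<4sIQQ" = little-endian: 4-byte magic, uint32 version,
    # uint64 tensor count, uint64 metadata key/value count.
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    return {"version": version, "tensors": n_tensors, "kv_pairs": n_kv}

# Build a tiny synthetic header (version 3, 2 tensors, 5 KV pairs)
# just to demonstrate the parse; real files carry metadata and
# tensor data after these 24 bytes.
fake_header = struct.pack("<4sIQQ", GGUF_MAGIC, 3, 2, 5)
print(read_gguf_header(fake_header))
```

The real scripts go further — gguf-dump.py walks the metadata key/value pairs and tensor infos that follow this header, and gguf-convert-endian.py rewrites them with the opposite byte order — but the 24-byte prefix above is the common entry point.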