llama.cpp/gguf-py/scripts
Latest commit: brian khuu, 3a55ae4d72, "gguf-dump.py: dump_metadata() should print to stdout" (2024-04-30 19:00:11 +10:00)
__init__.py               convert : support models with multiple chat templates (#6588)                   2024-04-18 14:49:01 +03:00
gguf-convert-endian.py    gguf-convert-endian.py: refactor convert_byteorder() to use tqdm progressbar   2024-04-30 19:00:11 +10:00
gguf-dump.py              gguf-dump.py: dump_metadata() should print to stdout                           2024-04-30 19:00:11 +10:00
gguf-new-metadata.py      convert : support models with multiple chat templates (#6588)                  2024-04-18 14:49:01 +03:00
gguf-set-metadata.py      *.py: logging basicConfig refactor to use conditional expression               2024-04-30 19:00:11 +10:00