llama.cpp/gguf-py/gguf
Latest commit: b3a54291cb by teleprint-me, "Merge branch 'huggingface-hub-api' into auto-model-support" (2024-05-25 20:28:40 -04:00)
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | chore: Apply isort to package gguf init | 2024-05-18 14:33:22 -04:00 |
| constants.py | Merge branch 'huggingface-hub-api' into auto-model-support | 2024-05-25 20:28:40 -04:00 |
| gguf.py | gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) | 2023-11-11 08:04:50 +03:00 |
| gguf_reader.py | convert-hf : save memory with lazy evaluation (#7075) | 2024-05-08 18:16:38 -04:00 |
| gguf_writer.py | llama : add phi3 128K model support (#7225) | 2024-05-21 23:28:32 +03:00 |
| huggingface_hub.py | feat: Add static methods for resolving model types and model extensions | 2024-05-25 19:11:56 -04:00 |
| lazy.py | convert-hf : support direct Q8_0 conversion (#7234) | 2024-05-13 14:10:51 -04:00 |
| py.typed | convert : various script cleanups/fixes + merges and special token handling (#2842) | 2023-08-30 11:25:50 +03:00 |
| quants.py | convert-hf : support direct Q8_0 conversion (#7234) | 2024-05-13 14:10:51 -04:00 |
| tensor_mapping.py | llama : add Jina Embeddings architecture (#6826) | 2024-05-11 10:46:09 +03:00 |
| vocab.py | convert-hf : save memory with lazy evaluation (#7075) | 2024-05-08 18:16:38 -04:00 |