llama.cpp/gguf-py/gguf
stefan d9452267a0 fix: QWEN2MOE support for expert_feed_forward_length
Previously, the expert feed-forward size was taken from n_ff (the dense intermediate size); it is now read from LLM_KV_EXPERT_FEED_FORWARD_LENGTH.

As a result, n_ff_exp and n_ff_shexp are now computed correctly.
2024-06-14 11:38:12 +00:00
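Both constants.py and gguf_writer.py in the listing below carry this commit. As a minimal sketch, assuming the add_expert_feed_forward_length and add_expert_shared_feed_forward_length setters this tree's gguf_writer.py provides, a converter could emit the distinct sizes like so (the file name and all sizes are illustrative placeholders, not values from the commit):

```python
# Minimal sketch: writing separate dense / expert / shared-expert
# feed-forward sizes with gguf-py's GGUFWriter. Assumes the
# add_expert_* setters present in this tree; all sizes are
# illustrative placeholders.
from gguf import GGUFWriter

writer = GGUFWriter("qwen2moe.gguf", arch="qwen2moe")
writer.add_architecture()

# The dense intermediate size still goes into the generic n_ff key ...
writer.add_feed_forward_length(5632)                # -> qwen2moe.feed_forward_length

# ... while the per-expert and shared-expert sizes now get their own
# keys instead of being inferred from n_ff:
writer.add_expert_feed_forward_length(1408)         # -> qwen2moe.expert_feed_forward_length
writer.add_expert_shared_feed_forward_length(5632)  # -> qwen2moe.expert_shared_feed_forward_length

writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.close()
```

On the llama.cpp side, LLM_KV_EXPERT_FEED_FORWARD_LENGTH maps to the {arch}.expert_feed_forward_length key that these setters write, which is what lets the loader stop falling back to n_ff.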
__init__.py        convert-hf : support direct Q8_0 conversion (#7234)                                  2024-05-13 14:10:51 -04:00
constants.py       fix: QWEN2MOE support for expert_feed_forward_length                                 2024-06-14 11:38:12 +00:00
gguf.py            gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)            2023-11-11 08:04:50 +03:00
gguf_reader.py     gguf-py : fix and simplify quantized shape round-trip (#7483)                        2024-05-25 11:11:48 +10:00
gguf_writer.py     fix: QWEN2MOE support for expert_feed_forward_length                                 2024-06-14 11:38:12 +00:00
lazy.py            convert-hf : support direct Q8_0 conversion (#7234)                                  2024-05-13 14:10:51 -04:00
py.typed           convert : various script cleanups/fixes + merges and special token handling (#2842)  2023-08-30 11:25:50 +03:00
quants.py          gguf-py : fix and simplify quantized shape round-trip (#7483)                        2024-05-25 11:11:48 +10:00
tensor_mapping.py  llama : add jina v2 base code (#7596)                                                2024-06-06 10:22:41 +03:00
vocab.py           Move convert.py to examples/convert-legacy-llama.py (#7430)                          2024-05-30 21:40:00 +10:00