* llama : remove the separate scale tensors of BitNet b1.58; they won't be needed, since the remaining ternary quant types have built-in scales.
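
Because each ternary quant block carries its own scale, dequantization needs nothing beyond the quantized tensor itself, which is why the separate scale tensors become redundant. The sketch below illustrates the idea; the block size, field names, and 2-bit packing are assumptions for illustration only, not the actual ggml block definitions.

```c
// Illustrative ternary quant block with a built-in per-block scale.
// Field names, block size, and packing are hypothetical, not ggml's.
#include <stddef.h>
#include <stdint.h>

#define TQ_BLOCK_SIZE 256                  // assumed number of weights per block

typedef struct {
    float   d;                             // built-in scale (real types would use fp16)
    uint8_t qs[TQ_BLOCK_SIZE / 4];         // packed ternary digits, 4 per byte (2 bits each)
} block_tq_example;

// Dequantize one block: w[i] = d * t[i], where t[i] is in {-1, 0, +1}.
// No lookup into a separate scale tensor is required.
void dequantize_block_tq_example(const block_tq_example * x, float * y) {
    for (size_t i = 0; i < TQ_BLOCK_SIZE; ++i) {
        const uint8_t code = (x->qs[i / 4] >> (2 * (i % 4))) & 0x3;  // stored as 0, 1 or 2
        y[i] = x->d * (float)((int) code - 1);                       // mapped to -1, 0, +1
    }
}
```

With a layout like this, the scale travels with the quantized data itself, so dropping the separate scale tensors loses no information.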