llama.cpp/gguf-py/gguf
Latest commit: ff55414c42 by Piotr Wilkin (ilintar)
model : Qwen3 Next (#16095)
* Qwen3 Next - cleaned up version

* Whitespaces and stuff

* Correct minor errors

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Misc. fixes.

* Clean up code, add missing hybrid qualifier

* Did someone transpose the SOLVE_TRI result matrix? Perhaps...

* Whitespace

* Proper tensors for cb calls

* Use llama-graph.h vertical alignment

* BROKEN: chunking

* Set new tensors as inputs.

* Proper chunk logic

* It's the circle of life...

* More shenanigans for n_seq > 1

* Nail in the coffin?

* Fix Windows build

* Eh, one fails on Windows, the other fails on Mac... just use general capture.

* quant : cleanup

* model : cleanup

* qwen3 : cleanup

* cont : cleanup

* cont : cleanup

* ggml : revert change

* qwen3 : cleanup

* cont : cleanup

* Readd cmath

* qwen3 : fix typo

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Usual suspects

* fix my bad suggestion

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-11-28 12:02:56 +01:00
| Name | Last commit | Last commit date |
|------|-------------|------------------|
| scripts/ | gguf-py : skip endian-conversion of MXFP4 data (#17523) | 2025-11-27 11:35:38 +01:00 |
| __init__.py | convert-*.py: GGUF Naming Convention Refactor and Metadata Override Refactor (#7499) | 2024-07-18 20:40:15 +10:00 |
| constants.py | model : Qwen3 Next (#16095) | 2025-11-28 12:02:56 +01:00 |
| gguf.py | gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) | 2023-11-11 08:04:50 +03:00 |
| gguf_reader.py | gguf-py : display the invalid gguf type (#13687) | 2025-05-21 16:33:54 +02:00 |
| gguf_writer.py | convert : fix big-endian conversion (#17431) | 2025-11-25 14:18:16 +01:00 |
| lazy.py | convert : handle compressed-tensors quant method (#17069) | 2025-11-09 09:45:50 -05:00 |
| metadata.py | llama: introduce support for model-embedded sampling parameters (#17120) | 2025-11-25 09:56:07 +08:00 |
| py.typed | convert : various script cleanups/fixes + merges and special token handling (#2842) | 2023-08-30 11:25:50 +03:00 |
| quants.py | gguf-py : add Numpy MXFP4 de/quantization support (#15111) | 2025-08-08 17:48:26 -04:00 |
| tensor_mapping.py | model : Qwen3 Next (#16095) | 2025-11-28 12:02:56 +01:00 |
| utility.py | convert : parse safetensors directly (#15667) | 2025-11-09 09:49:40 -05:00 |
| vocab.py | convert : Make mistral-common dependency optional (#16738) | 2025-10-23 15:54:46 +02:00 |