llama.cpp/gguf-py/gguf
Bias92 9e1591780d fix: correct EXAONE3 FFN_DOWN tensor mapping prefix
The EXAONE3 FFN_DOWN mapping used the prefix "model.layers.h.{bid}.mlp.c_proj",
which is incorrect: EXAONE uses the "transformer.h.{bid}.mlp.c_proj" prefix
(matching gpt2/refact/qwen/jais). The correct mapping already exists on
a different line, but without the "exaone" comment tag.

This fix:
- Removes the dead/unreachable mapping with wrong prefix "model.layers.h."
- Adds "exaone" tag to the existing correct mapping for documentation

The wrong mapping was never hit at runtime, since EXAONE weights use
"transformer.h.{bid}.mlp.c_proj", which was already mapped; but the
dead entry was misleading.

Signed-off-by: User <user@example.com>
Signed-off-by: Bias92 <pewpewplay315@gmail.com>
2026-03-04 00:42:58 +09:00
scripts gguf-py : fix passing non-native endian tensors (editor-gui and new-metadata) (#17553) 2025-11-28 20:53:01 +01:00
__init__.py convert-*.py: GGUF Naming Convention Refactor and Metadata Override Refactor (#7499) 2024-07-18 20:40:15 +10:00
constants.py llama: Add option to merge gate and exp weights (#19139) 2026-02-26 21:01:08 +08:00
gguf.py gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981) 2023-11-11 08:04:50 +03:00
gguf_reader.py ggml/gguf : prevent integer overflows (#19856) 2026-02-24 20:17:11 +02:00
gguf_writer.py ggml/gguf : prevent integer overflows (#19856) 2026-02-24 20:17:11 +02:00
lazy.py convert : handle compressed-tensors quant method (#17069) 2025-11-09 09:45:50 -05:00
metadata.py llama: introduce support for model-embedded sampling parameters (#17120) 2025-11-25 09:56:07 +08:00
py.typed convert : various script cleanups/fixes + merges and special token handling (#2842) 2023-08-30 11:25:50 +03:00
quants.py gguf-py : add Numpy MXFP4 de/quantization support (#15111) 2025-08-08 17:48:26 -04:00
tensor_mapping.py fix: correct EXAONE3 FFN_DOWN tensor mapping prefix 2026-03-04 00:42:58 +09:00
utility.py gguf-py : do not align the data start offset (#18291) 2025-12-22 20:25:16 +01:00
vocab.py convert : support latest mistral-common (fix conversion with --mistral-format) (#17712) 2025-12-03 21:15:04 +01:00