The EXAONE3 FFN_DOWN mapping used prefix "model.layers.h.{bid}.mlp.c_proj"
which is incorrect — EXAONE uses "transformer.h.{bid}.mlp.c_proj" prefix
(matching gpt2/refact/qwen/jais). The correct mapping already exists on
a different line but without the "exaone" comment tag.
This fix:
- Removes the dead/unreachable mapping with the wrong "model.layers.h." prefix
- Adds the "exaone" tag to the existing correct mapping for documentation
The wrong mapping was never hit at runtime because EXAONE weights use
"transformer.h.{bid}.mlp.c_proj" which was already mapped, but the
dead entry is misleading and could cause confusion.
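A minimal sketch of the situation (illustrative only, not the actual gguf-py code): tensor_mapping.py keys each tensor type to a tuple of per-architecture name templates, where "{bid}" is the block index. Because EXAONE checkpoints only ever contain the "transformer.h." form, an entry with the "model.layers.h." prefix can never match:

```python
# Hypothetical simplification of the FFN_DOWN template tuple in
# gguf-py/gguf/tensor_mapping.py; names and helper are illustrative.
FFN_DOWN_TEMPLATES = (
    "transformer.h.{bid}.mlp.c_proj",    # gpt2 refact qwen jais exaone
    # "model.layers.h.{bid}.mlp.c_proj"  # removed: wrong prefix, never matched
)

def matches_ffn_down(name: str, n_blocks: int) -> bool:
    """Return True if `name` matches any FFN_DOWN template for some block."""
    candidates = {t.format(bid=b) for t in FFN_DOWN_TEMPLATES
                  for b in range(n_blocks)}
    return name in candidates

# An EXAONE weight name resolves via the "transformer.h." template:
assert matches_ffn_down("transformer.h.0.mlp.c_proj", n_blocks=32)
# The removed "model.layers.h." form never appears in EXAONE checkpoints:
assert not matches_ffn_down("model.layers.h.0.mlp.c_proj", n_blocks=32)
```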
Signed-off-by: User <user@example.com>
Signed-off-by: Bias92 <pewpewplay315@gmail.com>