llama.cpp/models
Piotr Wilkin (ilintar) 34fcc5a4ac
model : Apertus model implementation (#15852)
* First attempt

* No permute during convert (fixes qk tensors), proper norm application.

* RoPE = NeoX

* Coherence!

* Migrate xielu params from tensors to hyperparameters

* Simple CUDA kernel

* Revert stupid LLM refactorings

* Chat template support

* configchecker / flake8 errors

* Reorder unary.cu

* I do conclude that LLMs are, in fact, stupid.

* Fix after merge

* Final newline

* Make xIELU an UNARY_OP

* Final newline

* Correctly account for parameter shift

* Argh.

* Update ggml/src/ggml-cpu/unary-ops.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Refactor: remove unused methods, inline and factorize softplus, add const modifiers

* Revert CUDA changes, implement xIELU as a separate OP

* Pesky newline

* Add float2half / half2float for F16 inputs/outputs

* CUDA variants, attempt 2

* Actually, attempt 3

* Update ggml/src/ggml-cuda/unary.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Missing convert header

* Proper formula and reference for xIELU in the comments.

* Modify unary-ops.cpp to add the functor-based logic alongside the template system to retain optimizations

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Add tensor mappings for Apertus to global list instead

* Fix lazy on scalars

* Update ggml/src/ggml-cuda/unary.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* Add comment about the constraints on positive/negative alpha

* Change `softplus` to `ggml_softplus`

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-10-02 20:43:22 +03:00
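The commit messages above describe implementing the xIELU activation (positive/negative alpha parameters constrained via softplus, `expm1` on the negative branch). A minimal Python sketch of that formulation follows; the parameter names, default `beta`/`eps` values, and the exact softplus constraints are assumptions based on the xIELU paper, not the ggml implementation itself:

```python
import math

def softplus(x: float) -> float:
    # numerically stable softplus: log(1 + e^x)
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def xielu(x: float, alpha_p_raw: float, alpha_n_raw: float,
          beta: float = 0.5, eps: float = -1e-6) -> float:
    # softplus keeps alpha_p strictly positive; alpha_n is offset by beta
    # so it stays >= beta (the positive/negative alpha constraints noted
    # in the commit log) -- exact constants here are illustrative.
    alpha_p = softplus(alpha_p_raw)
    alpha_n = beta + softplus(alpha_n_raw)
    if x > 0.0:
        # quadratic positive branch
        return alpha_p * x * x + beta * x
    # expm1 computes e^x - 1 accurately for small |x|
    return (math.expm1(min(x, eps)) - x) * alpha_n + beta * x
```

The softplus reparameterization lets the raw alpha tensors range over all reals while the effective alphas respect their sign constraints, which is presumably why the params were migrated from tensors to hyperparameters and `softplus` was factored out as `ggml_softplus`.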
| Name | Last commit | Date |
| --- | --- | --- |
| templates | model : Apertus model implementation (#15852) | 2025-10-02 20:43:22 +03:00 |
| .editorconfig | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| ggml-vocab-aquila.gguf | Work on the BPE tokenizer (#3252) | 2023-10-03 09:16:26 +02:00 |
| ggml-vocab-baichuan.gguf | Add more tokenizer tests (#3742) | 2023-10-24 09:17:17 +02:00 |
| ggml-vocab-bert-bge.gguf | llama : fix BPE pre-tokenization (#6920) | 2024-04-29 16:58:41 +03:00 |
| ggml-vocab-bert-bge.gguf.inp | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-bert-bge.gguf.out | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-command-r.gguf | command-r : add BPE pre-tokenization (#7063) | 2024-05-05 08:19:30 +03:00 |
| ggml-vocab-command-r.gguf.inp | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-command-r.gguf.out | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-deepseek-coder.gguf | llama : fix BPE pre-tokenization (#6920) | 2024-04-29 16:58:41 +03:00 |
| ggml-vocab-deepseek-coder.gguf.inp | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-deepseek-coder.gguf.out | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-deepseek-llm.gguf | llama : fix BPE pre-tokenization (#6920) | 2024-04-29 16:58:41 +03:00 |
| ggml-vocab-deepseek-llm.gguf.inp | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-deepseek-llm.gguf.out | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-falcon.gguf | llama : fix BPE pre-tokenization (#6920) | 2024-04-29 16:58:41 +03:00 |
| ggml-vocab-falcon.gguf.inp | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-falcon.gguf.out | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-gpt-2.gguf | llama : fix BPE pre-tokenization (#6920) | 2024-04-29 16:58:41 +03:00 |
| ggml-vocab-gpt-2.gguf.inp | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-gpt-2.gguf.out | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-gpt-neox.gguf | Add more tokenizer tests (#3742) | 2023-10-24 09:17:17 +02:00 |
| ggml-vocab-llama-bpe.gguf | llama : fix BPE pre-tokenization (#6920) | 2024-04-29 16:58:41 +03:00 |
| ggml-vocab-llama-bpe.gguf.inp | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-llama-bpe.gguf.out | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-llama-spm.gguf | llama : fix BPE pre-tokenization (#6920) | 2024-04-29 16:58:41 +03:00 |
| ggml-vocab-llama-spm.gguf.inp | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-llama-spm.gguf.out | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-mpt.gguf | llama : fix BPE pre-tokenization (#6920) | 2024-04-29 16:58:41 +03:00 |
| ggml-vocab-mpt.gguf.inp | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-mpt.gguf.out | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-nomic-bert-moe.gguf | tests : improve UGM tokenizer test coverage (#13773) | 2025-05-25 16:22:29 +02:00 |
| ggml-vocab-phi-3.gguf | Per token attributes (#7685) | 2024-06-04 09:17:17 +02:00 |
| ggml-vocab-phi-3.gguf.inp | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-phi-3.gguf.out | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-qwen2.gguf | llama : add BPE pre-tokenization for Qwen2 (#7114) | 2024-05-08 15:06:43 +03:00 |
| ggml-vocab-qwen2.gguf.inp | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-qwen2.gguf.out | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-refact.gguf | tests : add test-tokenizer-0.sh + fix some tokenizers (#7036) | 2024-05-04 08:32:32 +03:00 |
| ggml-vocab-refact.gguf.inp | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-refact.gguf.out | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-starcoder.gguf | llama : fix BPE pre-tokenization (#6920) | 2024-04-29 16:58:41 +03:00 |
| ggml-vocab-starcoder.gguf.inp | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |
| ggml-vocab-starcoder.gguf.out | convert : allow partial update to the chkhsh pre-tokenizer list (#13847) | 2025-05-30 12:24:37 +02:00 |