llama.cpp/examples/lookup

Demonstration of Prompt Lookup Decoding

https://github.com/apoorvumang/prompt-lookup-decoding

The key parameters for lookup decoding are `ngram_min`, `ngram_max`, and `n_draft`. The first two set the minimum and maximum length of the n-grams to search for in the prompt; `n_draft` sets how many subsequent tokens to draft when a match is found.

More info:

- https://github.com/ggml-org/llama.cpp/pull/4484
- https://github.com/ggml-org/llama.cpp/issues/4226