llama.cpp/examples
Dean 7ab7b733bb
android : fix utf8 decoding error (#5935)
* examples: fix utf8 decoding error

some models have a tokenizer that can decode a token id into an incomplete utf8 sequence, so the output must be validated and buffered until the next token completes it
one example is https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat-GGUF/resolve/main/qwen1_5-1_8b-chat-q4_0.gguf, where token id 18137 decodes to such a partial sequence
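
The fix follows a validate-and-buffer pattern. Below is a minimal C++ sketch of that idea, not the actual #5935 patch; `decode_token` and `emit` are hypothetical stand-ins for the example's token-to-piece and output steps:

```cpp
// Minimal sketch of the validate-and-buffer idea (assumed names, not the
// actual llama.android patch): hold decoded bytes until they form
// structurally complete UTF-8, then emit.
#include <cstdint>
#include <string>

// Returns true if `s` contains only structurally complete UTF-8 sequences
// (valid lead byte + the expected number of continuation bytes).
// Note: this does not reject overlong encodings or surrogate ranges.
static bool is_complete_utf8(const std::string & s) {
    size_t i = 0;
    while (i < s.size()) {
        const uint8_t c = (uint8_t) s[i];
        size_t len;
        if      (c < 0x80)           len = 1;   // ASCII
        else if ((c & 0xE0) == 0xC0) len = 2;   // 110xxxxx
        else if ((c & 0xF0) == 0xE0) len = 3;   // 1110xxxx
        else if ((c & 0xF8) == 0xF0) len = 4;   // 11110xxx
        else return false;                      // stray continuation byte
        if (i + len > s.size()) return false;   // sequence cut off mid-way
        for (size_t j = 1; j < len; ++j) {
            if (((uint8_t) s[i + j] & 0xC0) != 0x80) return false;
        }
        i += len;
    }
    return true;
}

// Usage in a token-streaming loop (decode_token/emit are illustrative):
//
//     std::string pending;
//     pending += decode_token(id);        // token id -> raw bytes
//     if (is_complete_utf8(pending)) {
//         emit(pending);                  // safe to hand to the UI layer
//         pending.clear();
//     }                                   // else keep buffering; token 18137
//                                         // above is exactly this case
```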

* android : minor

---------

Co-authored-by: zhangfuwen <zhangfuwen@foxmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-03-10 22:03:17 +02:00
baby-llama                 code : normalize enum names (#5697)  2024-02-25 12:09:09 +02:00
batched                    llama : support Mamba Selective State Space Models (#5328)  2024-03-08 17:31:00 -05:00
batched-bench              llama : support Mamba Selective State Space Models (#5328)  2024-03-08 17:31:00 -05:00
batched.swift
beam-search
benchmark                  ggml : remove old quantization functions (#5942)  2024-03-09 15:53:59 +02:00
convert-llama2c-to-ggml
embedding                  server : normalize embeddings (#5956)  2024-03-09 14:27:58 +02:00
export-lora
finetune                   code : normalize enum names (#5697)  2024-02-25 12:09:09 +02:00
gguf
gritlm                     llama : add support for GritLM (#5959)  2024-03-10 17:56:30 +02:00
imatrix
infill                     convert : automatically fall back to HfVocab if tokenizer.model doesn't exist (#5821)  2024-03-02 12:27:26 -05:00
jeopardy
llama-bench                llama-bench : add embeddings option (#5924)  2024-03-07 16:32:38 +02:00
llama.android              android : fix utf8 decoding error (#5935)  2024-03-10 22:03:17 +02:00
llama.swiftui
llava                      ggml : remove old quantization functions (#5942)  2024-03-09 15:53:59 +02:00
lookahead
lookup
main                       main : support special tokens as reverse/anti prompt (#5847)  2024-03-04 09:57:20 +02:00
main-cmake-pkg
parallel                   llama : support Mamba Selective State Space Models (#5328)  2024-03-08 17:31:00 -05:00
passkey                    llama : fix defrag bugs + add parameter (#5735)  2024-02-27 14:35:51 +02:00
perplexity                 perplexity : support using multiple sequences to allow larger batch sizes (#5946)  2024-03-09 19:55:54 +01:00
quantize                   IQ4_XS: a 4.25 bpw quantization (#5747)  2024-02-27 16:34:24 +02:00
quantize-stats
save-load-state
server                     server: ci: windows build and tests (#5968)  2024-03-10 18:17:47 +01:00
simple
speculative                fix speculative decoding build on windows (#5874)  2024-03-04 22:23:06 -05:00
sycl                       Support multiple GPUs (split mode) on SYCL backend (#5806)  2024-03-02 19:49:30 +08:00
tokenize
train-text-from-scratch    code : normalize enum names (#5697)  2024-02-25 12:09:09 +02:00
CMakeLists.txt             llama : add support for GritLM (#5959)  2024-03-10 17:56:30 +02:00
Miku.sh
alpaca.sh
base-translate.sh
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
gpt4all.sh
json-schema-to-grammar.py  examples : support minItems/maxItems in JSON grammar converter (#5039)  2024-02-19 16:14:07 +02:00
llama.vim
llama2-13b.sh
llama2.sh
llm.vim
make-ggml.py
pydantic-models-to-grammar-examples.py
pydantic_models_to_grammar.py
reason-act.sh
server-embd.py             server : refactor (#5882)  2024-03-07 11:41:53 +02:00
server-llama2-13B.sh