llama.cpp/examples

Latest commit 90a0349349 by LostRuins Concedo: recommended way to check if the version is 0.3, as requested by ngxson (2025-01-19 21:43:59 +08:00)
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| batched | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| batched-bench | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| batched.swift | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| convert-llama2c-to-ggml | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| cvector-generator | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| deprecation-warning | Update deprecation-warning.cpp (#10619) | 2024-12-04 23:19:20 +01:00 |
| embedding | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| eval-callback | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| export-lora | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| gbnf-validator | llama : minor grammar refactor (#10897) | 2024-12-19 17:42:13 +02:00 |
| gen-docs | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| gguf | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gguf-hash | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gguf-split | ci : use -no-cnv in gguf-split tests (#11254) | 2025-01-15 18:28:35 +02:00 |
| gritlm | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| imatrix | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| infill | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| jeopardy | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-bench | rpc : early register backend devices (#11262) | 2025-01-17 10:57:09 +02:00 |
| llama.android | llama.android: add field formatChat to control whether to parse special tokens when send message (#11270) | 2025-01-17 14:57:56 +02:00 |
| llama.swiftui | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llava | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| lookahead | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| lookup | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| main | cli : auto activate conversation mode if chat template is available (#11214) | 2025-01-13 20:18:12 +01:00 |
| main-cmake-pkg | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| parallel | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| passkey | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| perplexity | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| quantize | ci : use -no-cnv in gguf-split tests (#11254) | 2025-01-15 18:28:35 +02:00 |
| quantize-stats | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| retrieval | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| rpc | rpc-server : add support for the SYCL backend (#10934) | 2024-12-23 10:39:30 +02:00 |
| run | Reset color before we exit (#11205) | 2025-01-12 18:23:10 +00:00 |
| save-load-state | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| server | server : Improve code snippets direction between RTL text (#11221) | 2025-01-14 11:39:33 +01:00 |
| simple | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| simple-chat | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| speculative | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| speculative-simple | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| sycl | [SYCL] set context default value to avoid memory issue, update guide (#9476) | 2024-09-18 08:30:31 +08:00 |
| tokenize | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| tts | recommended way to check if the version is 0.3, as requested by ngxson | 2025-01-19 21:43:59 +08:00 |
| CMakeLists.txt | tts : add OuteTTS support (#10784) | 2024-12-18 19:27:21 +02:00 |
| Miku.sh | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00 |
| chat-13B.sh | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat-persistent.sh | scripts : fix pattern and get n_tokens in one go (#10221) | 2024-11-09 09:06:54 +02:00 |
| chat-vicuna.sh | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat.sh | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| convert_legacy_llama.py | metadata: Detailed Dataset Authorship Metadata (#8875) | 2024-11-13 21:10:38 +11:00 |
| json_schema_pydantic_example.py | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00 |
| json_schema_to_grammar.py | grammar : fix JSON Schema for string regex with top-level alt. (#9903) | 2024-10-16 19:03:24 +03:00 |
| llama.vim | llama.vim : bump generation time limit to 3s [no ci] | 2024-10-23 17:16:56 +03:00 |
| llm.vim | llm.vim : stop generation at multiple linebreaks, bind to `<F2>` (#2879) | 2023-08-30 09:50:55 +03:00 |
| pydantic_models_to_grammar.py | pydantic : replace uses of `__annotations__` with `get_type_hints` (#8474) | 2024-07-14 19:51:21 -04:00 |
| pydantic_models_to_grammar_examples.py | examples : Rewrite pydantic_models_to_grammar_examples.py (#8493) | 2024-07-20 22:09:17 -04:00 |
| reason-act.sh | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| regex_to_grammar.py | py : switch to snake_case (#8305) | 2024-07-05 07:53:33 +03:00 |
| server-llama2-13B.sh | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| server_embd.py | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00 |
| ts-type-to-grammar.sh | JSON schema conversion: faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00 |