llama.cpp/examples
Latest commit: f112d198cd "Update deprecation-warning.cpp (#10619)" by aryantandon01, 2024-12-04 23:19:20 +01:00 (fixed path separator handling for cross-platform support on Windows file systems)
Name                                    Last commit                                                               Date
batched                                 ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
batched-bench                           ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
batched.swift
convert-llama2c-to-ggml                 make : deprecate (#10514)                                                 2024-12-02 21:22:53 +02:00
cvector-generator                       ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
deprecation-warning                     Update deprecation-warning.cpp (#10619)                                   2024-12-04 23:19:20 +01:00
embedding                               ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
eval-callback                           ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
export-lora                             ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
gbnf-validator                          ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
gen-docs                                ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
gguf                                    ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
gguf-hash                               ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
gguf-split                              ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
gritlm                                  ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
imatrix                                 make : deprecate (#10514)                                                 2024-12-02 21:22:53 +02:00
infill                                  readme : add option, update default value, fix formatting (#10271)        2024-12-03 12:50:08 +02:00
jeopardy
llama-bench                             ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
llama.android                           llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745)  2024-10-18 23:18:01 +02:00
llama.swiftui
llava                                   clip : add sycl support (#10574)                                          2024-12-04 01:26:37 +01:00
lookahead                               ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
lookup                                  ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
main                                    readme : add option, update default value, fix formatting (#10271)        2024-12-03 12:50:08 +02:00
main-cmake-pkg                          ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
parallel                                ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
passkey                                 ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
perplexity                              ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
quantize                                ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
quantize-stats                          ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
retrieval                               ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
rpc
run                                     ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
save-load-state                         ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
server                                  server : fix speculative decoding with context shift (#10641)             2024-12-04 22:38:20 +02:00
simple                                  ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
simple-chat                             ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
speculative                             ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
speculative-simple                      ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
sycl
tokenize                                ggml : move AMX to the CPU backend (#10570)                               2024-11-29 21:54:58 +01:00
CMakeLists.txt                          cmake : enable warnings in llama (#10474)                                 2024-11-26 14:18:08 +02:00
Miku.sh
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
convert_legacy_llama.py
json_schema_pydantic_example.py
json_schema_to_grammar.py
llama.vim
llm.vim
pydantic_models_to_grammar.py
pydantic_models_to_grammar_examples.py
reason-act.sh
regex_to_grammar.py
server-llama2-13B.sh
server_embd.py
ts-type-to-grammar.sh
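The latest commit above (#10619) concerns path-separator handling in deprecation-warning.cpp: Windows accepts both '/' and '\' as separators, so code that searches only for '/' can misparse Windows paths. The following is a minimal sketch of the general technique, not the actual patch; the helper name `filename_from_path` is hypothetical.

```cpp
#include <string>

// Hypothetical helper (not the actual code from #10619): return the
// basename of a path, treating both '/' and '\\' as separators so the
// same logic behaves correctly on Windows and POSIX file systems.
static std::string filename_from_path(const std::string & path) {
    // find_last_of scans for the last occurrence of ANY character in the
    // set, so a mixed path such as "a\\b/c" is also handled.
    const std::string::size_type pos = path.find_last_of("/\\");
    return pos == std::string::npos ? path : path.substr(pos + 1);
}
```

Using `find_last_of` with the two-character set avoids separate Windows/POSIX branches while still accepting either separator style.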