| Name | Last commit message | Last commit date |
| --- | --- | --- |
| batched | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| batched-bench | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| batched.swift | llama : llama_perf + option to disable timings during decode (#9355) | 2024-09-13 09:53:38 +03:00 |
| convert-llama2c-to-ggml | … | |
| cvector-generator | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| deprecation-warning | Update deprecation-warning.cpp (#10619) | 2024-12-04 23:19:20 +01:00 |
| embedding | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| eval-callback | … | |
| export-lora | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| gbnf-validator | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| gen-docs | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| gguf | … | |
| gguf-hash | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| gguf-split | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00 |
| gritlm | … | |
| imatrix | make : deprecate (#10514) | 2024-12-02 21:22:53 +02:00 |
| infill | readme : add option, update default value, fix formatting (#10271) | 2024-12-03 12:50:08 +02:00 |
| jeopardy | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-bench | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00 |
| llama.android | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00 |
| llama.swiftui | llama : use cmake for swift build (#10525) | 2024-12-08 13:14:54 +02:00 |
| llava | clip : add sycl support (#10574) | 2024-12-04 01:26:37 +01:00 |
| lookahead | … | |
| lookup | … | |
| main | readme : add option, update default value, fix formatting (#10271) | 2024-12-03 12:50:08 +02:00 |
| main-cmake-pkg | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| parallel | … | |
| passkey | … | |
| perplexity | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| quantize | Update README.md (#10772) | 2024-12-11 16:16:32 +01:00 |
| quantize-stats | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| retrieval | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00 |
| rpc | ggml : move CPU backend to a separate file (#10144) | 2024-11-03 19:34:08 +01:00 |
| run | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| save-load-state | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| server | common : improve -ctv -ctk CLI arguments (#10806) | 2024-12-12 22:53:05 +01:00 |
| simple | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| simple-chat | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| speculative | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| speculative-simple | … | |
| sycl | [SYCL] set context default value to avoid memory issue, update guide (#9476) | 2024-09-18 08:30:31 +08:00 |
| tokenize | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00 |
| CMakeLists.txt | common : improve -ctv -ctk CLI arguments (#10806) | 2024-12-12 22:53:05 +01:00 |
| Miku.sh | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00 |
| chat-13B.sh | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat-persistent.sh | … | |
| chat-vicuna.sh | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat.sh | … | |
| convert_legacy_llama.py | … | |
| json_schema_pydantic_example.py | … | |
| json_schema_to_grammar.py | … | |
| llama.vim | … | |
| llm.vim | … | |
| pydantic_models_to_grammar.py | … | |
| pydantic_models_to_grammar_examples.py | … | |
| reason-act.sh | … | |
| regex_to_grammar.py | … | |
| server-llama2-13B.sh | `build`: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| server_embd.py | … | |
| ts-type-to-grammar.sh | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00 |