| File | Last commit message | Date |
| --- | --- | --- |
| CMakeLists.txt | server : remove old LLAMA_SERVER_SSL (#16290) | 2025-09-27 19:17:08 +03:00 |
| arg.cpp | common : use cpp-httplib as a cURL alternative for downloads (#16185) | 2025-09-26 14:12:19 +03:00 |
| arg.h | common : add common_remote_get_content (#13123) | 2025-04-26 22:58:12 +02:00 |
| base64.hpp | llava : expose as a shared library for downstream projects (#3613) | 2023-11-07 00:36:23 +03:00 |
| build-info.cpp.in | cmake: Add ability to pass in LLAMA_BUILD_NUMBER/COMMIT (#14167) | 2025-06-13 10:38:52 +02:00 |
| chat-parser.cpp | chat : support Granite model reasoning and tool call (#14864) | 2025-08-06 20:27:30 +02:00 |
| chat-parser.h | llama-chat : Do not throw when tool parsing fails (#14012) | 2025-06-14 17:25:15 +01:00 |
| chat.cpp | chat : Fix streaming parser for granite models (#15682) | 2025-09-19 09:57:30 -06:00 |
| chat.h | chat : Deepseek V3.1 reasoning and tool calling support (OpenAI Style) (#15533) | 2025-09-08 16:59:48 +02:00 |
| common.cpp | devops : add s390x & ppc64le CI (#15925) | 2025-09-27 02:03:33 +08:00 |
| common.h | model : add GroveMoE support (#15510) | 2025-09-25 19:50:28 +02:00 |
| console.cpp | console : utf-8 fix for windows stdin (#9690) | 2024-09-30 11:23:42 +03:00 |
| console.h | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| json-partial.cpp | sync : vendor (#13901) | 2025-05-30 16:25:45 +03:00 |
| json-partial.h | sync : vendor (#13901) | 2025-05-30 16:25:45 +03:00 |
| json-schema-to-grammar.cpp | common : fix corrupted memory error on json grammar initialization (#16038) | 2025-09-17 11:08:02 +03:00 |
| json-schema-to-grammar.h | sync : vendor (#13901) | 2025-05-30 16:25:45 +03:00 |
| llguidance.cpp | llguidance : set tokenizer slices to default (#13424) | 2025-05-10 17:19:52 +02:00 |
| log.cpp | log : implement --log-colors with always/never/auto (#15792) | 2025-09-05 19:43:59 +01:00 |
| log.h | log : implement --log-colors with always/never/auto (#15792) | 2025-09-05 19:43:59 +01:00 |
| ngram-cache.cpp | ggml : portability fixes for VS 2017 (#12150) | 2025-03-04 18:53:26 +02:00 |
| ngram-cache.h | llama : use LLAMA_TOKEN_NULL (#11062) | 2025-01-06 10:52:15 +02:00 |
| regex-partial.cpp | common : add partial regex support (#12808) | 2025-05-14 19:50:57 +01:00 |
| regex-partial.h | common : add partial regex support (#12808) | 2025-05-14 19:50:57 +01:00 |
| sampling.cpp | llama : print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| sampling.h | sampling : optimize samplers by reusing bucket sort (#15665) | 2025-08-31 20:41:02 +03:00 |
| speculative.cpp | sampling : optimize samplers by reusing bucket sort (#15665) | 2025-08-31 20:41:02 +03:00 |
| speculative.h | server : implement universal assisted decoding (#12635) | 2025-07-31 14:25:23 +02:00 |