llama.cpp/common
Latest commit: b2426e469e by Piotr Wilkin (ilintar)
chat : nemotron thinking & toolcalling support (#15676)
* feat: nemotron thinking & toolcalling support

* Trailing whitespaces

* Corrected template for Nemotron

* Template and parser fixes

* Final template and grammar changes

* Whitespace

* Always do lazy grammar processing, since the </think> tag will always be there.

* Allow extra content after toolcall

* Whitespace

* New tests: thinking + tools, tools + content, thinking + tools + content (new!)

* Whitespace

* Remove cURL test script
2025-09-05 01:22:22 +02:00
| File | Last commit | Date |
| --- | --- | --- |
| CMakeLists.txt | cmake : do not search for curl libraries by ourselves (#14613) | 2025-07-10 15:29:05 +03:00 |
| arg.cpp | Document the new max GPU layers default in help (#15771) | 2025-09-04 10:49:44 +01:00 |
| arg.h | common : add common_remote_get_content (#13123) | 2025-04-26 22:58:12 +02:00 |
| base64.hpp | llava : expose as a shared library for downstream projects (#3613) | 2023-11-07 00:36:23 +03:00 |
| build-info.cpp.in | cmake: Add ability to pass in LLAMA_BUILD_NUMBER/COMMIT (#14167) | 2025-06-13 10:38:52 +02:00 |
| chat-parser.cpp | chat : support Granite model reasoning and tool call (#14864) | 2025-08-06 20:27:30 +02:00 |
| chat-parser.h | llama-chat : Do not throw when tool parsing fails (#14012) | 2025-06-14 17:25:15 +01:00 |
| chat.cpp | chat : nemotron thinking & toolcalling support (#15676) | 2025-09-05 01:22:22 +02:00 |
| chat.h | chat : nemotron thinking & toolcalling support (#15676) | 2025-09-05 01:22:22 +02:00 |
| common.cpp | llama: use FA + max. GPU layers by default (#15434) | 2025-08-30 16:32:10 +02:00 |
| common.h | server : enable /slots by default and make it secure (#15630) | 2025-08-31 20:11:58 +03:00 |
| console.cpp | console : utf-8 fix for windows stdin (#9690) | 2024-09-30 11:23:42 +03:00 |
| console.h | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| json-partial.cpp | sync : vendor (#13901) | 2025-05-30 16:25:45 +03:00 |
| json-partial.h | sync : vendor (#13901) | 2025-05-30 16:25:45 +03:00 |
| json-schema-to-grammar.cpp | common : use std::string_view now that we target c++17 (#14319) | 2025-06-22 08:37:43 +03:00 |
| json-schema-to-grammar.h | sync : vendor (#13901) | 2025-05-30 16:25:45 +03:00 |
| llguidance.cpp | llguidance : set tokenizer slices to default (#13424) | 2025-05-10 17:19:52 +02:00 |
| log.cpp | Fix: Compile failure due to Microsoft STL breaking change (#11836) | 2025-02-12 21:36:11 +01:00 |
| log.h | cleanup: fix compile warnings associated with gnu_printf (#11811) | 2025-02-12 10:06:53 -04:00 |
| ngram-cache.cpp | ggml : portability fixes for VS 2017 (#12150) | 2025-03-04 18:53:26 +02:00 |
| ngram-cache.h | llama : use LLAMA_TOKEN_NULL (#11062) | 2025-01-06 10:52:15 +02:00 |
| regex-partial.cpp | `common`: add partial regex support (#12808) | 2025-05-14 19:50:57 +01:00 |
| regex-partial.h | `common`: add partial regex support (#12808) | 2025-05-14 19:50:57 +01:00 |
| sampling.cpp | sampling : optimize samplers by reusing bucket sort (#15665) | 2025-08-31 20:41:02 +03:00 |
| sampling.h | sampling : optimize samplers by reusing bucket sort (#15665) | 2025-08-31 20:41:02 +03:00 |
| speculative.cpp | sampling : optimize samplers by reusing bucket sort (#15665) | 2025-08-31 20:41:02 +03:00 |
| speculative.h | server : implement universal assisted decoding (#12635) | 2025-07-31 14:25:23 +02:00 |