llama.cpp/examples/parallel

Simplified simulation of serving incoming requests in parallel
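A minimal invocation sketch, assuming a locally built `parallel` binary and a GGUF model on disk; the model path and numeric values below are illustrative placeholders, and the flags (`-np` for the number of parallel decoding sequences, `-ns` for the number of simulated requests, `-cb` for continuous batching) come from llama.cpp's common argument parser:

```shell
# Simulate serving incoming requests in parallel
# (model path and values are placeholders, not prescriptive):
#   -np 8  : decode up to 8 sequences in parallel
#   -ns 64 : simulate 64 incoming client requests
#   -cb    : enable continuous batching
./parallel -m models/7B/ggml-model.gguf -np 8 -ns 64 -cb
```

With continuous batching enabled, finished sequences free their slot immediately so a waiting request can start decoding without draining the whole batch first.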