llama.cpp/include
Latest commit 7ba463b38c by Gabe Goodhart (2025-06-17 14:54:19 -06:00):
fix: Remove llama_model_is_hybrid_Recurrent public API
https://github.com/ggml-org/llama.cpp/pull/13979#discussion_r2141728423

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
llama-cpp.h    llama : add `llama_vocab`, functions -> methods, naming (#11110)    2025-01-12 11:32:42 +02:00
llama.h        fix: Remove llama_model_is_hybrid_Recurrent public API              2025-06-17 14:54:19 -06:00