llama.cpp/include
Latest commit cd78e57c3a by Xuan-Son Nguyen: lora: count lora nodes in graph_max_nodes (#18469)
Squashed commit message:
* lora: count lora nodes in graph_max_nodes
* 3 nodes per weight
* 4 nodes
* keep track of n_lora_nodes from llama_model
* fix assert
* rm redundant header
* common: load adapters before context creation
* use 6 nodes
Committed 2025-12-30 15:53:12 +01:00
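The idea behind the change: each LoRA-adapted weight adds a few extra operations to the compute graph, so the preallocated graph node budget (graph_max_nodes) has to account for them, and adapters need to be loaded before the context is created so the count is known when the graph is sized. Below is a minimal sketch of that bookkeeping; the names (model_stats, count_lora_nodes, LORA_NODES_PER_WEIGHT) and the base-node formula are illustrative assumptions, not the actual llama.cpp code.

```cpp
// Illustrative sketch only: names and the base-node formula are assumptions,
// not the real llama.cpp implementation.
#include <algorithm>
#include <cstdint>

struct model_stats {
    uint32_t n_tensors    = 0; // tensors in the base model
    uint32_t n_lora_nodes = 0; // extra graph nodes contributed by LoRA adapters
};

// The PR settled on 6 extra graph nodes per LoRA-adapted weight
// (earlier revisions tried 3 and 4).
constexpr uint32_t LORA_NODES_PER_WEIGHT = 6;

// Called when an adapter is loaded; per the PR, adapters are loaded before
// the context is created, so this count is known when the graph is sized.
void count_lora_nodes(model_stats & m, uint32_t n_adapted_weights) {
    m.n_lora_nodes += n_adapted_weights * LORA_NODES_PER_WEIGHT;
}

// Graph node budget: a base estimate plus the LoRA contribution, so that
// applying adapters cannot overflow the preallocated compute graph.
uint32_t graph_max_nodes(const model_stats & m) {
    const uint32_t base = std::max<uint32_t>(1024u, 8u * m.n_tensors);
    return base + m.n_lora_nodes;
}
```

Keeping the LoRA contribution as a separate running count on the model (rather than re-deriving it at graph build time) matches the commit's "keep track of n_lora_nodes from llama_model" step and is what makes loading adapters before context creation necessary.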
llama-cpp.h | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
llama.h     | lora: count lora nodes in graph_max_nodes (#18469)               | 2025-12-30 15:53:12 +01:00