llama.cpp/include
Latest commit: 39b96f8fe1 by Johannes Gäßler — "ggml: backend-agnostic tensor parallelism" (2026-02-05 21:49:34 +01:00)
llama-cpp.h    lora: make sure model keep track of associated adapters (#18490)    2026-01-15 10:24:28 +01:00
llama.h        ggml: backend-agnostic tensor parallelism                           2026-02-05 21:49:34 +01:00