llama.cpp/include
Last commit: a0d9dd20ee by Johannes Gäßler, "ggml: backend-agnostic tensor parallelism" (2026-02-11 14:12:33 +01:00)
llama-cpp.h    lora: make sure model keep track of associated adapters (#18490)    2026-01-15 10:24:28 +01:00
llama.h        ggml: backend-agnostic tensor parallelism                           2026-02-11 14:12:33 +01:00