* llama: use max. GPU layers by default, auto -fa
* ggml-backend: abort instead of segfault

Files:

* llama-cpp.h
* llama.h
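
The first bullet changes llama.cpp's defaults so that as many layers as possible are offloaded to the GPU and flash attention is chosen automatically. A minimal sketch of making the offload request explicit through the C API follows; it assumes the current `llama.h` entry points (`llama_model_load_from_file` / `llama_model_free` — older releases use differently named loaders), and `"model.gguf"` is a placeholder path.

```c
// Sketch: explicitly requesting full GPU offload via the llama.cpp C API.
#include <stdio.h>
#include "llama.h"

int main(void) {
    llama_backend_init();

    struct llama_model_params mparams = llama_model_default_params();
    // With this change the default already offloads as many layers as
    // possible; setting a large value here just makes that explicit.
    mparams.n_gpu_layers = 999;

    struct llama_model * model = llama_model_load_from_file("model.gguf", mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model\n");
        llama_backend_free();
        return 1;
    }

    printf("model loaded with max GPU offload requested\n");

    llama_model_free(model);
    llama_backend_free();
    return 0;
}
```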
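
The second bullet describes a fail-fast fix: when ggml-backend detects invalid state, it now aborts with a diagnostic instead of crashing on a null-pointer dereference. The sketch below illustrates that general pattern only; `check_backend` and the struct are hypothetical stand-ins, not the actual ggml-backend code.

```c
// Sketch of the fail-fast pattern behind "abort instead of segfault".
#include <stdio.h>
#include <stdlib.h>

struct backend { const char * name; };

static void check_backend(const struct backend * b) {
    if (b == NULL) {
        // Before: a null backend would be dereferenced and segfault.
        // After: abort deliberately, with a message that names the problem.
        fprintf(stderr, "fatal: backend is NULL\n");
        abort();
    }
}

int main(void) {
    struct backend cpu = { "CPU" };
    check_backend(&cpu);     // valid backend: passes the check
    printf("backend: %s\n", cpu.name);

    check_backend(NULL);     // aborts with a clear diagnostic
    return 0;
}
```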