* llama: use max. GPU layers by default, auto -fa
* ggml-backend: abort instead of segfault

| Name |
|---|
| cmake |
| include |
| src |
| .gitignore |
| CMakeLists.txt |