---
base_model:
- {base_model}
---
# {model_name} GGUF
The recommended way to run this model is with `llama-server`:
```sh
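# -c 0 sets the context size from the model's own metadata (the full trained context)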
llama-server -hf {namespace}/{model_name}-GGUF -c 0
```
Then open http://localhost:8080 in your browser to use the built-in web UI.
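
For a quick one-off generation without keeping a server running, `llama-cli` accepts the same `-hf` flag. A minimal sketch, where the prompt and token limit are illustrative values to adjust:
```sh
# generate up to 64 tokens from a short prompt, downloading the model if needed
llama-cli -hf {namespace}/{model_name}-GGUF -p "Hello" -n 64
```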