llama.cpp/models
Latest commit 36eed0c42c by Galunid, 2023-11-14 11:17:12 +01:00:
stablelm : StableLM support (#3586)
* Add support for stablelm-3b-4e1t
* Supports GPU offloading of (n-1) layers
.editorconfig
ggml-vocab-aquila.gguf
ggml-vocab-baichuan.gguf
ggml-vocab-falcon.gguf
ggml-vocab-gpt-neox.gguf
ggml-vocab-llama.gguf
ggml-vocab-mpt.gguf
ggml-vocab-refact.gguf
ggml-vocab-stablelm-3b-4e1t.gguf
ggml-vocab-starcoder.gguf