* Add nemotron GGUF conversion & inference support
* Fix formatting issues
* Remove unnecessary write_tensors()
* Update convert_hf_to_gguf.py
  Co-authored-by: compilade <git@compilade.net>
* Update src/llama.cpp
  Co-authored-by: compilade <git@compilade.net>
* Address comments by @compilade
* Replace ggml_mul_mat() -> llm_build_lora_mm()
* Remove mutable variable
* Use for bias tensors
* Cover corner case for rope_scaling not in config.json

---------

Co-authored-by: compilade <git@compilade.net>
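The last item above concerns checkpoints whose config.json has no `rope_scaling` entry (or sets it to null). A minimal sketch of that kind of guard, assuming a hypothetical helper name `load_rope_scaling` rather than the actual code added to convert_hf_to_gguf.py:

```python
import json
from pathlib import Path


def load_rope_scaling(model_dir: str) -> dict:
    """Return the rope_scaling block of a Hugging Face config.json.

    Some checkpoints omit the key entirely or set it to null, so both
    cases fall back to an empty dict instead of raising a KeyError.
    """
    config = json.loads((Path(model_dir) / "config.json").read_text())
    # .get() covers a missing key; "or {}" covers an explicit null value.
    return config.get("rope_scaling") or {}
```

With a guard like this, downstream conversion code can read, e.g., `load_rope_scaling(model_dir).get("factor", 1.0)` without special-casing models that predate the field.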
| Name |
|---|
| CMakeLists.txt |
| llama-grammar.cpp |
| llama-grammar.h |
| llama-impl.h |
| llama-sampling.cpp |
| llama-sampling.h |
| llama-vocab.cpp |
| llama-vocab.h |
| llama.cpp |
| unicode-data.cpp |
| unicode-data.h |
| unicode.cpp |
| unicode.h |