* cleanup : hoist mxfp soa functions
* fix: CI failures — CUDA __device__ init, Metal MXFP supports_op, SoA test assert
Three fixes for CI failures:
1. Remove <cmath> from CUDA/HIP/MUSA section of ggml-common.h — the include
causes NAN/INFINITY to become non-constexpr, breaking __device__ static
table initialization for the MXFP LUTs.
2. Add MXFP type guards to Metal's supports_op: MXFP8/MXFP6 have no Metal
shaders yet (reject all ops), MXFP4 has AoS shaders (MUL_MAT, GET_ROWS)
but no SoA/flash attention support yet (reject FLASH_ATTN_EXT, SET_ROWS).
3. Replace strict assert in test-backend-ops init_tensor_mxfp_soa with a
conditional fallback — when ne2 is not divisible by heads_per_region,
fall back to per-head SoA init instead of crashing.
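A minimal sketch of the fallback in item 3 (function and variable names here are illustrative, not the actual test-backend-ops code):

```cpp
#include <cstdint>

// Hypothetical sketch: choose the SoA grouping for tensor init, falling back to
// per-head init when ne2 does not divide evenly into the requested grouping,
// instead of asserting.
static int64_t soa_heads_per_region(int64_t ne2, int64_t requested) {
    if (requested > 0 && ne2 % requested == 0) {
        return requested;   // multi-head SoA regions
    }
    return 1;               // fallback: per-head SoA init, no crash
}
```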
* fix : correct guard for mxfp cpu dequant functions
* fix: CUDA MXFP LUT init and MXFP flash attention SoA test layout
- Add per-platform GGML_TABLE_NAN/GGML_TABLE_INFINITY macros for MXFP
LUTs — uses __uint_as_float on CUDA to avoid MSVC non-constexpr INFINITY
- Fix init_tensor_mxfp_soa to detect multihead SoA from tensor strides,
matching the KV cache layout for permuted flash attention tests
* fix: CUDA MXFP LUT init — use __builtin_nanf/__builtin_inff for constexpr device tables
CUDA/HIP/MUSA __device__ static tables require constexpr initializers.
Standard NAN/INFINITY macros may expand to non-constexpr expressions
(e.g. MSVC: (float)(1e+300), nvcc: __uint_as_float is not constexpr
for static init). Previous fix attempted __uint_as_float for nvcc and
__builtin_bit_cast for clang — neither worked universally.
Use __builtin_nanf("") and __builtin_inff() which are constexpr on
all target compilers (nvcc, clang for HIP/MUSA, GCC, MSVC). Define
once before the platform #if chain instead of per-platform copies.
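For illustration, a minimal sketch of why the builtins matter (the table below is made up; the real MXFP LUTs live in ggml-common.h):

```cpp
// On GCC, Clang and nvcc, __builtin_nanf("") and __builtin_inff() are constant
// expressions, so they can initialize a CUDA __device__ static table, whereas the
// NAN/INFINITY macros from <cmath> may expand to non-constexpr expressions.
// (MSVC lacks these builtins; see the follow-up commits below.)
#if defined(__CUDACC__)
#define EXAMPLE_TABLE_ATTR static const __device__   // hypothetical attribute macro
#else
#define EXAMPLE_TABLE_ATTR static const
#endif

// Illustrative 4-entry LUT (not the real MXFP table): last entries are +inf and NaN.
EXAMPLE_TABLE_ATTR float example_lut[4] = {
    0.0f, 1.5f, __builtin_inff(), __builtin_nanf(""),
};
```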
* fix: correct E5M2 LUT precision and add converter-vs-LUT validation tests
The kvalues_mxfp8_e5m2 LUT had 50 values with insufficient decimal
precision, causing bitwise mismatches against the IEEE-754 element
converter. Regenerated from ggml_mxfp_fp8_e5m2_to_float() with %.9e
precision for exact float round-trip on all 256 entries.
Also consolidates GGML_TABLE_NAN/GGML_TABLE_INFINITY into a single
definition using __builtin_nanf/__builtin_inff (constexpr on all
target compilers), and adds LUT validation tests to test-quantize-fns
that verify all 5 MXFP element converters match their canonical LUT
values (FP4 E2M1: 16, FP6 E2M3: 64, FP6 E3M2: 64, FP8 E4M3: 256,
FP8 E5M2: 256 — 656 total values verified).
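As a sketch of what such a validation loop can look like (the converter and table names follow the commit message; the exact ggml signatures are assumptions):

```cpp
#include <cstdint>
#include <cstring>
#include <cstdio>

// Hypothetical check: every FP8 E5M2 bit pattern must decode, via the element
// converter, to exactly the value stored in the canonical 256-entry LUT.
static bool validate_e5m2_lut(float (*to_float)(uint8_t), const float * lut) {
    for (int i = 0; i < 256; ++i) {
        const float a = to_float((uint8_t) i);
        const float b = lut[i];
        // compare bit patterns so NaN entries and -0.0f are matched exactly
        uint32_t ua, ub;
        memcpy(&ua, &a, sizeof(ua));
        memcpy(&ub, &b, sizeof(ub));
        if (ua != ub) {
            fprintf(stderr, "E5M2 mismatch at %d: %.9e vs %.9e\n", i, a, b);
            return false;
        }
    }
    return true;
}
```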
* fix: MSVC compat for GGML_TABLE_NAN/INFINITY — use builtins only on GCC/Clang/nvcc
MSVC does not support __builtin_nanf/__builtin_inff. Use standard
NAN/INFINITY macros on MSVC (which work for regular static tables),
and compiler builtins only on GCC/Clang/nvcc (needed for CUDA
__device__ table constexpr initialization).
* fix: handle nvcc+MSVC host — check __CUDACC__ before _MSC_VER for NAN/INF macros
When nvcc uses MSVC as the host compiler, both _MSC_VER and __CUDACC__
are defined. The previous fix checked _MSC_VER first, giving nvcc the
MSVC NAN/INFINITY macros which are not constexpr for __device__ tables.
Add __CUDACC__ exclusion so nvcc gets __builtin_nanf/__builtin_inff.
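The resulting ordering might look roughly like this (macro names taken from the commit messages; treat it as a sketch, not the exact ggml-common.h code):

```cpp
#include <math.h>   // NAN / INFINITY for the non-builtin fallback

// Check __CUDACC__ (and the other builtin-capable compilers) before _MSC_VER:
// with nvcc + an MSVC host both __CUDACC__ and _MSC_VER are defined, and the
// __device__ tables still need the constexpr-friendly builtins.
#if defined(__CUDACC__) || defined(__GNUC__) || defined(__clang__)
#define GGML_TABLE_NAN      __builtin_nanf("")
#define GGML_TABLE_INFINITY __builtin_inff()
#else
// plain MSVC (no nvcc): the standard macros work for regular static tables
#define GGML_TABLE_NAN      NAN
#define GGML_TABLE_INFINITY INFINITY
#endif
```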
* cleanup: remove AoS MXFP6/MXFP8 dequant code — these types are KV-cache-only (SoA)
MXFP6 (E2M3) and MXFP8 (E4M3) exist only for KV cache flash attention,
which uses SoA (Struct-of-Arrays) layout. The AoS dequant functions
(NEON, AVX2, CPU dispatch, generic wrappers) were incorrectly added
and are dead code — no model stores weights in these formats.
Removed:
- AoS NEON dequant: dequantize_row_mxfp{6,8}_neon, _cpu dispatch
- AoS AVX2 dequant: dequantize_row_mxfp{6,8}_avx2, _cpu dispatch
- AoS generic wrappers: dequantize_row_mxfp{6,8}_cpu_generic
- AoS fallback defines in arch-fallback.h
- CPU traits .to_float entries for MXFP6/MXFP8
- MXFP6/MXFP8 from all_types[] in test-backend-ops (no AoS tests)
Kept (correct SoA code):
- All *_soa_* functions (NEON, AVX2, generic, dispatch)
- CPU traits .from_float_soa / .to_float_soa
- Flash attention and SET_ROWS Hadamard test cases
- Scalar reference dequant in ggml-quants.c (test-quantize-fns roundtrip)
- MXFP4 AoS code (upstream model weight support, untouched)
Fixes ARM64 CI failure: GET_ROWS(mxfp6_e2m3) was testing dead AoS code
that had a NEON bug. The test no longer runs because the type is
correctly excluded from AoS test paths.
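To make the AoS/SoA distinction concrete, a rough sketch (these structs are illustrative only; the real block definitions are in ggml-common.h):

```cpp
#include <cstdint>

// AoS (Array-of-Structs): each 32-element block carries its own E8M0 scale,
// interleaved with the packed elements -- the layout used for model weights
// (kept for MXFP4).
struct block_aos_example {
    uint8_t e;          // shared E8M0 scale for this block
    uint8_t qs[16];     // 32 packed 4-bit elements
};

// SoA (Struct-of-Arrays): for a row of n blocks, all scales are stored
// contiguously, followed by all packed elements -- the layout the KV cache /
// flash attention path uses, and the only layout MXFP6/MXFP8 need.
struct row_soa_example {
    uint8_t * e;        // n block scales, contiguous
    uint8_t * qs;       // n * 16 packed element bytes, contiguous
};
```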
* test: guard all MXFP types must have SoA traits for flash attention
All MXFP flash attention uses SoA layout exclusively. Test validates:
- ALL MXFP types (MXFP4, MXFP6, MXFP8) have from_float_soa and to_float_soa
- MXFP6/MXFP8 (KV-cache-only) do NOT have AoS CPU to_float
Prevents regression: if someone adds AoS dequant back for MXFP6/MXFP8,
or removes SoA traits from any MXFP type, CI will catch it.
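A sketch of what that guard might look like (the trait view and accessor below are illustrative; the SoA trait fields exist only in this PR):

```cpp
#include <cassert>

// Hypothetical guard: every MXFP type used by flash attention must provide SoA
// converters, and the KV-cache-only types must not expose an AoS to_float path.
struct mxfp_traits_view {
    void * from_float_soa;
    void * to_float_soa;
    void * to_float_aos;
};

static void check_mxfp_traits(const mxfp_traits_view & t, bool kv_cache_only) {
    assert(t.from_float_soa != nullptr);   // SoA quantize must exist
    assert(t.to_float_soa   != nullptr);   // SoA dequantize must exist
    if (kv_cache_only) {
        assert(t.to_float_aos == nullptr); // MXFP6/MXFP8: no AoS dequant
    }
}
```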
* test: add Hadamard, SoA cross-check, E8M0, and layout offset tests
* test: add MXFP converter edge cases, FP6 packing, E8M0 known-answer tests
Add comprehensive tests to catch the bugs backend implementers hit most:
- Element converter edge cases: subnormals, max finite, saturation, NaN, sign
- FP6 pack/unpack exhaustive round-trip with known-answer byte verification
- E8M0 known-answer decode + HALF vs FULL scale distinction
- E8M0 rounding boundary at sqrt(2) threshold (catches floor-only bugs; see the sketch after this list)
- Converter exhaustive round-trip: quantize(dequantize(i))==i for all formats
- Consolidate duplicate SoA switches into single table in test-backend-ops
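Regarding the E8M0 rounding-boundary item above, a simplified sketch of the rule being tested (bias and saturation handling are assumptions; the real encoder treats zero and overflow separately):

```cpp
#include <cmath>
#include <cstdint>

// E8M0 stores only a power-of-two scale, so quantizing a positive scale s means
// picking the exponent e with 2^e nearest to s in log space. Rounding log2(s)
// to the nearest integer flips from e to e+1 exactly at s = sqrt(2) * 2^e; a
// floor-only implementation gets every scale in [sqrt(2)*2^e, 2^(e+1)) wrong.
static uint8_t e8m0_from_scale_example(float s) {
    const int e = (int) lroundf(log2f(s)); // round-to-nearest in log space
    int biased = e + 127;                  // assumed FP32-style bias
    if (biased < 0)   biased = 0;
    if (biased > 254) biased = 254;
    return (uint8_t) biased;
}
```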
* test: add AoS/SoA cross-check, Hadamard pipeline, format spec, and mxfp_rmse
- MXFP4 AoS vs SoA cross-check: two independent code paths, bitwise match
- Full Hadamard pipeline roundtrip: H→quantize→dequant→H for all 3 types
- mxfp_rmse helper: computes sqrt(sum/n), with named pipeline constants
- Block size consistency: verify QK_MXFP{4,8,6} == 32
- EMAX_OFFSET vs format max: validate constants produce valid E8M0
- Edge case LUT validation: expected_bits verified against canonical LUTs
- FP4 E2M1 exhaustive converter round-trip (16/16)
* cleanup: tighten MXFP test comments to match repo conventions
* fix: platform-specific NaN/Infinity for GPU device table initializers
FP8 E4M3/E5M2 LUTs contain NaN/Inf which cannot be constexpr-initialized
in __device__ tables on any CUDA/HIP/MUSA version. No GPU backend uses
these LUTs (they use converter functions instead), so guard them out of
GPU builds entirely. Simplify GGML_TABLE_NAN/INFINITY to CPU-only macros.
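The guard could look roughly like this (GGML_COMMON_DECL_CUDA and friends are the per-backend declaration macros in ggml-common.h, and GGML_TABLE_BEGIN/END are its table helpers; the exact condition is an assumption):

```cpp
// FP8 E4M3/E5M2 LUTs contain NaN/Inf entries that cannot be constexpr-initialized
// in __device__ tables, and the GPU backends use element converters instead of
// these LUTs, so compile the tables for CPU builds only.
#if !defined(GGML_COMMON_DECL_CUDA) && !defined(GGML_COMMON_DECL_HIP) && !defined(GGML_COMMON_DECL_MUSA)
GGML_TABLE_BEGIN(float, kvalues_mxfp8_e5m2, 256)
    // ... 256 entries, regenerated from ggml_mxfp_fp8_e5m2_to_float() at %.9e precision ...
GGML_TABLE_END()
#endif
```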
llama.cpp
LLM inference in C/C++
Recent API changes
Hot topics
- guide : using the new WebUI of llama.cpp
- guide : running gpt-oss with llama.cpp
- [FEEDBACK] Better packaging for llama.cpp to support downstream consumers 🤗
- Support for the gpt-oss model with native MXFP4 format has been added | PR | Collaboration with NVIDIA | Comment
- Multimodal support arrived in llama-server: #12898 | documentation
- VS Code extension for FIM completions: https://github.com/ggml-org/llama.vscode
- Vim/Neovim plugin for FIM completions: https://github.com/ggml-org/llama.vim
- Hugging Face Inference Endpoints now support GGUF out of the box! https://github.com/ggml-org/llama.cpp/discussions/9669
- Hugging Face GGUF editor: discussion | tool
Quick start
Getting started with llama.cpp is straightforward. Here are several ways to install it on your machine:
- Install llama.cpp using brew, nix or winget
- Run with Docker - see our Docker documentation
- Download pre-built binaries from the releases page
- Build from source by cloning this repository - check out our build guide
Once installed, you'll need a model to work with. Head to the Obtaining and quantizing models section to learn more.
Example command:
# Use a local model file
llama-cli -m my_model.gguf
# Or download and run a model directly from Hugging Face
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF
# Launch OpenAI-compatible API server
llama-server -hf ggml-org/gemma-3-1b-it-GGUF
Description
The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide
range of hardware - locally and in the cloud.
- Plain C/C++ implementation without any dependencies
- Apple silicon is a first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
- AVX, AVX2, AVX512 and AMX support for x86 architectures
- RVV, ZVFH, ZFH, ZICBOP and ZIHINTPAUSE support for RISC-V architectures
- 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use
- Custom CUDA kernels for running LLMs on NVIDIA GPUs (support for AMD GPUs via HIP and Moore Threads GPUs via MUSA)
- Vulkan and SYCL backend support
- CPU+GPU hybrid inference to partially accelerate models larger than the total VRAM capacity
The llama.cpp project is the main playground for developing new features for the ggml library.
Models
Typically finetunes of the base models below are supported as well.
Instructions for adding support for new models: HOWTO-add-model.md
Text-only
- LLaMA 🦙
- LLaMA 2 🦙🦙
- LLaMA 3 🦙🦙🦙
- Mistral 7B
- Mixtral MoE
- DBRX
- Jamba
- Falcon
- Chinese LLaMA / Alpaca and Chinese LLaMA-2 / Alpaca-2
- Vigogne (French)
- BERT
- Koala
- Baichuan 1 & 2 + derivations
- Aquila 1 & 2
- Starcoder models
- Refact
- MPT
- Bloom
- Yi models
- StableLM models
- Deepseek models
- Qwen models
- PLaMo-13B
- Phi models
- PhiMoE
- GPT-2
- Orion 14B
- InternLM2
- CodeShell
- Gemma
- Mamba
- Grok-1
- Xverse
- Command-R models
- SEA-LION
- GritLM-7B + GritLM-8x7B
- OLMo
- OLMo 2
- OLMoE
- Granite models
- GPT-NeoX + Pythia
- Snowflake-Arctic MoE
- Smaug
- Poro 34B
- Bitnet b1.58 models
- Flan T5
- Open Elm models
- ChatGLM3-6b + ChatGLM4-9b + GLMEdge-1.5b + GLMEdge-4b
- GLM-4-0414
- SmolLM
- EXAONE-3.0-7.8B-Instruct
- FalconMamba Models
- Jais
- Bielik-11B-v2.3
- RWKV-7
- RWKV-6
- QRWKV-6
- GigaChat-20B-A3B
- Trillion-7B-preview
- Ling models
- LFM2 models
- Hunyuan models
- BailingMoeV2 (Ring/Ling 2.0) models
Multimodal
Bindings
- Python: ddh0/easy-llama
- Python: abetlen/llama-cpp-python
- Go: go-skynet/go-llama.cpp
- Node.js: withcatai/node-llama-cpp
- JS/TS (llama.cpp server client): lgrammel/modelfusion
- JS/TS (Programmable Prompt Engine CLI): offline-ai/cli
- JavaScript/Wasm (works in browser): tangledgroup/llama-cpp-wasm
- Typescript/Wasm (nicer API, available on npm): ngxson/wllama
- Ruby: yoshoku/llama_cpp.rb
- Rust (more features): edgenai/llama_cpp-rs
- Rust (nicer API): mdrokz/rust-llama.cpp
- Rust (more direct bindings): utilityai/llama-cpp-rs
- Rust (automated build from crates.io): ShelbyJenkins/llm_client
- C#/.NET: SciSharp/LLamaSharp
- C#/VB.NET (more features - community license): LM-Kit.NET
- Scala 3: donderom/llm4s
- Clojure: phronmophobic/llama.clj
- React Native: mybigday/llama.rn
- Java: kherud/java-llama.cpp
- Java: QuasarByte/llama-cpp-jna
- Zig: deins/llama.cpp.zig
- Flutter/Dart: netdur/llama_cpp_dart
- Flutter: xuegao-tzx/Fllama
- PHP (API bindings and features built on top of llama.cpp): distantmagic/resonance (more info)
- Guile Scheme: guile_llama_cpp
- Swift srgtuszy/llama-cpp-swift
- Swift ShenghaiWang/SwiftLlama
- Delphi Embarcadero/llama-cpp-delphi
- Go (no CGo needed): hybridgroup/yzma
- Android: llama.android
UIs
(to have a project listed here, it should clearly state that it depends on llama.cpp)
- AI Sublime Text plugin (MIT)
- BonzAI App (proprietary)
- cztomsik/ava (MIT)
- Dot (GPL)
- eva (MIT)
- iohub/collama (Apache-2.0)
- janhq/jan (AGPL)
- johnbean393/Sidekick (MIT)
- KanTV (Apache-2.0)
- KodiBot (GPL)
- llama.vim (MIT)
- LARS (AGPL)
- Llama Assistant (GPL)
- LlamaLib (Apache-2.0)
- LLMFarm (MIT)
- LLMUnity (MIT)
- LMStudio (proprietary)
- LocalAI (MIT)
- LostRuins/koboldcpp (AGPL)
- MindMac (proprietary)
- MindWorkAI/AI-Studio (FSL-1.1-MIT)
- Mobile-Artificial-Intelligence/maid (MIT)
- Mozilla-Ocho/llamafile (Apache-2.0)
- nat/openplayground (MIT)
- nomic-ai/gpt4all (MIT)
- ollama/ollama (MIT)
- oobabooga/text-generation-webui (AGPL)
- PocketPal AI (MIT)
- psugihara/FreeChat (MIT)
- ptsochantaris/emeltal (MIT)
- pythops/tenere (AGPL)
- ramalama (MIT)
- semperai/amica (MIT)
- withcatai/catai (MIT)
- Autopen (GPL)
Tools
- akx/ggify – download PyTorch models from HuggingFace Hub and convert them to GGML
- akx/ollama-dl – download models from the Ollama library to be used directly with llama.cpp
- crashr/gppm – launch llama.cpp instances utilizing NVIDIA Tesla P40 or P100 GPUs with reduced idle power consumption
- gpustack/gguf-parser - review/check the GGUF file and estimate the memory usage
- Styled Lines (proprietary licensed, async wrapper of inference part for game development in Unity3d with pre-built Mobile and Web platform wrappers and a model example)
- unslothai/unsloth – 🦥 exports/saves fine-tuned and trained models to GGUF (Apache-2.0)
Infrastructure
- Paddler - Open-source LLMOps platform for hosting and scaling AI in your own infrastructure
- GPUStack - Manage GPU clusters for running LLMs
- llama_cpp_canister - llama.cpp as a smart contract on the Internet Computer, using WebAssembly
- llama-swap - transparent proxy that adds automatic model switching with llama-server
- Kalavai - Crowdsource end to end LLM deployment at any scale
- llmaz - ☸️ Easy, advanced inference platform for large language models on Kubernetes.
- LLMKube - Kubernetes operator for llama.cpp with multi-GPU and Apple Silicon Metal support
Games
- Lucy's Labyrinth - A simple maze game where agents controlled by an AI model will try to trick you.
Supported backends
| Backend | Target devices |
|---|---|
| Metal | Apple Silicon |
| BLAS | All |
| BLIS | All |
| SYCL | Intel and Nvidia GPU |
| OpenVINO [In Progress] | Intel CPUs, GPUs, and NPUs |
| MUSA | Moore Threads GPU |
| CUDA | Nvidia GPU |
| HIP | AMD GPU |
| ZenDNN | AMD CPU |
| Vulkan | GPU |
| CANN | Ascend NPU |
| OpenCL | Adreno GPU |
| IBM zDNN | IBM Z & LinuxONE |
| WebGPU [In Progress] | All |
| RPC | All |
| Hexagon [In Progress] | Snapdragon |
| VirtGPU | VirtGPU APIR |
Obtaining and quantizing models
The Hugging Face platform hosts a number of LLMs compatible with llama.cpp:
You can either manually download the GGUF file or directly use any llama.cpp-compatible models from Hugging Face or other model hosting sites, such as ModelScope, by using this CLI argument: -hf <user>/<model>[:quant]. For example:
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF
By default, the CLI downloads from Hugging Face; you can switch to another endpoint with the MODEL_ENDPOINT environment variable. For example, to download model checkpoints from ModelScope or another model-sharing community, set MODEL_ENDPOINT=https://www.modelscope.cn/.
After downloading a model, use the CLI tools to run it locally - see below.
llama.cpp requires the model to be stored in the GGUF file format. Models in other data formats can be converted to GGUF using the convert_*.py Python scripts in this repo.
The Hugging Face platform provides a variety of online tools for converting, quantizing and hosting models with llama.cpp:
- Use the GGUF-my-repo space to convert to GGUF format and quantize model weights to smaller sizes
- Use the GGUF-my-LoRA space to convert LoRA adapters to GGUF format (more info: https://github.com/ggml-org/llama.cpp/discussions/10123)
- Use the GGUF-editor space to edit GGUF meta data in the browser (more info: https://github.com/ggml-org/llama.cpp/discussions/9268)
- Use the Inference Endpoints to directly host
llama.cppin the cloud (more info: https://github.com/ggml-org/llama.cpp/discussions/9669)
To learn more about model quantization, read this documentation
llama-cli
A CLI tool for accessing and experimenting with most of llama.cpp's functionality.
- Run in conversation mode
  Models with a built-in chat template will automatically activate conversation mode. If this doesn't occur, you can manually enable it by adding -cnv and specifying a suitable chat template with --chat-template NAME
  llama-cli -m model.gguf
  # > hi, who are you?
  # Hi there! I'm your helpful assistant! I'm an AI-powered chatbot designed to assist and provide information to users like you. I'm here to help answer your questions, provide guidance, and offer support on a wide range of topics. I'm a friendly and knowledgeable AI, and I'm always happy to help with anything you need. What's on your mind, and how can I assist you today?
  #
  # > what is 1+1?
  # Easy peasy! The answer to 1+1 is... 2!
- Run in conversation mode with custom chat template
  # use the "chatml" template (use -h to see the list of supported templates)
  llama-cli -m model.gguf -cnv --chat-template chatml
  # use a custom template
  llama-cli -m model.gguf -cnv --in-prefix 'User: ' --reverse-prompt 'User:'
- Constrain the output with a custom grammar
  llama-cli -m model.gguf -n 256 --grammar-file grammars/json.gbnf -p 'Request: schedule a call at 8pm; Command:'
  # {"appointmentTime": "8pm", "appointmentDetails": "schedule a a call"}
  The grammars/ folder contains a handful of sample grammars. To write your own, check out the GBNF Guide.
For authoring more complex JSON grammars, check out https://grammar.intrinsiclabs.ai/
llama-server
A lightweight, OpenAI API compatible, HTTP server for serving LLMs.
- Start a local HTTP server with default configuration on port 8080
  llama-server -m model.gguf --port 8080
  # Basic web UI can be accessed via browser: http://localhost:8080
  # Chat completion endpoint: http://localhost:8080/v1/chat/completions
- Support multiple users and parallel decoding
  # up to 4 concurrent requests, each with 4096 max context
  llama-server -m model.gguf -c 16384 -np 4
- Enable speculative decoding
  # the draft.gguf model should be a small variant of the target model.gguf
  llama-server -m model.gguf -md draft.gguf
- Serve an embedding model
  # use the /embedding endpoint
  llama-server -m model.gguf --embedding --pooling cls -ub 8192
- Serve a reranking model
  # use the /reranking endpoint
  llama-server -m model.gguf --reranking
- Constrain all outputs with a grammar
  # custom grammar
  llama-server -m model.gguf --grammar-file grammar.gbnf
  # JSON
  llama-server -m model.gguf --grammar-file grammars/json.gbnf
llama-perplexity
A tool for measuring the perplexity (and other quality metrics) of a model over a given text.
- Measure the perplexity over a text file
  llama-perplexity -m model.gguf -f file.txt
  # [1]15.2701,[2]5.4007,[3]5.3073,[4]6.2965,[5]5.8940,[6]5.6096,[7]5.7942,[8]4.9297, ...
  # Final estimate: PPL = 5.4007 +/- 0.67339
- Measure KL divergence
  # TODO
llama-bench
Benchmark the performance of the inference for various parameters.
- Run default benchmark
  llama-bench -m model.gguf
  # Output:
  # | model           |       size |     params | backend    | threads |  test |             t/s |
  # | --------------- | ---------: | ---------: | ---------- | ------: | ----: | --------------: |
  # | qwen2 1.5B Q4_0 | 885.97 MiB |     1.54 B | Metal,BLAS |      16 | pp512 | 5765.41 ± 20.55 |
  # | qwen2 1.5B Q4_0 | 885.97 MiB |     1.54 B | Metal,BLAS |      16 | tg128 |   197.71 ± 0.81 |
  #
  # build: 3e0ba0e60 (4229)
llama-simple
A minimal example for implementing apps with llama.cpp. Useful for developers.
- Basic text completion
  llama-simple -m model.gguf
  # Hello my name is Kaitlyn and I am a 16 year old girl. I am a junior in high school and I am currently taking a class called "The Art of
Contributing
- Contributors can open PRs
- Collaborators will be invited based on contributions
- Maintainers can push to branches in the llama.cpp repo and merge PRs into the master branch
- Any help with managing issues, PRs and projects is very appreciated!
- See good first issues for tasks suitable for first contributions
- Read the CONTRIBUTING.md for more information
- Make sure to read this: Inference at the edge
- A bit of backstory for those who are interested: Changelog podcast
Other documentation
Development documentation
Seminal papers and background on the models
If your issue is with model generation quality, then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:
- LLaMA:
- GPT-3
- GPT-3.5 / InstructGPT / ChatGPT:
XCFramework
The XCFramework is a precompiled version of the library for iOS, visionOS, tvOS, and macOS. It can be used in Swift projects without the need to compile the library from source. For example:
// swift-tools-version: 5.10
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription
let package = Package(
name: "MyLlamaPackage",
targets: [
.executableTarget(
name: "MyLlamaPackage",
dependencies: [
"LlamaFramework"
]),
.binaryTarget(
name: "LlamaFramework",
url: "https://github.com/ggml-org/llama.cpp/releases/download/b5046/llama-b5046-xcframework.zip",
checksum: "c19be78b5f00d8d29a25da41042cb7afa094cbf6280a225abe614b03b20029ab"
)
]
)
The above example uses an intermediate build b5046 of the library. You can use a different version by changing the URL and checksum.
Completions
Command-line completion is available for some environments.
Bash Completion
$ build/bin/llama-cli --completion-bash > ~/.llama-completion.bash
$ source ~/.llama-completion.bash
Optionally this can be added to your .bashrc or .bash_profile to load it
automatically. For example:
$ echo "source ~/.llama-completion.bash" >> ~/.bashrc
Dependencies
- yhirose/cpp-httplib - Single-header HTTP server, used by llama-server - MIT license
- stb-image - Single-header image format decoder, used by multimodal subsystem - Public domain
- nlohmann/json - Single-header JSON library, used by various tools/examples - MIT License
- miniaudio.h - Single-header audio format decoder, used by multimodal subsystem - Public domain
- subprocess.h - Single-header process launching solution for C and C++ - Public domain
