
Diffusion Text Generation

This directory contains implementations for Diffusion LLMs (DLLMs).

Parameters

The diffusion CLI supports various parameters to control the generation process:

Core Diffusion Parameters

  • --diffusion-steps: Number of diffusion steps (default: 256)
  • --diffusion-algorithm: Algorithm for token selection
    • 0: ORIGIN - Tokens are generated in a purely random order (https://arxiv.org/abs/2107.03006)
    • 1: ENTROPY_BASED - Entropy-based selection
    • 2: MARGIN_BASED - Margin-based selection
    • 3: RANDOM - Random selection
    • 4: CONFIDENCE_BASED - Confidence-based selection (default)
    • More documentation: https://github.com/DreamLM/Dream
  • --diffusion-visual: Enable live visualization during generation
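
The selection criteria above all rank masked positions by how "certain" the model is about them, so the most confident positions are unmasked first. A minimal sketch of the idea (illustrative only, not the llama.cpp implementation):

```python
import math

def selection_scores(probs, algorithm):
    """Score each masked position; a higher score means unmask earlier.
    `probs` is a list of per-position probability distributions.
    Illustrative sketch only -- not the llama.cpp implementation."""
    scores = []
    for p in probs:
        top = sorted(p, reverse=True)
        if algorithm == "confidence":   # highest top-1 probability first
            scores.append(top[0])
        elif algorithm == "margin":     # largest gap between top-1 and top-2
            scores.append(top[0] - top[1])
        elif algorithm == "entropy":    # lowest entropy first (negated so higher = better)
            scores.append(-sum(-x * math.log(x) for x in p if x > 0))
        else:
            raise ValueError(algorithm)
    return scores

# A peaked distribution outranks a flat one under every criterion:
peaked = [0.90, 0.05, 0.05]
flat   = [0.40, 0.35, 0.25]
for algo in ("confidence", "margin", "entropy"):
    s = selection_scores([peaked, flat], algo)
    assert s[0] > s[1]
```

ORIGIN and RANDOM skip this ranking entirely and pick positions at random.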

Scheduling Parameters

Choose one of the following scheduling methods:

Timestep-based scheduling:

  • --diffusion-eps: Epsilon value for timestep scheduling (e.g., 0.001)

Block-based scheduling:

  • --diffusion-block-length: Block size for block-based scheduling (e.g., 32)
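
The two methods differ in how they decide which masked positions each step touches. A timestep schedule sweeps a noise level from 1.0 down to the epsilon value across all steps, while a block schedule unmasks the sequence left to right in fixed-size chunks. A rough sketch of both ideas (an assumption about the schedules, not the exact llama.cpp code):

```python
def timestep_schedule(steps, eps):
    """Timestep-based: linearly spaced noise levels from 1.0 down to `eps`;
    each step unmasks the fraction of tokens implied by the drop t_i -> t_{i+1}.
    Sketch of the idea only."""
    return [1.0 + (eps - 1.0) * i / steps for i in range(steps + 1)]

def block_schedule(seq_len, block_length):
    """Block-based: positions are unmasked one block at a time, left to
    right, in chunks of `block_length` (the last block may be shorter)."""
    return [(start, min(start + block_length, seq_len))
            for start in range(0, seq_len, block_length)]

ts = timestep_schedule(4, 0.001)       # starts at 1.0, ends at eps
blocks = block_schedule(100, 32)       # four blocks: 32 + 32 + 32 + 4 positions
```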

Sampling Parameters

  • --temp: Temperature for sampling (0.0 = greedy/deterministic, higher = more random)
  • --top-k: Top-k filtering for sampling
  • --top-p: Top-p (nucleus) filtering for sampling
  • --seed: Random seed for reproducibility
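
These interact in the standard way: zero temperature means the argmax token is always taken, while a positive temperature samples from the softmax distribution after the top-k and top-p filters narrow the candidate set. A generic sketch of those filters (not the llama.cpp sampler itself):

```python
import math, random

def sample_token(logits, temp=1.0, top_k=0, top_p=1.0, seed=None):
    """Greedy when temp == 0; otherwise softmax with temperature,
    restricted first by top-k, then by top-p (nucleus) filtering.
    Generic sketch of the standard filters."""
    if temp == 0.0:
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = random.Random(seed)
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    if top_k > 0:
        order = order[:top_k]               # keep only the k most likely tokens
    z = [math.exp(logits[i] / temp) for i in order]
    total = sum(z)
    probs = [x / total for x in z]
    kept, mass = [], 0.0                    # nucleus: smallest prefix reaching top_p
    for idx, p in zip(order, probs):
        kept.append((idx, p))
        mass += p
        if mass >= top_p:
            break
    r = rng.random() * mass                 # sample within the surviving mass
    for idx, p in kept:
        r -= p
        if r <= 0:
            return idx
    return kept[-1][0]

assert sample_token([0.1, 2.0, -1.0], temp=0.0) == 1   # greedy picks argmax
```

Passing the same seed makes a sampled run reproducible, which is what --seed is for.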

Model Parameters

  • -m: Path to the GGUF model file
  • -p: Input prompt text
  • -ub: Maximum sequence length (ubatch size)
  • -c: Context size
  • -b: Batch size

Examples

Dream architecture:

llama-diffusion-cli -m dream7b.gguf -p "write code to train MNIST in pytorch" -ub 512 --diffusion-eps 0.001 --diffusion-algorithm 3 --diffusion-steps 256 --diffusion-visual

LLaDA architecture:

llama-diffusion-cli -m llada-8b.gguf -p "write code to train MNIST in pytorch" -ub 512 --diffusion-block-length 32 --diffusion-steps 256 --diffusion-visual

RND1 architecture:

llama-diffusion-cli -m RND1-Base-0910.gguf -p "write code to train MNIST in pytorch" -ub 512 --diffusion-algorithm 1 --diffusion-steps 256 --diffusion-visual --temp 0.5 --diffusion-eps 0.001