llama.cpp/examples/batched.swift
This is a Swift clone of `examples/batched`.

$ make
$ ./llama-batched-swift MODEL_PATH [PROMPT] [PARALLEL]
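For example, an invocation might look like the following (the model path, prompt, and parallel count are illustrative; any local GGUF model file works):

```shell
# Build the example with make, then decode 4 sequences in parallel
# from the same prompt. The model path below is a placeholder --
# substitute the path to a GGUF model you have downloaded.
make
./llama-batched-swift ./models/model.gguf "Hello my name is" 4
```

With `PARALLEL` set to 4, the example generates four independent continuations of the prompt in a single batched decode loop, mirroring the behavior of the C++ `examples/batched` program.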