# llama.cpp/examples/batched.swift

This is a Swift clone of `examples/batched`.

```console
$ make
$ ./llama-batched-swift MODEL_PATH [PROMPT] [PARALLEL]
```