Commit Graph

4801 Commits

Georgi Gerganov f95b04a21c
model : fix order kvq -> qkv
ggml-ci
2025-02-19 18:52:20 +02:00
Georgi Gerganov 2eacb4c1bf
graph : simplify attention api
ggml-ci
2025-02-19 18:43:49 +02:00
Georgi Gerganov e17e4b72d1
context : add llama_context_recurrent
ggml-ci
2025-02-19 16:07:27 +02:00
Georgi Gerganov 5f11a5502a
kv-cache : remove llama_kv_cache_i
2025-02-19 14:36:27 +02:00
Georgi Gerganov f5cedbcaaa
kv-cache : prepare for abstraction
ggml-ci
2025-02-18 21:28:58 +02:00
Georgi Gerganov 2bffc2d514
model : pass llama_graph_i as ptr
ggml-ci
2025-02-18 14:57:26 +02:00
Georgi Gerganov 9e50456e19
context : minor simplify
ggml-ci
2025-02-18 14:53:02 +02:00
Georgi Gerganov befe14f06f
llama : reorder encode/decode in sources
2025-02-18 14:47:53 +02:00
Georgi Gerganov bc6f187e9c
cont : use returned tensors from the graph build
ggml-ci
2025-02-18 14:24:17 +02:00
Georgi Gerganov 172f61690c
cont : return important tensors
ggml-ci
2025-02-18 13:48:43 +02:00
Georgi Gerganov c23590319a
graph : add llama_graph_result
ggml-ci
2025-02-18 13:48:21 +02:00
Georgi Gerganov f0d3ff2388
Merge branch 'master' into gg/llama-kv-cache
ggml-ci
2025-02-18 10:14:37 +02:00
Johannes Gäßler 73e2ed3ce3
CUDA: use async data loading for FlashAttention (#11894)
* CUDA: use async data loading for FlashAttention

---------

Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-02-17 14:03:24 +01:00
Eve f7b1116af1
update release requirements (#11897)
2025-02-17 12:20:23 +01:00
Antoine Viallon c4d29baf32
server : fix divide-by-zero in metrics reporting (#11915)
2025-02-17 11:25:12 +01:00
Rémy O 2eea03d86a
vulkan: implement several ops relevant for ggml_opt (#11769)
* vulkan: support memset_tensor

* vulkan: support GGML_OP_SUM

* vulkan: implement GGML_OP_ARGMAX

* vulkan: implement GGML_OP_SUB

* vulkan: implement GGML_OP_COUNT_EQUAL

* vulkan: implement GGML_OP_OPT_STEP_ADAMW

* vulkan: fix check_results RWKV_WKV6 crash and memory leaks

* vulkan: implement GGML_OP_REPEAT_BACK

* tests: remove invalid test-backend-ops REPEAT_BACK tests

* vulkan: fix COUNT_EQUAL memset using a fillBuffer command
2025-02-17 07:55:57 +01:00
Xuan-Son Nguyen 0f2bbe6564
server : bump httplib to 0.19.0 (#11908)
2025-02-16 17:11:22 +00:00
standby24x7 fe163d5bf3
common : Fix a typo in help (#11899)
This patch fixes a typo in the command help.
prefx -> prefix

Signed-off-by: Masanari Iida <standby24x7@gmail.com>
2025-02-16 10:51:13 +01:00
Xuan-Son Nguyen 818a340ea8
ci : fix (again) arm64 build fails (#11895)
* docker : attempt fixing arm64 build on ci

* qemu v7.0.0-28
2025-02-16 10:36:39 +01:00
Jeff Bolz bf42a23d0a
vulkan: support multi/vision rope, and noncontiguous rope (#11902)
2025-02-16 08:52:23 +01:00
Hale Chan c2ea16f260
metal : fix the crash caused by the lack of residency set support on Intel Macs. (#11904)
2025-02-16 08:50:26 +02:00
Johannes Gäßler 6dde178248
scripts: fix compare-llama-bench commit hash logic (#11891)
2025-02-15 20:23:22 +01:00
708-145 fc10c38ded
examples: fix typo in imatrix/README.md (#11884)
* simple typo fixed

* Update examples/imatrix/README.md

---------

Co-authored-by: Tobias Bergmann <tobias.bergmann@gmx.de>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-02-15 21:03:30 +02:00
Adrian Kretz 22885105a6
metal : optimize dequant q6_K kernel (#11892)
2025-02-15 20:39:20 +02:00
Georgi Gerganov c2cd24fbfd
readme : add notice about new package registry (#11890)
* readme : add notice about new package registry

* cont : fix whitespace
2025-02-15 20:29:56 +02:00
Georgi Gerganov 68ff663a04
repo : update links to new url (#11886)
* repo : update links to new url

ggml-ci

* cont : more urls

ggml-ci
2025-02-15 16:40:57 +02:00
Olivier Chafik f355229692
server: fix type promotion typo causing crashes w/ --jinja w/o tools (#11880)
2025-02-15 10:11:36 +00:00
Rémy O fc1b0d0936
vulkan: initial support for IQ1_S and IQ1_M quantizations (#11528)
* vulkan: initial support for IQ1_S and IQ1_M quantizations

* vulkan: define MMV kernels for IQ1 quantizations

* devops: increase timeout of Vulkan tests again

* vulkan: simplify ifdef for init_iq_shmem
2025-02-15 09:01:40 +01:00
Michał Moskal 89daa2564f
llguidance build fixes for Windows (#11664)
* setup windows linking for llguidance; thanks @phil-scott-78

* add build instructions for windows and update script link

* change VS Community link from DE to EN

* whitespace fix
2025-02-14 12:46:08 -08:00
lhez 300907b211
opencl: Fix rope and softmax (#11833)
* opencl: fix `ROPE`

* opencl: fix `SOFT_MAX`

* Add fp16 variant

* opencl: enforce subgroup size for `soft_max`
2025-02-14 12:12:23 -07:00
Georgi Gerganov 1d801d27b9
graph : update attn/kv_self names
2025-02-14 17:22:55 +02:00
Georgi Gerganov 828064564c
context : move common inputs to base class
ggml-ci
2025-02-14 16:48:21 +02:00
Diego Devesa 94b87f87b5
cuda : add ampere to the list of default architectures (#11870)
2025-02-14 15:33:52 +01:00
Georgi Gerganov d5e8e1a2ba
context : remove batch_manager
ggml-ci
2025-02-14 16:10:55 +02:00
Georgi Gerganov dbc2ec59b5
docker : drop to CUDA 12.4 (#11869)
* docker : drop to CUDA 12.4

* docker : update readme [no ci]
2025-02-14 14:48:40 +02:00
Daniel Bevenius 3d68f034da
llama : add completion for --chat-template-file (#11860)
This commit adds completion for `--chat-template-file`, enabling only
`.jinja` files to be displayed as completions.

Example usage:
```console
$ ./build/bin/llama-cli --chat-template-file models/templates/<TAB>
models/templates/CohereForAI-c4ai-command-r7b-12-2024-tool_use.jinja
models/templates/CohereForAI-c4ai-command-r-plus-tool_use.jinja
models/templates/deepseek-ai-DeepSeek-R1-Distill-Llama-8B.jinja
models/templates/deepseek-ai-DeepSeek-R1-Distill-Qwen-32B.jinja
models/templates/fireworks-ai-llama-3-firefunction-v2.jinja
models/templates/google-gemma-2-2b-it.jinja
models/templates/llama-cpp-deepseek-r1.jinja
models/templates/meetkai-functionary-medium-v3.1.jinja
models/templates/meetkai-functionary-medium-v3.2.jinja
models/templates/meta-llama-Llama-3.1-8B-Instruct.jinja
models/templates/meta-llama-Llama-3.2-3B-Instruct.jinja
models/templates/meta-llama-Llama-3.3-70B-Instruct.jinja
models/templates/microsoft-Phi-3.5-mini-instruct.jinja
models/templates/mistralai-Mistral-Nemo-Instruct-2407.jinja
models/templates/NousResearch-Hermes-2-Pro-Llama-3-8B-tool_use.jinja
models/templates/NousResearch-Hermes-3-Llama-3.1-8B-tool_use.jinja
models/templates/Qwen-Qwen2.5-7B-Instruct.jinja
```
This is not limited to the models/templates directory; it can be used
anywhere in the filesystem. The above is just an example.
2025-02-14 11:16:56 +01:00
Jinyang He 38e32eb6a0
ggml: optimize some vec dot functions for LoongArch ASX (#11842)
* Optimize ggml_vec_dot_q3_K_q8_K for LoongArch ASX

* Optimize ggml_vec_dot_q4_K_q8_K for LoongArch ASX

* Optimize ggml_vec_dot_q6_K_q8_K for LoongArch ASX

* Optimize ggml_vec_dot_q5_K_q8_K for LoongArch ASX

* Optimize ggml_vec_dot_q2_K_q8_K for LoongArch ASX

* Optimize mul_sum_i8_pairs_float for LoongArch ASX

* Optimize ggml_vec_dot_iq4_xs_q8_K for LoongArch ASX
2025-02-14 10:54:27 +02:00
Eve a4f011e8d0
vulkan: linux builds + small subgroup size fixes (#11767)
* mm subgroup size

* upload vulkan x86 builds
2025-02-14 02:59:40 +00:00
theraininsky a7b8ce2260
llama-bench : fix unexpected global variable initialization order issue (#11832)
* llama-bench : fix unexpected global variable initialization order issue

* Update examples/llama-bench/llama-bench.cpp

---------

Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-02-14 02:13:43 +01:00
Georgi Gerganov 04045bb842
readme : minor
2025-02-14 00:16:56 +02:00
Jeffrey Morgan 8a8c4ceb60
llamafile: use member variable instead of constant for iq4nlt (#11780)
2025-02-13 18:05:04 +01:00
Reza Rahemtola c1f958c038
server : (docs) Update wrong tool calling example (#11809)
Call updated to match the tool used in the output just below, following the example in https://github.com/ggerganov/llama.cpp/pull/9639
2025-02-13 17:22:44 +01:00
Georgi Gerganov 131743ff4f
context : abstract constructor and init
ggml-ci
2025-02-13 17:17:51 +02:00
Georgi Gerganov ed3cb55abe
context : abstract input
ggml-ci
2025-02-13 15:53:15 +02:00
Daniel Bevenius c48f630d1c
llama : add --completion-bash option (#11846)
This commit adds a new option `--completion-bash` to llama.cpp which
outputs a source-able bash completion script.

The motivation for this change is to provide a more user-friendly
experience for users who use the command-line interface of llama.cpp.

This is currently basic: all options are displayed for all llama
executables, but this can be improved in the future if needed.

Example usage:
```console
$ build/bin/llama-cli --completion-bash > ~/.llama-completion.bash
$ source ~/.llama-completion.bash

$ ./build/bin/llama-server --m<TAB>
--main-gpu         --mirostat         --mirostat-lr      --model            --multiline-input
--min-p            --mirostat-ent     --mlock            --model-url
```
2025-02-13 14:46:59 +01:00
Georgi Gerganov 107d1e2c32
context : move output functionality to base class
ggml-ci
2025-02-13 15:42:14 +02:00
R0CKSTAR bd6e55bfd3
musa: bump MUSA SDK version to rc3.1.1 (#11822)
* musa: Update MUSA SDK version to rc3.1.1

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* musa: Remove workaround in PR #10042

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

---------

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-02-13 13:28:18 +01:00
Georgi Gerganov e08f38df69
context : minor cleanup
ggml-ci
2025-02-13 12:50:53 +02:00
Georgi Gerganov f7c7757bab
context : abstract state read/write
ggml-ci
2025-02-13 12:37:28 +02:00
Georgi Gerganov 3a504d9a0b
llama : introduce llama_io interfaces
ggml-ci
2025-02-13 12:25:54 +02:00