Commit Graph

1297 Commits

Author SHA1 Message Date
slaren 1be2b8c19b
ggml : revert change to ggml_cpy, add ggml_cont_Nd instead (#3275)
ggml-ci
2023-09-20 16:12:51 +03:00
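The `ggml_cont_Nd` helpers fuse `ggml_cont` with a reshape, which is what the reverted `ggml_cpy` change had been used for. A minimal sketch, assuming illustrative tensor names and shapes (not taken from the commit):

```c
// make a permuted, non-contiguous view contiguous and give it an explicit
// 3-D shape in a single graph op, instead of overloading ggml_cpy for this
struct ggml_tensor * Kview = ggml_permute(ctx, K, 0, 2, 1, 3);  // non-contiguous view
struct ggml_tensor * Kcont = ggml_cont_3d(ctx, Kview, n_embd_head, n_head, n_tokens);
```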
Georgi Gerganov 2f3a46fccf
train : make KQ_pos memory buffer permanent via dummy scale op 2023-09-20 14:14:50 +03:00
Georgi Gerganov 54206962c7
llama : disable MPI for now
ggml-ci
2023-09-20 14:07:29 +03:00
slaren e04dc51988
ggml-cuda : add rope f16, restore performance with parallel decoding (#3272)
* ggml-cuda : add rope f16, restore performance

* offload KQ_mask with all models

* fix rope shift

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-20 14:00:28 +03:00
Georgi Gerganov db0fc2da06
simple : improve comments + free batch 2023-09-20 13:54:20 +03:00
Georgi Gerganov b377bf2266
simple : add parallel decoding support 2023-09-20 13:06:34 +03:00
Georgi Gerganov addae65fd4
llama : improve llama_batch API + simplify parallel example 2023-09-20 11:03:18 +03:00
Georgi Gerganov a1327c71c6
parallel : rename hot-plug to continuous-batching 2023-09-20 09:24:41 +03:00
Georgi Gerganov e1067efbfa
llama : fix n_kv to never become 0 2023-09-20 09:17:05 +03:00
Georgi Gerganov 7b7472ee26
parallel : minor 2023-09-20 00:35:10 +03:00
Georgi Gerganov 6028879f56
parallel : print misses on each request 2023-09-19 23:50:05 +03:00
Georgi Gerganov eed3fd4234
parallel : count cache misses 2023-09-19 23:47:47 +03:00
Georgi Gerganov 8a9aca37c1
parallel : remove questions with short answers 2023-09-19 23:34:30 +03:00
Georgi Gerganov 4b5f3cd6bf
parallel : process system prompt once + configurable parameters + llama API 2023-09-19 17:00:42 +03:00
Georgi Gerganov 82e20e9ba0
parallel : remove new line from prompt 2023-09-19 13:54:41 +03:00
Georgi Gerganov d37081ae5d
llama : silence KV cache errors 2023-09-19 13:42:59 +03:00
Georgi Gerganov 16090a5dde
parallel : fix sequence termination criteria 2023-09-19 13:29:29 +03:00
Georgi Gerganov 806d397c1a
parallel : try smaller batches when the KV cache is fragmented 2023-09-19 13:21:36 +03:00
Georgi Gerganov ddad227782
llama : fix cell_max logic + rename functions 2023-09-19 13:21:12 +03:00
Georgi Gerganov 36714e16d0
parallel : various improvements 2023-09-19 12:29:37 +03:00
Georgi Gerganov 467e307931
simple : fix token counting 2023-09-19 11:45:33 +03:00
Georgi Gerganov 25bd254089
make : add parallel to build + fix static functions in llama.cpp 2023-09-19 11:37:02 +03:00
slaren 7e2b9974d1
ggml-cuda : update rope implementation for parallel decoding (#3254)
* ggml-cuda : update rope implementation for parallel decoding

* better solution for p0 computation

* fix rope

* simpler rope implementation

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-19 11:31:36 +03:00
Georgi Gerganov daf4c6d360
llama : fix worst case graph build 2023-09-19 11:05:08 +03:00
Georgi Gerganov fa0e677820
llama : extend batch API to select which logits to output 2023-09-19 00:24:13 +03:00
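Each batch entry now carries a flag saying whether logits should be computed for it. A hedged sketch of the common case (field names follow the `llama_batch` struct as it stood on this branch; the prompt variables are illustrative):

```c
// request logits only for the final prompt token -- the only position that
// gets sampled -- instead of computing them for the whole batch
for (int i = 0; i < n_prompt; ++i) {
    batch.token [i] = prompt_tokens[i];
    batch.pos   [i] = i;
    batch.seq_id[i] = 0;  // single sequence in this sketch
    batch.logits[i] = 0;  // skip logits for intermediate tokens
}
batch.logits[n_prompt - 1] = 1;  // compute logits where we sample
batch.n_tokens = n_prompt;
```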
Georgi Gerganov 897caccdf4
fixes : speculative KV cache + llama worst-case graph 2023-09-18 22:32:28 +03:00
Georgi Gerganov 466b513851
parallel : disable hot-plug to avoid cache fragmentation 2023-09-18 21:34:20 +03:00
Georgi Gerganov 0161372b9a
parallel : example for serving multiple users in parallel 2023-09-18 20:37:28 +03:00
Georgi Gerganov 1f17ea631c
speculative : fix KV cache management 2023-09-18 19:01:20 +03:00
Georgi Gerganov 7c1bdd0e8a
llama : apply K-cache roping for Falcon and Baichuan 2023-09-18 18:26:05 +03:00
Georgi Gerganov 0cbf3bfef8
llama : add llama_kv_cache_shift_seq + no more context swaps 2023-09-18 18:10:43 +03:00
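Instead of re-evaluating half the context on overflow (the old context swap), the cells of a sequence can now be slid back in place. A hedged sketch; the discard policy is illustrative and several of these functions were renamed later in this series:

```c
// drop the oldest half of sequence 0, then shift the survivors' positions
// back so the sequence stays dense and generation continues without a swap
const int n_discard = n_past / 2;                          // illustrative policy
llama_kv_cache_rm_seq   (ctx, 0, 0,         n_discard);    // evict cells [0, n_discard)
llama_kv_cache_shift_seq(ctx, 0, n_discard, n_past, -n_discard);
```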
Georgi Gerganov 86c90e34f5
metal : disable concurrency optimization 2023-09-18 18:00:01 +03:00
Georgi Gerganov f015b26689
llama : more robust cell_max heuristic + wip shift 2023-09-18 17:15:58 +03:00
Georgi Gerganov 4d76d762ef
llama : extend llama_kv_cache API 2023-09-18 15:53:03 +03:00
Georgi Gerganov 6952a460b9
llama : add cell_max heuristic for more efficient kv_cache 2023-09-18 15:31:24 +03:00
Georgi Gerganov 9f42e75489
llama : add new llama_decode() API that works with llama_batch 2023-09-18 14:23:52 +03:00
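`llama_decode()` consumes a `llama_batch` instead of a bare token pointer plus `n_past`. A hedged sketch of the single-sequence case, using the `llama_batch_get_one()` convenience wrapper from the same branch (signatures changed again after this series; the thread count was still an explicit argument at this point):

```c
// wrap a plain token array into a single-sequence batch (positions start at
// n_past, sequence id 0) and evaluate it
if (llama_decode(ctx, llama_batch_get_one(tokens, n_tokens, n_past, 0), n_threads) != 0) {
    fprintf(stderr, "llama_decode() failed\n");
}
const float * logits = llama_get_logits(ctx);  // logits for the last evaluated token
```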
Georgi Gerganov 58bb5110ca
Merge branch 'master' into custom-attention-mask 2023-09-18 11:15:18 +03:00
Georgi Gerganov d29e76937c
llama : unified KV cache + batch inference API 2023-09-18 11:08:15 +03:00
Erik Scholz 7ddf185537
ci : switch cudatoolkit install on windows to networked (#3236) 2023-09-18 02:21:47 +02:00
Johannes Gäßler ee66942d7e
CUDA: fix peer access logic (#3231) 2023-09-17 23:35:20 +02:00
Georgi Gerganov fad56936d4
metal : add rope_f16 kernel + optimize cpy kernels 2023-09-17 23:39:45 +03:00
Georgi Gerganov 1fb033fd85
ggml : ggml_rope now takes a vector with positions instead of n_past 2023-09-17 21:17:10 +03:00
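Passing positions as a tensor rather than a single `n_past` scalar is what allows one batch to mix tokens from different sequences at different offsets. A hedged sketch, modeled on how the `KQ_pos` tensor is built in the graph (variable names illustrative):

```c
// one explicit position per token, instead of a single shared n_past
struct ggml_tensor * KQ_pos = ggml_new_tensor_1d(ctx, GGML_TYPE_I32, n_tokens);
int32_t * pos = (int32_t *) KQ_pos->data;
for (int i = 0; i < n_tokens; ++i) {
    pos[i] = n_past + i;  // contiguous here, but any per-token values are allowed
}
Qcur = ggml_rope(ctx, Qcur, KQ_pos, n_rot, 0, 0);  // mode = 0, n_ctx = 0
```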
Georgi Gerganov 3b4bab6a38
llama : replace ggml_diag_mask_inf with ggml_add (custom -inf mask) 2023-09-17 19:42:39 +03:00
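The diagonal causal mask becomes one special case of an additive F32 mask: 0 where a token may attend, -INFINITY where it may not, added to the scaled KQ scores before the softmax. A hedged sketch; the `is_visible` predicate is hypothetical, standing in for the KV-cell bookkeeping:

```c
#include <math.h>  // INFINITY

// build an [n_kv, n_tokens] additive attention mask
struct ggml_tensor * KQ_mask = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, n_kv, n_tokens);
float * mask = (float *) KQ_mask->data;
for (int i = 0; i < n_tokens; ++i) {
    for (int j = 0; j < n_kv; ++j) {
        // 0.0f keeps the score; -INFINITY becomes probability 0 after softmax
        mask[i*n_kv + j] = is_visible(j, i) ? 0.0f : -INFINITY;
    }
}
struct ggml_tensor * KQ_masked = ggml_add(ctx, KQ_scaled, KQ_mask);
```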
Georgi Gerganov c5df72e848
tests : verify that RoPE is "additive" 2023-09-17 17:55:12 +03:00
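This property is what the shift-based cache relies on: RoPE rotates each dimension pair by an angle proportional to the position, and rotations compose by adding angles, so re-roping a cached K by a delta is equivalent to having roped it at the shifted position in the first place:

$$
\mathrm{RoPE}\big(\mathrm{RoPE}(x,\,p_1),\,p_2\big) = \mathrm{RoPE}(x,\,p_1 + p_2),
\quad\text{since}\quad
R(\theta_i p_1)\,R(\theta_i p_2) = R\big(\theta_i (p_1 + p_2)\big)
$$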
Johannes Gäßler 111163e246
CUDA: enable peer access between devices (#2470) 2023-09-17 16:37:53 +02:00
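Peer access lets one GPU read another's memory directly over PCIe/NVLink instead of staging through host memory. A hedged sketch of the standard CUDA runtime pattern (not the exact llama.cpp code):

```c
// enable bidirectional peer access between devices 0 and 1, if supported
int can_01 = 0, can_10 = 0;
cudaDeviceCanAccessPeer(&can_01, 0, 1);  // can device 0 access device 1?
cudaDeviceCanAccessPeer(&can_10, 1, 0);
if (can_01 && can_10) {
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);    // flags argument must be 0
    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);
}
```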
slaren 8b428c9bc8
llama.cpp : show model size and BPW on load (#3223) 2023-09-17 14:33:28 +02:00
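BPW (bits per weight) here is just total model bits over parameter count, assuming the reported size is the tensor data in the file:

$$
\mathrm{BPW} = \frac{8 \times \text{model size in bytes}}{n_\text{params}}
$$

For example, a 6.7B-parameter model stored in about 3.5 GiB works out to roughly 4.5 bits per weight, exactly what Q4_0 gives (18 bytes per 32-weight block).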
Johannes Gäßler 578d8c8f5c
CUDA: fix scratch malloced on non-main device (#3220) 2023-09-17 14:16:22 +02:00
IsaacDynamo b541b4f0b1
Enable BUILD_SHARED_LIBS=ON on all Windows builds (#3215) 2023-09-16 19:35:25 +02:00
Vlad 5dbc2b3213
Enable build with CUDA 11.0 (make) (#3132)
* CUDA 11.0 fixes

* Cleaner CUDA/host flags separation

Also renamed GGML_ASSUME to GGML_CUDA_ASSUME
2023-09-16 16:55:43 +02:00
goerch b08e75baea
Fixing the last deviations from sentencepiece indicated by test-tokenizer-1 (#3170)
* Fix for #2721

* Reenable tokenizer test for LLaMa

* Add `console.cpp` dependency

* Fix dependency to `common`

* Fixing wrong fix.

* Make console usage platform specific

Work on compiler warnings.

* Adapting makefile

* Remove trailing whitespace

* Adapting the other parts of the makefile

* Fix typo.

* Fixing the last deviations from sentencepiece indicated by test-tokenizer-1

* Simplify logic

* Add missing change...

* Fix ugly compiler warning

* llama_tokenize should accept strings containing NUL now

* Adding huichen's test case
2023-09-16 13:41:33 +02:00