llama.cpp/tests

Latest commit: fb76ec31a9 by Georgi Gerganov, 2024-05-29 20:17:31 +03:00
ggml : fix YARN + add tests + add asserts (#7617)

Commit message body:
* tests : add rope tests
* ggml : fixes (hopefully)
* tests : add non-cont tests
* cuda : add asserts for rope/norm + fix DS2
* ggml : assert contiguousness
* tests : reduce RoPE tests

ggml-ci
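The latest commit above fixes YaRN-scaled RoPE and adds rope tests (exercised by test-rope.cpp below). At its core, RoPE rotates consecutive pairs of vector components by a position-dependent angle, so relative positions become phase differences between query and key vectors. A minimal sketch of the unscaled case follows; the function name is hypothetical, and this is not the ggml implementation, which additionally applies YaRN frequency scaling for extended context:

```python
import math

def rope_rotate(x, pos, base=10000.0):
    """Apply plain rotary position embedding (RoPE) to a flat vector x.

    Each consecutive pair (x[2i], x[2i+1]) is rotated by the angle
    theta_i = pos * base^(-2i/d). This is a simplified sketch of the
    idea the rope tests exercise, not ggml's YaRN-scaled version.
    """
    d = len(x)
    out = []
    for i in range(0, d, 2):
        theta = pos * base ** (-i / d)
        c, s = math.cos(theta), math.sin(theta)
        x0, x1 = x[i], x[i + 1]
        out.extend([x0 * c - x1 * s, x0 * s + x1 * c])
    return out
```

Because each pair is rotated, position 0 is the identity and the vector norm is preserved at any position, two properties a rope test can check cheaply.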
.gitignore
CMakeLists.txt -- llama : lookup word in vocab before doing BPE merges (#7193) -- 2024-05-11 11:12:06 +03:00
get-model.cpp
get-model.h
run-json-schema-to-grammar.mjs
test-autorelease.cpp
test-backend-ops.cpp -- ggml : fix YARN + add tests + add asserts (#7617) -- 2024-05-29 20:17:31 +03:00
test-c.c
test-chat-template.cpp -- Fix phi3 chat template confusion with zephyr (#7449) -- 2024-05-23 16:15:15 +02:00
test-double-float.cpp
test-grad0.cpp -- ggml : remove ggml_flash_attn and ggml_flash_ff (#7463) -- 2024-05-23 10:00:44 +03:00
test-grammar-integration.cpp -- Add left recursion check: quit early instead of going into an infinite loop (#7083) -- 2024-05-14 15:25:56 +10:00
test-grammar-parser.cpp
test-json-schema-to-grammar.cpp -- JSON: [key] -> .at(key), assert() -> GGML_ASSERT (#7143) -- 2024-05-08 21:53:08 +02:00
test-llama-grammar.cpp
test-model-load-cancel.cpp
test-opt.cpp
test-quantize-fns.cpp -- tests : include IQ2_XXS and IQ2_XS in test-quantize-fns (#6303) -- 2024-03-25 19:33:15 +02:00
test-quantize-perf.cpp
test-rope.cpp
test-sampling.cpp
test-tokenizer-0.cpp -- tests : add test-tokenizer-0.sh + fix some tokenizers (#7036) -- 2024-05-04 08:32:32 +03:00
test-tokenizer-0.py -- py : logging and flake8 suppression refactoring (#7081) -- 2024-05-05 08:07:48 +03:00
test-tokenizer-0.sh -- tests : fix test-tokenizer-0.sh -- 2024-05-28 15:04:09 +03:00
test-tokenizer-1-bpe.cpp -- llama : lookup word in vocab before doing BPE merges (#7193) -- 2024-05-11 11:12:06 +03:00
test-tokenizer-1-spm.cpp -- llama : fix BPE pre-tokenization (#6920) -- 2024-04-29 16:58:41 +03:00
test-tokenizer-random.py -- Tokenizer WPM fixes (#7500) -- 2024-05-28 21:46:34 +02:00
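The most recent change to test-tokenizer-random.py fixes WordPiece (WPM) tokenization. The core of WPM is greedy longest-match-first segmentation of a word into vocabulary pieces, with non-initial pieces carrying a continuation prefix. A minimal sketch under those assumptions; the function and parameter names are hypothetical, and llama.cpp's real tokenizer additionally handles normalization, byte fallback, and token IDs:

```python
def wordpiece_tokenize(word, vocab, unk="[UNK]", prefix="##"):
    """Greedy longest-match-first WordPiece segmentation of one word.

    At each position, try the longest remaining substring first
    (prefixed with `prefix` when not word-initial) and shrink until
    a vocabulary piece matches. If no piece matches at some position,
    the whole word maps to the unknown token.
    """
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        cur = None
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = prefix + piece
            if piece in vocab:
                cur = piece
                break
            end -= 1
        if cur is None:
            return [unk]  # no vocabulary piece matched at this position
        tokens.append(cur)
        start = end
    return tokens
```

A randomized test like test-tokenizer-random.py can compare such a reference against the C++ tokenizer on generated strings and flag any mismatch.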