| File | Last commit message | Last commit date |
|------|---------------------|------------------|
| .gitignore | tests : .gitignore obj files | 2024-02-08 09:46:47 +02:00 |
| CMakeLists.txt | llama : add llama_chat_apply_template() (#5538) | 2024-02-19 10:23:37 +02:00 |
| get-model.cpp | … | |
| get-model.h | … | |
| test-autorelease.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| test-backend-ops.cpp | add some new ops, fix some operators and add batch operations to certain operators. (ggml/747) | 2024-03-04 10:39:10 +02:00 |
| test-c.c | Nomic Vulkan backend (#4456) | 2024-01-29 15:50:50 -05:00 |
| test-chat-template.cpp | Add Gemma chat template (#5665) | 2024-02-22 19:10:21 +01:00 |
| test-double-float.cpp | … | |
| test-grad0.cpp | … | |
| test-grammar-parser.cpp | ggml, common, examples, tests : fixed type arguments in printf (#5528) | 2024-02-18 18:20:12 +02:00 |
| test-llama-grammar.cpp | ggml, common, examples, tests : fixed type arguments in printf (#5528) | 2024-02-18 18:20:12 +02:00 |
| test-model-load-cancel.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| test-opt.cpp | code : normalize enum names (#5697) | 2024-02-25 12:09:09 +02:00 |
| test-quantize-fns.cpp | Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range (#5721) | 2024-02-26 18:28:38 +02:00 |
| test-quantize-perf.cpp | ggml : add mmla kernels for quantized GEMM (#4966) | 2024-02-11 15:22:33 +02:00 |
| test-rope.cpp | … | |
| test-sampling.cpp | sampling: fix top_k <= 0 (#5388) | 2024-02-08 09:46:30 +01:00 |
| test-tokenizer-0-falcon.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| test-tokenizer-0-falcon.py | … | |
| test-tokenizer-0-llama.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| test-tokenizer-0-llama.py | … | |
| test-tokenizer-1-bpe.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| test-tokenizer-1-llama.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |