llama.cpp/tests
compilade 5d46babdc2
llama : initial Mamba-2 support (#9126)
* llama : initial Mamba-2 support

* ggml : SIMD ggml_ssm_scan for Mamba-2

* ggml : improve ggml_mul speed when masking recurrent states
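
A hedged illustration of the masking this bullet refers to, assuming one recurrent state per kv cell and a 0/1 mask value per cell (shapes and names hypothetical); in ggml this is a single broadcast ggml_mul, sketched here as a plain loop:

    #include <vector>

    // Multiply each cell's whole state by its mask value: 0 clears the
    // recurrent state of that cell, 1 keeps it (hypothetical layout).
    void mask_states(std::vector<float> & states,      // n_kv cells, state_size floats each
                     const std::vector<float> & mask,  // one 0/1 entry per cell
                     size_t state_size) {
        for (size_t cell = 0; cell < mask.size(); ++cell) {
            for (size_t j = 0; j < state_size; ++j) {
                states[cell * state_size + j] *= mask[cell];
            }
        }
    }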

* llama : support running Mamba-Codestral-7B-v0.1

* llama : fix Mamba-2 conv state saving

* ggml : make the ggml_mul fast broadcast path more consistently formatted

* llama : remove unused variable

* llama : add missing break

* convert_hf : prefer SentencePiece tokenizer for Mamba-2 when present

The tokenizer.json of Mamba-Codestral-7B-v0.1 otherwise requires
workarounds to work correctly.

* llama : avoid redundant state copy for Mamba 1 and 2

* metal : attempt to adapt SSM_SCAN for Mamba-2

* metal : fix SSM_SCAN pipeline scope

* metal : use log and exp instead of log1pf and expf in SSM_SCAN

* metal : remove unused arguments for SSM_SCAN

The maximum argument index is 31, so trimming unused arguments is necessary.

* metal : add back n_seqs to SSM_SCAN args

Whoops, this is needed for the offset in the concatenated output.
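
A hedged sketch of why, assuming the output buffer concatenates the per-token outputs with the per-sequence final states and groups tokens by sequence (layout and names hypothetical):

    #include <cstdint>

    // [ y for all tokens | final state of each sequence ]
    // n_seqs is needed both to group the tokens and to place the states.
    int64_t y_offset(int64_t i_seq, int64_t i_tok,
                     int64_t n_tokens, int64_t n_seqs, int64_t d_inner) {
        const int64_t n_tok_per_seq = n_tokens / n_seqs;
        return (i_seq * n_tok_per_seq + i_tok) * d_inner;
    }

    int64_t state_offset(int64_t i_seq, int64_t n_tokens,
                         int64_t d_inner, int64_t d_state) {
        return n_tokens * d_inner            // states start after all the y outputs
             + i_seq * d_state * d_inner;    // one state slab per sequence
    }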

* metal : fix SSM_SCAN state head offset

* metal : fix wrong number of tokens per sequence in SSM_SCAN

* ggml : remove unused fast broadcast path in GGML_MUL

This was initially added because states were masked with ggml_mul,
but that is no longer done, so this "optimisation" is no longer
necessary, or at least not worth the additional code complexity.

* ggml : avoid multiply by D in GGML_OP_SSM_SCAN

This makes the weight buft detection in src/llama.cpp simpler.
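
For reference, a scalar sketch of the selective-scan recurrence with D factored out: the op returns only y = C·h, and the graph adds D*x afterwards. Real kernels vectorize this over channels, states and tokens; shapes and names here are illustrative only.

    #include <cmath>
    #include <vector>

    // One token, one channel: update the hidden state and emit the output,
    // leaving the D * x skip connection to the caller (i.e. to the graph).
    float ssm_scan_step(std::vector<float> & h,        // hidden state, d_state entries
                        const std::vector<float> & A,  // state decay parameters
                        const std::vector<float> & B,  // input projection
                        const std::vector<float> & C,  // output projection
                        float x, float dt) {
        float y = 0.0f;
        for (size_t i = 0; i < h.size(); ++i) {
            h[i] = h[i] * std::exp(dt * A[i]) + dt * B[i] * x; // discretized update
            y += C[i] * h[i];                                  // output projection
        }
        return y; // the graph computes y + D * x outside the op
    }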

* convert : transpose Mamba-2 A, D and reshape SSM_NORM

This breaks existing conversions of Mamba-2 models,
in exchange for avoiding some reshapes.

Not sure if it's a good idea,
but it makes the graph slightly cleaner.

* llama : more appropriate SSM_SCAN and SSM_CONV buft support checks

* convert : fix flake8 lint

* metal : fix confusion between ; and ,

* metal : add missing args for nb references in ssm_scan_f32_group

* metal : single-user mamba2 inference works

* kv-cache : remove const_cast when setting inputs for s_copy

And also fix multi-user inference for recurrent models
by using cell_id instead of i as the kv cell index
when populating s_copy.
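
A hedged sketch of the indexing fix, with hypothetical names for the cache internals:

    #include <cstdint>
    #include <vector>

    struct kv_cell { int32_t src; };  // hypothetical: where this cell copies its state from

    // Populate the s_copy input: the fix is indexing cells by their actual
    // cell id (head + i) instead of the bare loop counter i.
    void set_s_copy(std::vector<int32_t> & data,
                    const std::vector<kv_cell> & cells,
                    uint32_t head, uint32_t n_kv) {
        for (uint32_t i = 0; i < n_kv; ++i) {
            const uint32_t cell_id = head + i;
            // data[i] = cells[i].src;       // old: wrong cell when head != 0
            data[i] = cells[cell_id].src;    // new: correct source per user
        }
    }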

* convert : avoid AutoConfig for Mamba and Mamba2 hparams

* kv-cache : allow context shift for recurrent models

* graph : fix recurrent state copies when avoiding copies

Works, but using lambda functions might not be that clean.

* ggml : fix mamba2 ssm scan when compiled with SVE

* ggml-cpu : reorder SVE FMA for consistency with other SIMD arches

* cuda : implement ssm scan for Mamba2

There is still room for improvement, but it works!

* cuda : adapt Mamba1 ssm scan to shape changes from Mamba2

* mamba : fix mismatched new and delete size for llm_build_mamba

Subclasses of llm_graph_context cannot have extra fields,
because the called destructor is not the one from the subclass.
This otherwise would cause problems when running Mamba-(1|2) inference
when compiled with -DGGML_SANITIZE_ADDRESS=ON.
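
A minimal repro of the general C++ pitfall described here, assuming a non-virtual base destructor: deleting a subclass with extra fields through a base pointer frees with the wrong size, which AddressSanitizer reports as a new-delete type mismatch.

    struct base {
        ~base() {}              // non-virtual destructor
    };

    struct derived : base {
        int extra_field;        // makes sizeof(derived) > sizeof(base)
    };

    int main() {
        base * p = new derived; // allocates sizeof(derived) bytes
        delete p;               // runs ~base() and frees as if sizeof(base);
                                // ASan flags this as new-delete-type-mismatch
    }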

* cuda : graceful fallback for Mamba-1 models with weird embd size
2025-07-02 13:10:24 -04:00
.gitignore tests : gitignore ggml-common.h 2024-03-09 14:17:11 +02:00
CMakeLists.txt llama : add thread safety test (#14035) 2025-06-16 08:11:43 -07:00
get-model.cpp ci : add model tests + script wrapper (#4586) 2024-01-26 14:18:00 +02:00
get-model.h ci : add model tests + script wrapper (#4586) 2024-01-26 14:18:00 +02:00
run-json-schema-to-grammar.mjs llama : move end-user examples to tools directory (#13249) 2025-05-02 20:27:13 +02:00
test-arg-parser.cpp tests : avoid github urls due to throttling (#13654) 2025-05-20 12:03:17 +02:00
test-autorelease.cpp llama : add `llama_vocab`, functions -> methods, naming (#11110) 2025-01-12 11:32:42 +02:00
test-backend-ops.cpp llama : initial Mamba-2 support (#9126) 2025-07-02 13:10:24 -04:00
test-barrier.cpp ggml : move CPU backend to a separate file (#10144) 2024-11-03 19:34:08 +01:00
test-c.c Nomic Vulkan backend (#4456) 2024-01-29 15:50:50 -05:00
test-chat-parser.cpp tests : remove json.hpp from a test (#13880) 2025-05-29 12:17:16 +03:00
test-chat-template.cpp llama-chat : reset glmedge chat template (#13253) 2025-05-02 11:06:09 +02:00
test-chat.cpp llama : allow building all tests on windows when not using shared libs (#13980) 2025-06-09 20:03:09 +02:00
test-double-float.cpp ggml : minor naming changes (#8433) 2024-07-12 10:46:02 +03:00
test-gbnf-validator.cpp cmake : do not include ./src as public for libllama (#13062) 2025-04-24 16:00:10 +03:00
test-gguf.cpp gguf: fix failure on version == 0 (#13956) 2025-06-01 18:08:05 +02:00
test-grammar-integration.cpp sync : vendor (#13901) 2025-05-30 16:25:45 +03:00
test-grammar-llguidance.cpp cmake : do not include ./src as public for libllama (#13062) 2025-04-24 16:00:10 +03:00
test-grammar-parser.cpp cmake : do not include ./src as public for libllama (#13062) 2025-04-24 16:00:10 +03:00
test-json-partial.cpp `server`: streaming of tool calls and thoughts when `--jinja` is on (#12379) 2025-05-25 01:48:08 +01:00
test-json-schema-to-grammar.cpp sync : vendor (#13901) 2025-05-30 16:25:45 +03:00
test-llama-grammar.cpp cmake : do not include ./src as public for libllama (#13062) 2025-04-24 16:00:10 +03:00
test-log.cpp common : use common_ prefix for common library functions (#9805) 2024-10-10 22:57:42 +02:00
test-lora-conversion-inference.sh scripts : make the shell scripts cross-platform (#14341) 2025-06-30 10:17:18 +02:00
test-model-load-cancel.cpp llama : update llama_model API names (#11063) 2025-01-06 10:55:18 +02:00
test-mtmd-c-api.c mtmd : add C public API (#13184) 2025-05-04 23:43:42 +02:00
test-opt.cpp llama/ggml: add LLM training support (#10544) 2025-05-12 14:44:49 +02:00
test-quantize-fns.cpp tests : fix test-quantize-fns to init the CPU backend (#12306) 2025-03-10 14:07:15 +02:00
test-quantize-perf.cpp ggml : inttypes.h -> cinttypes (#0) 2024-11-17 08:30:29 +02:00
test-quantize-stats.cpp docker : do not build tests (#13204) 2025-04-30 10:44:07 +02:00
test-regex-partial.cpp `common`: add partial regex support (#12808) 2025-05-14 19:50:57 +01:00
test-rope.cpp llama : add Qwen2VL support + multimodal RoPE (#10361) 2024-12-14 14:43:46 +02:00
test-sampling.cpp sampling : make sure samplers return at least 1 token (#13822) 2025-05-27 12:07:52 +03:00
test-thread-safety.cpp llama : add thread safety test (#14035) 2025-06-16 08:11:43 -07:00
test-tokenizer-0.cpp llama : add `llama_vocab`, functions -> methods, naming (#11110) 2025-01-12 11:32:42 +02:00
test-tokenizer-0.py py : logging and flake8 suppression refactoring (#7081) 2024-05-05 08:07:48 +03:00
test-tokenizer-0.sh scripts : make the shell scripts cross-platform (#14341) 2025-06-30 10:17:18 +02:00
test-tokenizer-1-bpe.cpp cmake : do not include ./src as public for libllama (#13062) 2025-04-24 16:00:10 +03:00
test-tokenizer-1-spm.cpp cmake : do not include ./src as public for libllama (#13062) 2025-04-24 16:00:10 +03:00
test-tokenizer-random.py llama : add `llama_vocab`, functions -> methods, naming (#11110) 2025-01-12 11:32:42 +02:00
test-tokenizers-repo.sh scripts : make the shell scripts cross-platform (#14341) 2025-06-30 10:17:18 +02:00