Commit Graph

6148 Commits

Author SHA1 Message Date
ryan-mangeno db4f5656e4 added mask check in vocab 2025-09-12 11:45:02 -04:00
ryan-mangeno 20d448a8d7 cleanup 2025-09-11 16:42:41 -04:00
ryan-mangeno 4e7c8793ae fixed assert for equal ubatch seqs 2025-09-11 16:41:04 -04:00
ryan-mangeno 2bacfb0bc2 alternating rope implemented and modern bert graph build succeeds 2025-09-11 16:37:18 -04:00
ryan-mangeno e296a0b6e6 starting to work, and some cleanup, currently failing on last layer construction in graph build 2025-09-08 15:38:13 -04:00
ryan-mangeno 044bc7d5cd some cleanup and now fails on build attn 2025-09-08 12:21:18 -04:00
ryan-mangeno e101005d1a working on swa with local and global alternating attention 2025-09-07 21:00:38 -04:00
ryan-mangeno 39c029144b fixed pre tokenizer 2025-09-03 14:34:51 -04:00
ryan-mangeno 6d86944cb4 working through previous attempt, implemented more accurate conversion per previous attempt, added local sliding window attention that alternates every third layer 2025-09-03 14:32:39 -04:00
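The alternating pattern described in the commit above can be sketched as follows. This is a hypothetical helper, not code from the branch; ModernBERT-style configurations use one global-attention layer every third layer with sliding-window (local) attention in between, and the exact layer indexing is an assumption:

```python
def layer_uses_global_attention(il: int, global_every: int = 3) -> bool:
    # One full (global) attention layer every `global_every` layers,
    # sliding-window (local) attention on the layers in between.
    return il % global_every == 0

# Resulting per-layer schedule for the first six layers
pattern = ["global" if layer_uses_global_attention(i) else "local" for i in range(6)]
```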
ryan-mangeno ca353d37b4 fixed pre tokenizer and still working through previous pr 2025-09-02 12:26:20 -04:00
ryan-mangeno c73eb685fd added cls token per previous modern bert attempt, still working on checking out the rest 2025-08-29 12:15:31 -04:00
ryan-mangeno 2a1c75047c ubatch issues: the assert checking equal seqs in llama-graph.cpp keeps failing when building attention; running llama-embedding with --ubatch-size 1 makes it work, but this needs more investigation 2025-08-28 12:59:42 -04:00
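The workaround mentioned in the commit above amounts to the following invocation sketch. The model path is a placeholder; `--ubatch-size` is the standard llama.cpp batching flag:

```shell
# Force a physical micro-batch of 1 so every ubatch trivially contains
# equal sequence lengths, sidestepping the failing assert in
# llama-graph.cpp. models/modern-bert.gguf is a hypothetical path to
# the converted model.
./llama-embedding -m models/modern-bert.gguf -p "hello world" --ubatch-size 1
```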
ryan-mangeno 853f344cfe more cleanup 2025-08-28 12:47:10 -04:00
ryan-mangeno 40249dd5ec cleanup 2025-08-28 12:37:02 -04:00
ryan-mangeno 9805635c12 cleanup 2025-08-28 12:36:26 -04:00
ryan-mangeno 8f328431a1 cleanup 2025-08-28 12:33:52 -04:00
ryan-mangeno bffe3c9092 tensor debugging now works (llama-eval-callback); instead of simulating the gate split with views, GEGLU is now used, which does exactly this 2025-08-28 11:15:10 -04:00
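The GEGLU change referenced above can be sketched in plain Python. This is a minimal illustration, not the ggml implementation; the tanh GELU approximation and the gate/up split order are assumptions:

```python
import math

def gelu(x: float) -> float:
    # tanh approximation of GELU, as commonly used in transformer FFNs
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

def geglu(ff_out: list[float]) -> list[float]:
    # The fused FFN projection yields 2*d values per token; GEGLU splits
    # them into a gate half and a linear half, applies GELU to the gate,
    # and multiplies elementwise -- the same result previously obtained
    # by simulating the split with tensor views.
    d = len(ff_out) // 2
    gate, up = ff_out[:d], ff_out[d:]
    return [gelu(g) * u for g, u in zip(gate, up)]
```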
ryan-mangeno 18c0c23ed8 fixed tensor mappings and working on building graph 2025-08-27 15:32:20 -04:00
ryan-mangeno 4ceb828112 correct tensor shape for qkv 2025-08-26 13:03:14 -04:00
ryan-mangeno cc3d7abab4 continuing 2025-08-26 12:38:38 -04:00
ryan-mangeno 41b6864333 cleanup 2025-08-26 12:33:11 -04:00
ryan-mangeno cc40378d27 some cleanup 2025-08-25 16:31:08 -04:00
ryan-mangeno ac67fc6887 working on support, now working on building graph 2025-08-25 16:15:40 -04:00
ryan-mangeno 6643c5a852 conversion now working, hf -> gguf 2025-08-21 12:42:32 -04:00
ryan-mangeno 6151592ea7 constants and tensor mappings for modern bert support, model not supported yet but working on getting conversion to work for encoder only 2025-08-21 12:38:04 -04:00
David Zhao 79c1160b07
cuda: refactored ssm_scan and use CUB (#13291)
* cuda: refactored ssm_scan to use CUB

* fixed compilation error when not using CUB

* assign L to constant and use size_t instead of int

* deduplicated functions

* change min blocks per mp to 1

* Use cub load and store warp transpose

* suppress clang warning
2025-08-09 20:29:43 +02:00
Aman Gupta 34c9d765bf
CUDA: add attention sinks for tile and wmma (#15178)
* CUDA: add attention sinks for tile and wmma

* Review: formatting changes + remove syncthreads from tile + remove warp_reduce_max from wmma
2025-08-09 20:00:24 +08:00
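An attention sink, as used in the commits above and below, is a per-head logit that joins the softmax denominator without contributing a value vector, absorbing probability mass. A minimal numerically-stable sketch (illustrative only, not the CUDA/OpenCL kernels):

```python
import math

def softmax_with_sink(scores: list[float], sink: float) -> list[float]:
    # Standard max-subtracted softmax, except the sink logit is added to
    # the denominator only -- so the returned probabilities over the
    # actual scores sum to less than 1 when the sink is active.
    m = max(max(scores), sink)
    exps = [math.exp(s - m) for s in scores]
    denom = sum(exps) + math.exp(sink - m)
    return [e / denom for e in exps]
```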
compilade e54d41befc
gguf-py : add Numpy MXFP4 de/quantization support (#15111)
* gguf-py : add MXFP4 de/quantization support

* ggml-quants : handle zero amax for MXFP4
2025-08-08 17:48:26 -04:00
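MXFP4, referenced in the commit above, packs blocks of FP4 (E2M1) elements that share one E8M0 power-of-two scale. A minimal per-element dequantization sketch following the OCP Microscaling spec; the actual gguf-py block layout and function names may differ:

```python
# FP4 E2M1 magnitudes indexed by the low 3 bits of each nibble
E2M1_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def dequant_mxfp4(nibble: int, scale_exp: int) -> float:
    # nibble: 4-bit code, top bit = sign; scale_exp: the block's shared
    # E8M0 scale byte, a pure power of two with exponent bias 127.
    sign = -1.0 if nibble & 0x8 else 1.0
    return sign * E2M1_VALUES[nibble & 0x7] * 2.0 ** (scale_exp - 127)
```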
Johannes Gäßler 4850b52aed
server-bench: external OAI servers, sqlite (#15179)
* server-bench: external OAI servers, sqlite

* Update scripts/server-bench.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update scripts/server-bench.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update scripts/server-bench.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* raise_for_status

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-08-08 23:04:36 +02:00
AN Long cd6983d56d
ggml : fix field name when new ggml_backend (#14944) 2025-08-08 14:37:22 +02:00
Olivier Chafik 6c7e9a5440
vendor: sync minja (#15161)
* vendor: sync minja

* Update minja.hpp

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-08-08 10:45:18 +01:00
Johannes Gäßler 1425f587a8
CUDA: attention sinks for mma FlashAttention (#15157) 2025-08-08 08:19:58 +02:00
lhez aaa3d07ae7
opencl: support sink in `soft_max` (attn sinks) (#15152) 2025-08-07 21:47:03 -07:00
Xuan-Son Nguyen 50aa938901
convert : support non-mxfp4 HF model (#15153)
* convert : support non-mxfp4 HF model

* rm redundant check

* disable debug check
2025-08-07 23:26:03 +02:00
Jeff Bolz c4f53563df
vulkan: support fattn sinks (#15126) 2025-08-07 22:44:20 +02:00
Jeff Bolz a0552c8bee
vulkan: Add env var to disable host visible vidmem (#15109) 2025-08-07 22:07:11 +02:00
RunningLeon 99acbc9921
llama : Support intern-s1 (#14875)
* support internvl

* support interns1

* resolve comments

* put interns1 in tensor mapping

* resolve comment

* move tokenizer changes to sub class
2025-08-07 18:20:40 +02:00
uvos 7ad67ba9fe
HIP: add cmake option to enable compiler output of kernel resource usage metrics (#15103) 2025-08-07 16:44:14 +02:00
Christian Kastner 9a96389544
ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)
Any available libraries are found and loaded dynamically at runtime.
2025-08-07 13:45:41 +02:00
Johannes Gäßler 1d72c84188
CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (#15131)
* CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16
2025-08-07 10:53:21 +02:00
Johannes Gäßler 20638e4f16
scripts: fix crash when --tool is not set (#15133) 2025-08-07 08:50:30 +02:00
Daniel Bevenius 36d3f00e14
requirements : fix PyTorch uint64 compatibility (#15134)
This commit addresses an issue with the convert_hf_to_gguf script
which is currently failing with:
```console
AttributeError: module 'torch' has no attribute 'uint64'
```

This occurred because safetensors expects torch.uint64 to be available
in the public API, but PyTorch 2.2.x provides only limited support for
unsigned types beyond uint8. The torch.uint64 dtype exists but
is not exposed in the standard torch namespace
(see pytorch/pytorch#58734).

PyTorch 2.4.0 properly exposes torch.uint64 in the public API, resolving
the compatibility issue with safetensors. This also required torchvision
to be updated to >=0.19.0 for compatibility.

Refs: https://huggingface.co/spaces/ggml-org/gguf-my-repo/discussions/186#68938de803e47d990aa087fb
Refs: https://github.com/pytorch/pytorch/issues/58734
2025-08-07 05:31:48 +02:00
Reese Levine 5fd160bbd9
ggml: Add basic SET_ROWS support in WebGPU (#15137)
* Begin work on set_rows

* Work on set rows

* Add error buffers for reporting unsupported SET_ROWS indices

* Remove extra comments
2025-08-06 15:14:40 -07:00
rmatif 756cfea826
fix profiling crash (#15072) 2025-08-06 14:17:51 -07:00
lhez e725a1a982
opencl: add `swiglu_oai` and `add_id` (#15121)
* opencl: add `swiglu-oai`

* opencl: add `add_id`

* opencl: add missing `add_id.cl`
2025-08-06 12:12:17 -07:00
Sachin Desai 3db4da56a5
chat : support Granite model reasoning and tool call (#14864) 2025-08-06 20:27:30 +02:00
Juk Armstrong 476aa3fd57
Fixed name `-override-tensors` to `-override-tensor` (#15129) 2025-08-06 17:28:48 +01:00
Diego Devesa 0d8831543c
ggml : fix fallback to CPU for unsupported ops (#15118) 2025-08-06 14:37:35 +02:00
Sigbjørn Skjæret 65c797c4fa
chat : fix yandex chat template (#15116) 2025-08-06 13:26:49 +02:00
stevenkuang 25726898e8
chat : fix hunyuan auto-detection (#15114)
Signed-off-by: stevenkuang <stevenkuang@tencent.com>
2025-08-06 11:48:30 +02:00