samuel
4bcc9e261e
mtp-batch(fix): Correctly advance cache head and add MTP documentation
2025-10-11 18:51:22 -03:00
samuel
b4cbe030ac
mtp-batch(chore): Fix logit flags for speculative sampling and remove debug logs
2025-10-11 18:37:40 -03:00
samuel
a99709d0c1
mtp-batch(refactor): Extract decode context and MTP input logic into helper methods
2025-10-10 17:24:34 -03:00
samuel
913af8f48d
mtp-batch(refactor): Replace MTP boolean flags with an explicit operation enum
2025-10-10 16:44:28 -03:00
samuel
6f74ba3807
mtp-batch (fix): prevent mtp draft from polluting the cache
2025-10-09 22:27:18 -03:00
samuel
5e1d719bef
mtp-batch (feat): Create and manage sinfo for MTP
2025-10-09 15:21:23 -03:00
samuel
febd8235d2
mtp-batch (wip): fix how to warm up the KV cache for MTP
2025-10-05 14:43:40 -03:00
samuel
67c6c069e0
mtp-batch (wip): Isolate MTP graph to prevent host embedding buffer corruption
2025-09-27 19:42:32 -03:00
samuel
75dc25e6fe
mtp-batch (wip): organize batch for mtp cache
2025-09-27 17:17:00 -03:00
samuel
3da7e7f330
mtp-batch (fix): warm mtp cache for small batch size
2025-09-23 22:45:11 -03:00
samuel
df64508b93
mtp-batch (wip): merge glm graphs
2025-09-21 21:55:41 -03:00
samuel
042eb8a829
mtp-batch (wip): merge mtp and model graph
2025-09-21 21:29:00 -03:00
samuel
1318b2de82
mtp-batch (wip): move mtp execution to batch format
2025-09-14 10:22:59 -03:00
Aaron Lee
c6237c71ff
Merge pull request #1 from SamuelOliveirads/glm4-moe-mtp
...
feat: implemented sampling for MTP
2025-09-13 02:57:01 -04:00
samuel
8742ce0e39
feat: apply logits + greedy sampler
2025-09-06 00:21:18 -03:00
samuel
5a5bce8577
fix: add sample acceptance
2025-09-03 17:56:14 -03:00
samuel
07670a22c6
feat: implemented sampling for MTP
2025-09-03 13:25:21 -03:00
Aaron Lee
9fab53e438
fixed mtp kv cache update step in cases where prompt size > n_batch and n_ubatch
2025-09-02 17:14:09 -04:00
Aaron Lee
98bc0c6bf2
replace standard sampler with greedy sampler for mtp draft
2025-08-26 01:26:51 -04:00
Aaron Lee
471e026327
fixed vram leak
2025-08-19 23:10:56 -04:00
Aaron Lee
d72f9d5691
kludge-y kv cache management of mtp layer
2025-08-19 01:50:34 -04:00
Aaron Lee
382135aa36
fixed mtp kv cache update sequencing after prompt processing
2025-08-17 21:54:45 -04:00
Aaron Lee
6870f9790c
added proper KV cache management for MTP layers and slightly refactored
2025-08-17 04:59:36 -04:00
Aaron Lee
6e9bafc7a7
failed attempt to implement MTP; outputs tokens but KV cache management is unreasonable
2025-08-15 23:13:56 -04:00
Aaron Lee
cf0f7c0448
broad thrust of the mtp implementation
2025-08-13 02:21:17 -04:00
Aaron Lee
03231da69e
add model member function to build mtp graph, to be called from speculative.cpp
2025-08-12 01:03:59 -04:00
Aaron Lee
1f477b3755
make nextn weights loadable without a crash
2025-08-11 20:54:45 -04:00
Aaron Lee
e434f87cc7
some work towards building mtp layer graph
2025-08-11 01:21:47 -04:00
Aaron Lee
db60623e79
added getter for nextn layer count and server slot has_mtp property
2025-08-10 23:52:54 -04:00
David Zhao
79c1160b07
cuda: refactored ssm_scan and use CUB ( #13291 )
...
* cuda: refactored ssm_scan to use CUB
* fixed compilation error when not using CUB
* assign L to constant and use size_t instead of int
* deduplicated functions
* change min blocks per mp to 1
* Use cub load and store warp transpose
* suppress clang warning
2025-08-09 20:29:43 +02:00
Aman Gupta
34c9d765bf
CUDA: add attention sinks for tile and wmma ( #15178 )
...
* CUDA: add attention sinks for tile and wmma
* Review: formatting changes + remove syncthreads from tile + remove warp_reduce_max from wmma
2025-08-09 20:00:24 +08:00
compilade
e54d41befc
gguf-py : add Numpy MXFP4 de/quantization support ( #15111 )
...
* gguf-py : add MXFP4 de/quantization support
* ggml-quants : handle zero amax for MXFP4
2025-08-08 17:48:26 -04:00
Johannes Gäßler
4850b52aed
server-bench: external OAI servers, sqlite ( #15179 )
...
* server-bench: external OAI servers, sqlite
* Update scripts/server-bench.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Update scripts/server-bench.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Update scripts/server-bench.py
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* raise_for_status
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-08-08 23:04:36 +02:00
AN Long
cd6983d56d
ggml : fix field name when new ggml_backend ( #14944 )
2025-08-08 14:37:22 +02:00
Olivier Chafik
6c7e9a5440
vendor: sync minja ( #15161 )
...
* vendor: sync minja
* Update minja.hpp
* Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2025-08-08 10:45:18 +01:00
Johannes Gäßler
1425f587a8
CUDA: attention sinks for mma FlashAttention ( #15157 )
2025-08-08 08:19:58 +02:00
lhez
aaa3d07ae7
opencl: support sink in `soft_max` (attn sinks) ( #15152 )
2025-08-07 21:47:03 -07:00
Xuan-Son Nguyen
50aa938901
convert : support non-mxfp4 HF model ( #15153 )
...
* convert : support non-mxfp4 HF model
* rm redundant check
* disable debug check
2025-08-07 23:26:03 +02:00
Jeff Bolz
c4f53563df
vulkan: support fattn sinks ( #15126 )
2025-08-07 22:44:20 +02:00
Jeff Bolz
a0552c8bee
vulkan: Add env var to disable host visible vidmem ( #15109 )
2025-08-07 22:07:11 +02:00
RunningLeon
99acbc9921
llama : Support intern-s1 ( #14875 )
...
* support internvl
* support interns1
* resolve comments
* put interns1 in tensor mapping
* resolve comment
* move tokenizer changes to sub class
2025-08-07 18:20:40 +02:00
uvos
7ad67ba9fe
HIP: add cmake option to enable compiler output of kernel resource usage metrics ( #15103 )
2025-08-07 16:44:14 +02:00
Christian Kastner
9a96389544
ggml: Skip backend library linking code when GGML_BACKEND_DL=ON ( #15094 )
...
Any available libraries are found and loaded dynamically at runtime.
2025-08-07 13:45:41 +02:00
Johannes Gäßler
1d72c84188
CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 ( #15131 )
...
* CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16
2025-08-07 10:53:21 +02:00
Johannes Gäßler
20638e4f16
scripts: fix crash when --tool is not set ( #15133 )
2025-08-07 08:50:30 +02:00
Daniel Bevenius
36d3f00e14
requirements : fix PyTorch uint64 compatibility ( #15134 )
...
This commit addresses an issue with the convert_hf_to_gguf script
which is currently failing with:
```console
AttributeError: module 'torch' has no attribute 'uint64'
```
This occurred because safetensors expects torch.uint64 to be available
in the public API, but PyTorch 2.2.x provides only limited support for
unsigned types beyond uint8. The torch.uint64 dtype exists but is not
exposed in the standard torch namespace
(see pytorch/pytorch#58734).
PyTorch 2.4.0 properly exposes torch.uint64 in the public API, resolving
the compatibility issue with safetensors. This also required torchvision
to be updated to 0.19.0 for compatibility.
Refs: https://huggingface.co/spaces/ggml-org/gguf-my-repo/discussions/186#68938de803e47d990aa087fb
Refs: https://github.com/pytorch/pytorch/issues/58734
2025-08-07 05:31:48 +02:00
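The commit body above amounts to raising version floors in the conversion script's Python requirements. A minimal sketch of what such pins could look like (the file name and version operators are assumptions, not taken from the commit):

```
# requirements for convert_hf_to_gguf (hypothetical pin file)
torch>=2.4.0        # exposes torch.uint64 in the public API
torchvision>=0.19.0 # matching release for torch 2.4.x
```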
Reese Levine
5fd160bbd9
ggml: Add basic SET_ROWS support in WebGPU ( #15137 )
...
* Begin work on set_rows
* Work on set rows
* Add error buffers for reporting unsupported SET_ROWS indices
* Remove extra comments
2025-08-06 15:14:40 -07:00
rmatif
756cfea826
fix profiling crash ( #15072 )
2025-08-06 14:17:51 -07:00
lhez
e725a1a982
opencl: add `swiglu_oai` and `add_id` ( #15121 )
...
* opencl: add `swiglu-oai`
* opencl: add `add_id`
* opencl: add missing `add_id.cl`
2025-08-06 12:12:17 -07:00
Sachin Desai
3db4da56a5
chat : support Granite model reasoning and tool call ( #14864 )
2025-08-06 20:27:30 +02:00