On macOS, AVX-512 context save is lazy: XCR0 bits 5-7 are not set until
the process first executes an AVX-512 instruction, even on fully capable
hardware. os_saves_zmm() was reading XCR0 directly and returning false on
Darwin, causing ggml to fall back to AVX2 on machines that fully support
AVX-512.
Fix: on __APPLE__ builds, os_saves_zmm() queries sysctlbyname with
hw.optional.avx512f instead of reading XCR0 bits 5-7. This reflects true
hardware capability regardless of the lazy-enable state. All other platforms
continue to use the XCR0 path unchanged.
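A minimal sketch of the Darwin branch (the function name here is illustrative; sysctlbyname() and the hw.optional.avx512f key are the interfaces named above):

```c
#if defined(__APPLE__)
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/sysctl.h>

// Darwin branch: ask the kernel for the hardware capability instead of
// reading XCR0, whose AVX-512 bits are lazily enabled on macOS.
static bool darwin_has_avx512f(void) {
    int32_t val = 0;
    size_t  len = sizeof(val);
    // sysctlbyname() fails when the key is absent, i.e. on non-AVX-512 CPUs.
    if (sysctlbyname("hw.optional.avx512f", &val, &len, NULL, 0) != 0) {
        return false;
    }
    return val != 0;
}
#endif
```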
os_saves_ymm() is unchanged on Darwin because macOS always eagerly enables
YMM state save when OSXSAVE is set. os_saves_amx() is unchanged because
Intel AMX hardware does not exist on macOS x86 machines and os_saves_zmm()
already returns false on non-AVX512 Darwin systems.
Approach confirmed by google/cpu_features impl_x86_macos.c which uses the
same hw.optional.avx512f sysctl for Darwin AVX-512 detection.
The ggml_backend_cpu_x86_score() function contained a FIXME comment
admitting it did not check for OS support. All AVX/AVX2/AVX-512/AMX
feature checks were raw CPUID bit reads with no OSXSAVE or XGETBV
validation.
On CPUs where CPUID reports AVX2 support but the OS has not enabled
YMM register save/restore (e.g. certain hypervisors, containers with
restricted XSAVE, or Windows builds with disabled AVX context), ggml
would select an AVX2/AVX-512 backend that immediately faults with
SIGILL on the first vector instruction.
Fix: add xgetbv() (MSVC _xgetbv / GCC-Clang inline asm), then three
predicates that are now called by the affected feature methods (a sketch
follows this list):
  os_saves_ymm() — CPUID.1:ECX[27] (OSXSAVE) + XCR0[2:1] == 0b11
    gates: AVX, AVX2, FMA, F16C, AVX_VNNI
  os_saves_zmm() — os_saves_ymm() + XCR0[7:5] == 0b111
    gates: AVX512F/DQ/PF/ER/CD/BW/VL, AVX512_VBMI/VNNI/FP16/BF16
  os_saves_amx() — os_saves_zmm() + XCR0[18:17] == 0b11
    gates: AMX_TILE, AMX_INT8, AMX_FP16, AMX_BF16
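As a self-contained sketch of those predicates and masks (the helper layout is illustrative, not the exact cpuid_x86 internals):

```c
#include <stdbool.h>
#include <stdint.h>
#if defined(_MSC_VER)
#include <intrin.h>   // __cpuid, _xgetbv
#else
#include <cpuid.h>    // __get_cpuid
#endif

// Read XCR0 (extended control register 0).
static inline uint64_t xgetbv0(void) {
#if defined(_MSC_VER)
    return _xgetbv(0);
#else
    uint32_t eax, edx;
    __asm__ volatile ("xgetbv" : "=a"(eax), "=d"(edx) : "c"(0));
    return ((uint64_t) edx << 32) | eax;
#endif
}

// XGETBV faults unless CPUID.1:ECX[27] (OSXSAVE) is set.
static inline bool has_osxsave(void) {
#if defined(_MSC_VER)
    int regs[4];
    __cpuid(regs, 1);
    return (regs[2] >> 27) & 1;
#else
    unsigned int eax, ebx, ecx, edx;
    return __get_cpuid(1, &eax, &ebx, &ecx, &edx) && ((ecx >> 27) & 1);
#endif
}

static bool os_saves_ymm(void) {  // XCR0[2:1]: XMM + YMM state
    return has_osxsave() && (xgetbv0() & 0x06) == 0x06;
}
static bool os_saves_zmm(void) {  // XCR0[7:5]: opmask, ZMM_Hi256, Hi16_ZMM
    return os_saves_ymm() && (xgetbv0() & 0xE0) == 0xE0;
}
static bool os_saves_amx(void) {  // XCR0[18:17]: AMX TILECFG + TILEDATA
    return os_saves_zmm() && (xgetbv0() & 0x60000) == 0x60000;
}
```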
No interface changes — all call sites continue to use is.AVX2() etc.
The fix is self-contained within cpuid_x86.
Resolves the FIXME in ggml_backend_cpu_x86_score().
Designed and implemented by Matthew Busel.
* K quant speedup (#20)
* Basic JIT compilation for mul_mat, get_rows, and scale (#17)
* scale jit working
* preliminary working jit for getrows and mulmat, needs refining
* simplified mul_mat preprocessing switch statement
* get_rows fixes, mul_mat refinement
* formatted + last edits
* removed some extraneous prints
* fixed get_rows, fixed workgroup dispatch in mul_mat. no gibberish
* small fix
* some changes, working
* get_rows and mul_mat jit fixed and working
* Update formatting
* formatting
* Add header
---------
Co-authored-by: Neha Abbas <nehaabbas@ReeseLevines-MacBook-Pro.local>
Co-authored-by: Reese Levine <reeselevine1@gmail.com>
* Start work on all-encompassing shader library
* refactor argmax, set_rows
* Refactor all but flashattention, mat mul
* no gibberish, all k quants added, merged
* vec memory fix
* q6_k matching metal on my machine, tests passing
* Set tile size for q6_k separately
* Separate out fast shaders
---------
Co-authored-by: neha-ha <137219201+neha-ha@users.noreply.github.com>
* Move towards writeBuffer for params
* Move away from multiple buffers for set_rows errors, remove host buffer for parameter buffers, minor cleanups
* Remove extra file
* Formatting
---------
Co-authored-by: neha-ha <137219201+neha-ha@users.noreply.github.com>
Replace GGML_ABORT("fatal error") in ggml_metal_synchronize() with
error flag + return. This aligns synchronize error handling with
graph_compute, which already returns GGML_STATUS_FAILED for the same
condition.
When a command buffer fails (e.g., iOS GPU access revocation during
backgrounding, macOS eGPU disconnect, OOM), the backend enters an
error state instead of killing the host process. Subsequent
graph_compute calls return GGML_STATUS_FAILED immediately. Recovery
requires recreating the backend.
Failed extra command buffers are properly released on the error path
to avoid Metal object leaks.
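Illustratively, the latch-and-return pattern looks like this; the context fields and helpers are placeholders, not the actual ggml-metal internals:

```c
#include <stdbool.h>

// Stand-ins for the real Metal interactions; names are hypothetical.
struct metal_ctx {
    bool has_error; // latched on the first command-buffer failure
};
static bool wait_for_command_buffers(struct metal_ctx * ctx);      // hypothetical
static void release_extra_command_buffers(struct metal_ctx * ctx); // hypothetical

typedef enum { STATUS_SUCCESS = 0, STATUS_FAILED = -1 } status_t;

// Mirrors the control flow only: latch an error flag and return,
// instead of GGML_ABORT()-ing the whole host process.
static status_t metal_synchronize(struct metal_ctx * ctx) {
    if (ctx->has_error) {
        return STATUS_FAILED;                // fail fast on subsequent calls
    }
    if (!wait_for_command_buffers(ctx)) {    // e.g. command buffer error status
        release_extra_command_buffers(ctx);  // don't leak Metal objects
        ctx->has_error = true;               // recovery = recreate the backend
        return STATUS_FAILED;
    }
    return STATUS_SUCCESS;
}
```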
Enable mul_mv_ext small-batch kernels (BS 2-8) for BF16, Q2_K,
and Q3_K quantization types. These types previously fell through
to the slower single-row mul_mv path.
BF16 uses the float4 dequantize path (like F16). Q2_K and Q3_K
use the float4x4 K-quant path (like Q4_K/Q5_K/Q6_K).
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* ggml-Vulkan: add ELU support
* ggml-Vulkan: remove extra spaces and variables
* ggml-Vulkan: fix format issue
* ggml-Vulkan: fix format issue
* fix whitespace issue
* Update Vulkan.csv and ops.md
* vulkan: Fix data races in coopmat1 mul_mat(_id)
Add barriers between coopmat stores and regular loads. We mostly got away
with this because the same subgroup was accessing the values, but it is
still a data race and is not guaranteed to work.
* switch to subgroup control barriers
* ggml-cuda: add mem check for fusion
* Replace NaNs with -FLT_MAX
* fix typo
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
---------
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
This patch addresses an Internal Compiler Error (Segmentation fault)
observed with GCC 15 by replacing the intrinsic-plus-cast sequence with a
cast on the data first, followed by the intrinsic call. This bypasses the
buggy compiler path while maintaining identical instruction selection.
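As an illustration of the pattern (the actual source lines differ; vec_xl stands in for the affected intrinsic):

```c
#include <altivec.h>

// Before (ICEs under GCC 15): cast the intrinsic's result.
//   vector signed char v = (vector signed char) vec_xl(0, (unsigned char *) p);
// After: cast the data pointer first, then call the intrinsic; GCC emits
// the same prefixed load (plxv) for both forms.
static vector signed char load_s8(const void * p) {
    return vec_xl(0, (const signed char *) p);
}
```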
Performance Verification:
Assembly analysis on RHEL 9 (GCC 15.1.1) confirms that both the original
code and this fix generate the identical Power10 prefixed load instruction:
`plxv 40, 2(14)`
This ensures zero performance regression while unblocking builds on
newer toolchains.
Reproduced on:
- Alpine Linux + GCC 15.2.0-r2
- RHEL 9 + GCC 15.1.1 (gcc-toolset-15)
Signed-off-by: Shalini Salomi Bodapati <Shalini.Salomi.Bodapati@ibm.com>
* CUDA: use shared mem for ssm_conv
* fuse silu + ssm_conv (see the sketch after this list)
* fuse unary + mul
* enable for fp16
* formatting
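For the silu + ssm_conv fusion, the gain is one fewer kernel launch and no intermediate tensor round-trip through global memory; a minimal sketch of the idea, not the actual kernel:

```c
#include <math.h>

// SiLU applied in the epilogue of the conv kernel, instead of a separate
// pass that re-reads the conv output from global memory.
static inline float silu(float x) {
    return x / (1.0f + expf(-x));
}

// ... at the end of the (hypothetical) ssm_conv kernel body:
//     dst[i] = silu(acc);   // fused: no intermediate tensor written for SiLU
```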
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
---------
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* hexagon: add fp16 support for binary ops: add,sub,mul,div
* hexagon: fix test-backend-ops failures for fp16 binary ops on older arches (<v79)
* hexagon: decide on n_threads (aka n_jobs) early to avoid overallocating scratchpad
* snapdragon: fix readme link
---------
Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>
* Adds CPU-to-CUDA copy capability to
ggml_backend_cuda_cpy_tensor_async()
* Adds function to relax sync requirements between input copies on
supported backends (CUDA for now)
* Exchanges synchronous copy with async copy function.
* Adds macro guards to allow compilation in non-CUDA builds
* Reworked backend detection in ggml-backend.cpp to avoid linking
conflicts
* Relax requirement of checks in async CUDA copies from backend and buffer type to just buffer type, to avoid linking issues
* Minor cleanup
* Makes the opt-in for relaxed explicit syncs more general. Backends like
Vulkan, which require a synchronization between HtoD copies and graph
execution, could also adopt this change now.
* Reintroduces stricter check for CPU->CUDA backend async copy via
GGML_DEVICE_TYPE_CPU.
* Corrects initialization of ggml_backend_sync_mode in
ggml_backend_sched_split initialization
* Simplifies synchronizations to adhere to `saaasg` pattern.
* Apply suggestion from @ggerganov (src->buffer to buf_src)
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Apply suggestion from @ggerganov (src->buffer to buf_src) v2
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Enable tmate debugging for investigating thread safety issue
* Refactor wait and submit to operate on vector<wgpu::FutureWaitInfo>, and fix wait to delete only the future that is completed.
* Cleanup
* Remove clear change and run clang-format
* Cleanup
* ggml-webgpu: fix workgroup dispatch limit for large batch sizes
WebGPU limits the workgroup count to 65535 per dimension. Large MUL_MAT
operations with batch sizes exceeding this limit would fail.
* add compute_2d_workgroups() helper to split total workgroup ID across
X/Y dimensions (sketched after this list)
* update mul_mat_reg_tile.wgsl to reconstruct linear workgroup ID from 2D
dispatch
* update mul_mat_subgroup_matrix.wgsl to reconstruct linear workgroup ID
from 2D dispatch
* update mul_mat.wgsl to compute global index from 2D workgroup
coordinates
* refactor all three mul_mat dispatch paths to use the shared helper
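A sketch of what such a helper might do; the ceil-divide split is an assumption about the implementation, not a copy of it:

```c
#include <stdint.h>

#define WEBGPU_MAX_WG_PER_DIM 65535u // maxComputeWorkgroupsPerDimension

// Split a linear workgroup count across X/Y so neither dimension exceeds
// the limit. x * y may overshoot 'total', so the shader must bounds-check
// the reconstructed ID (wg.y * num_wg_x + wg.x); the bounds-check commit
// described next adds exactly that. Totals beyond 65535^2 would still
// need batching (see the TODO below).
static void compute_2d_workgroups(uint32_t total, uint32_t * x, uint32_t * y) {
    if (total <= WEBGPU_MAX_WG_PER_DIM) {
        *x = total;
        *y = 1;
        return;
    }
    *y = (total - 1) / WEBGPU_MAX_WG_PER_DIM + 1; // ceil-divide, no overflow
    *x = (total - 1) / *y + 1;                    // balance X below the limit
}
```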
* ggml-webgpu: add bounds checking for over-dispatched workgroups
2D workgroup dispatch can over-dispatch when total workgroups don't
divide evenly into the 65535 per-dimension limit. Extra workgroups
would compute invalid batch indices, causing memory corruption.
* add batch_idx bound check to mul_mat_reg_tile.wgsl and
mul_mat_subgroup_matrix.wgsl to prevent over-dispatched workgroups
from accessing invalid memory
* fixes test failures with large batch sizes (e.g., bs=[128, 1024])
* ggml-webgpu: add back TODO for splitting large sizes into batches
* Optimize 2d workgroup provisioning
* Set some parameters that increase speed
---------
Co-authored-by: Reese Levine <reeselevine1@gmail.com>
* Allow webgpu_buf_pool to resize if needed; remove inflight_threads in favor of num_kernels for submission
* Run clang-format
* Keep track of num batched kernels that have not been submitted yet
* Run clang-format
* Increase buf pool max size
* Increase param buf pool init size
* Remove webgpu buf pool resizing
* Merge with master
* Add buffer pool growth
* Move buffer pool growth outside of lock
* Reduce max pool size to 32
* Run clang-format
* Only resize param buf pool
* ggml-webgpu: Add binary op support for overlapping and non-contiguous tensors.
* Add newline to binary.wgsl
* Add a binary op test for overlapping src to test_bin_bcast.
* Remove unnecessary newline.
* vulkan: fix and enable cpy_tensor_async function
* use transfer_queue for async transfers on AMD, synchronize with timeline semaphore (illustrated at the end of this list)
* update offload_op logic
* fix missing transfer submission
* disable async transfer queue on AMD GCN
* revert op batch size change
* fix cpy_tensor_async checks
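A hedged sketch of timeline-semaphore signaling on the transfer queue (standard Vulkan 1.2 API; the surrounding backend plumbing is assumed):

```c
#include <vulkan/vulkan.h>

// Submit a transfer command buffer that signals 'timeline' with
// 'signal_value'; graph execution can later wait on that value
// (vkWaitSemaphores, or a wait in its own submit) instead of doing a
// full queue/device sync. Error handling omitted for brevity.
static void submit_transfer(VkQueue transfer_queue, VkCommandBuffer cmd,
                            VkSemaphore timeline, uint64_t signal_value) {
    VkTimelineSemaphoreSubmitInfo ts = {
        .sType = VK_STRUCTURE_TYPE_TIMELINE_SEMAPHORE_SUBMIT_INFO,
        .signalSemaphoreValueCount = 1,
        .pSignalSemaphoreValues    = &signal_value,
    };
    VkSubmitInfo submit = {
        .sType                = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .pNext                = &ts,
        .commandBufferCount   = 1,
        .pCommandBuffers      = &cmd,
        .signalSemaphoreCount = 1,
        .pSignalSemaphores    = &timeline,
    };
    vkQueueSubmit(transfer_queue, 1, &submit, VK_NULL_HANDLE);
}
```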