llama.cpp/ggml/src/ggml-qnn

Latest commit: 332514cd5c by hongruichen — qnn fix: update device capabilities for quantized types in qnn-lib to improve compatibility (2025-06-23 16:04:01 +08:00)

qnn/            qnn fix: update device capabilities for quantized types in qnn-lib to improve compatibility    2025-06-23 16:04:01 +08:00
npu/            npu feat: flash attention support for hexagon-npu (#45)                                        2025-06-18 10:32:08 +08:00
shared/         feat: flash attention support for hexagon-npu (#45)                                            2025-06-18 10:32:08 +08:00
CMakeLists.txt  feat: perf opt part3 (#42)                                                                     2025-05-16 19:57:33 +08:00