bindings
Abort if args are unrecognized, refactor argument passing
2025-12-15 03:18:45 -08:00
evals
Add MMLU eval to GitHub
2024-05-20 10:20:53 -07:00
activations.h
Rollback of erroneous rollback.
2026-03-02 06:50:26 -08:00
api_client.cc
Abort if args are unrecognized, refactor argument passing
2025-12-15 03:18:45 -08:00
api_server.cc
Abort if args are unrecognized, refactor argument passing
2025-12-15 03:18:45 -08:00
attention.cc
Fixed msan error by fixing padding of k_cache and v_cache
2026-03-06 08:11:17 -08:00
attention.h
Fixed msan error by fixing padding of k_cache and v_cache
2026-03-06 08:11:17 -08:00
attention_test.cc
Rollback of erroneous rollback.
2026-03-02 06:50:26 -08:00
configs.cc
Use a struct to manage the mapping between `AttentionImpl` enum values and their string names, simplifying `GetAttentionImplName` function. Add a test to ensure all valid `AttentionImpl` enums have a corresponding name and can be looked up.
2026-02-27 01:31:11 -08:00
configs.h
Fixed msan error by fixing padding of k_cache and v_cache
2026-03-06 08:11:17 -08:00
configs_test.cc
Use a struct to manage the mapping between `AttentionImpl` enum values and their string names, simplifying `GetAttentionImplName` function. Add a test to ensure all valid `AttentionImpl` enums have a corresponding name and can be looked up.
2026-02-27 01:31:11 -08:00
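The `configs.cc` change above describes driving the enum-to-name mapping from a single struct table, which makes `GetAttentionImplName` a simple lookup and lets a test verify every enum value has a name. A minimal sketch of that pattern, with illustrative enum values and names rather than the actual gemma.cpp definitions:

```cpp
#include <cassert>
#include <string_view>

// Hypothetical enum; the real AttentionImpl values live in gemma.cpp's configs.h.
enum class AttentionImpl { kOld, kFlash, kTiled };

// One table entry per enum value; both name lookup and the exhaustiveness
// test iterate this same array.
struct AttentionImplName {
  AttentionImpl impl;
  std::string_view name;
};

constexpr AttentionImplName kAttentionImplNames[] = {
    {AttentionImpl::kOld, "old"},
    {AttentionImpl::kFlash, "flash"},
    {AttentionImpl::kTiled, "tiled"},
};

constexpr std::string_view GetAttentionImplName(AttentionImpl impl) {
  for (const AttentionImplName& entry : kAttentionImplNames) {
    if (entry.impl == impl) return entry.name;
  }
  return "unknown";
}
```

Because the mapping lives in one array, adding a new enum value requires touching only the table, and a test can loop over `kAttentionImplNames` to confirm every entry round-trips.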
flash_attention.cc
Int8 + microscaling support for kv cache formats.
2026-03-09 02:50:08 -07:00
flash_attention.h
Rollback of erroneous rollback.
2026-03-02 06:50:26 -08:00
flash_attention_test.cc
Int8 + microscaling support for kv cache formats.
2026-03-09 02:50:08 -07:00
flash_structs.h
Fixed msan error by fixing padding of k_cache and v_cache
2026-03-06 08:11:17 -08:00
gemma-inl.h
Add tensor stats and output
2025-12-11 22:52:46 -08:00
gemma.cc
Improve instrumentation for ViT parts
2026-02-25 13:10:44 -08:00
gemma.h
Abort if args are unrecognized, refactor argument passing
2025-12-15 03:18:45 -08:00
gemma_args.h
Int8 + microscaling support for kv cache formats.
2026-03-09 02:50:08 -07:00
gemma_args_test.cc
Abort if args are unrecognized, refactor argument passing
2025-12-15 03:18:45 -08:00
kv_cache.cc
Int8 + microscaling support for kv cache formats.
2026-03-09 02:50:08 -07:00
kv_cache.h
Int8 + microscaling support for kv cache formats.
2026-03-09 02:50:08 -07:00
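The recurring "Int8 + microscaling" commit above refers to block-scaled quantization of the KV cache: each small block of values shares one scale, cutting storage from 4 bytes per float to roughly 1 byte per value. A hedged sketch of the general technique, with an illustrative block size and layout that are not the actual gemma.cpp cache format:

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <cstdint>

// Illustrative block size; microscaling formats typically share one scale
// across a small group of elements (e.g. 32).
constexpr size_t kBlockSize = 32;

struct Int8Block {
  float scale;                       // shared scale for the whole block
  std::array<int8_t, kBlockSize> q;  // quantized values in [-127, 127]
};

// Quantize kBlockSize floats: pick the scale so the largest magnitude maps
// to 127, then round each value to the nearest int8 step.
Int8Block Quantize(const float* in) {
  float max_abs = 0.0f;
  for (size_t i = 0; i < kBlockSize; ++i) {
    max_abs = std::max(max_abs, std::fabs(in[i]));
  }
  Int8Block block;
  block.scale = max_abs > 0.0f ? max_abs / 127.0f : 1.0f;
  for (size_t i = 0; i < kBlockSize; ++i) {
    block.q[i] = static_cast<int8_t>(std::lround(in[i] / block.scale));
  }
  return block;
}

float Dequantize(const Int8Block& block, size_t i) {
  return block.scale * static_cast<float>(block.q[i]);
}
```

Round-trip error is bounded by half the scale, so the per-block scale keeps quantization noise proportional to the local magnitude of the cached keys/values rather than the global maximum.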
kv_cache_test.cc
Internal changes
2026-01-09 06:35:36 -08:00
model_store.cc
Add ability to load custom models which are fully described by the ModelConfig blob.
2026-03-02 01:18:33 -08:00
model_store.h
Allow overriding hardcoded max_seq_len by cmdline argument seq_len.
2026-01-08 04:28:59 -08:00
query.h
Warning fixes (sign mismatch), switch default
2025-12-15 02:41:19 -08:00
run.cc
Fix VLM prefill batch size - prompt+tokens
2026-03-05 11:21:55 -08:00
tensor_info.cc
Add tensor stats and output
2025-12-11 22:52:46 -08:00
tensor_info.h
Add tensor stats and output
2025-12-11 22:52:46 -08:00
tensor_info_test.cc
Minor: ModelWeightsPtrs -> WeightsPtrs
2025-07-11 06:11:51 -07:00
tensor_stats.cc
Add int8 quantization stats
2025-12-19 12:43:03 -08:00
tensor_stats.h
Add int8 quantization stats
2025-12-19 12:43:03 -08:00
tiled_attention.cc
Int8 + microscaling support for kv cache formats.
2026-03-09 02:50:08 -07:00
tiled_attention.h
Implementation of tiled attention with bf16 and circular buffers, which reduces memory requirements by 4x at longer contexts on Gemma models.
2026-02-24 03:26:49 -08:00
tiled_attention_test.cc
Int8 + microscaling support for kv cache formats.
2026-03-09 02:50:08 -07:00
tokenizer.cc
(Resubmit) Prepare profiler annotations for new API
2025-08-13 01:38:24 -07:00
tokenizer.h
6x large-batch, short-prompt prefill speedup
2025-06-10 09:56:20 -07:00
vit.cc
Improve instrumentation for ViT parts
2026-02-25 13:10:44 -08:00
vit.h
Minor: ModelWeightsPtrs -> WeightsPtrs
2025-07-11 06:11:51 -07:00
weights.cc
Minor: ParallelismStrategy->Parallelism
2025-11-06 06:56:10 -08:00
weights.h
Add tensor stats and output
2025-12-11 22:52:46 -08:00