llama.cpp/ggml/include
graehl bc39aa67f9 examples/finetune -opt SGD (stochastic gradient descent) memory opt
add unit-tested GGML_OPT_OPTIMIZER_SGD to ggml - avoids allocating the
AdamW m, v moment tensors.

support finetune.cpp arg -opt SGD (or sgd); the default remains adamw, as before.

llama 3.2-1b-F32 result (finetuning on 100 lines of wikipedia): observed
11 GB GPU RAM (41 sec/epoch) with SGD vs. 19 GB (55 sec/epoch) with adamw.
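
rough arithmetic for where most of that saving comes from (assuming
full-precision optimizer state and roughly 1.2e9 parameters for llama 3.2-1b):

    adamw m, v moments: 2 tensors * 4 bytes * ~1.2e9 params ~= 9-10 GB
    observed difference: 19 GB - 11 GB = 8 GB (same order; the exact gap
    depends on what else is resident on the GPU)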

(
with the same GPU memory, adamw can only fit 512 batch/context before OOM,
reaching:
train: [███████▉] data=0000140/0000140 loss=0.02575±0.00099 acc=99.52±0.03% t=00:00:47 ETA=00:00:00
val:   [███████▉] data=0000008/0000008 loss=4.76565±0.28810 acc=41.46±0.77% t=00:00:00 ETA=00:00:00

SGD comes out ahead despite converging more slowly: it fits up to 1728
batch/context before OOM (note especially the better validation accuracy):
train: [███████▉] data=0000039/0000039 loss=0.00371±0.00010 acc=99.96±0.01% t=00:00:41 ETA=00:00:00
val:   [███████▉] data=0000003/0000003 loss=5.11406±0.76034 acc=48.01±0.69% t=00:00:01 ETA=00:00:00
)

note: when finetuning long enough (or with a large enough -lr), validation
accuracy *eventually* drops ('catastrophic forgetting').

the -lr-half (half-life) option is useful for SGD to avoid oscillation or
overly slow convergence (it makes setting -lr more forgiving). for now the
terminal -lr is set via -lr-halvings, i.e. if you want at most 1/8 of the
initial -lr, set -lr-halvings 3 (see the worked example below).
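
a worked example of the schedule as described (the exact decay shape is an
assumption; only the 0.5^halvings floor is stated above):

    terminal lr = initial lr * 0.5^lr_halvings    e.g. -lr 1e-4 -lr-halvings 3 -> 1.25e-5
    lr(t)       = initial lr * 0.5^(t / lr_half), never dropping below the terminal lr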

note: objective loss may not be directly comparable between adamw and sgd -
check perplexity or accuracy, or compare relative improvements, to judge
convergence.
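
(for reference: if the reported loss is mean negative log-likelihood per token
in nats, then perplexity = exp(loss), so either metric can be tracked for
relative improvement.)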

new finetune args: -wd 1e-9 enables weight decay in sgd or adamw, and
-epochs N caps the number of epochs (default 2, as before).
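
for illustration, a possible invocation combining these options (the binary
name, model/data flags, and specific values are placeholders - check the
finetune example's --help for the real interface):

    ./llama-finetune -m llama-3.2-1b-F32.gguf -f wiki-100-lines.txt \
        -opt sgd -lr 1e-4 -lr-half 10 -lr-halvings 3 -wd 1e-9 -epochs 2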

caching (1 - wd*alpha) in the 'adamw' opt struct gave no noticeable perf
benefit, so it is disabled there (it is still done for the new SGD path);
see the sketch below.
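
a minimal sketch of the SGD step as described, in the spirit of the ggml
implementation but not its actual code (names here are illustrative only):

    #include <stdint.h>

    // one SGD step with decoupled weight decay over a flat parameter tensor;
    // keep = (1 - alpha*wd) is the value cached once per step, not per element
    static void sgd_step(float * w, const float * g, int64_t n, float alpha, float wd) {
        const float keep = 1.0f - alpha * wd;
        for (int64_t i = 0; i < n; ++i) {
            w[i] = keep * w[i] - alpha * g[i];   // no m, v moment state needed
        }
    }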

since optimizer memory is pre-allocated, the ggml_opt_get_optimizer_params
callback could probably switch between SGD and AdamW with each epoch, but
would need to use adamw for the first epoch (presumably so the m, v tensors
get allocated); unconfirmed - there is no cmdline arg to set such a policy
yet. a sketch of what that might look like follows.
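
hypothetical sketch of such a per-epoch policy callback (the
GGML_OPT_OPTIMIZER_ADAMW enum value and the optimizer/adamw/sgd field names
are assumptions about the updated ggml-opt.h and may not match it):

    // userdata points at the current epoch index; use AdamW first so its
    // m, v tensors exist, then switch to SGD for the cheaper update.
    static struct ggml_opt_optimizer_params epoch_policy(void * userdata) {
        const int epoch = *(const int *) userdata;
        struct ggml_opt_optimizer_params p = ggml_opt_get_default_optimizer_params(NULL);
        if (epoch == 0) {
            p.optimizer   = GGML_OPT_OPTIMIZER_ADAMW;  // assumed enum name
            p.adamw.alpha = 1e-4f;
        } else {
            p.optimizer   = GGML_OPT_OPTIMIZER_SGD;
            p.sgd.alpha   = 1e-4f;
            p.sgd.wd      = 1e-9f;                     // assumed field names
        }
        return p;
    }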

test-opt checks adamw as before and now also sgd (except for a few tests
disabled for sgd only; these probably just need their values logged and
alternate reference values added). tolerance on the 'regression' test is
broader for sgd (so we don't need many more epochs).
2025-07-22 11:27:05 -07:00
ggml-alloc.h ggml : upgrade init_tensor API to return a ggml_status (#11854) 2025-02-28 14:41:47 +01:00
ggml-backend.h vulkan: Add fusion support for RMS_NORM+MUL (#14366) 2025-06-29 09:43:36 +02:00
ggml-blas.h ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-cann.h ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-cpp.h ggml : fix ggml_gallocr_ptr type (ggml/1205) 2025-05-01 09:58:44 +03:00
ggml-cpu.h ggml : add ggml_set_rows (#14274) 2025-06-27 16:41:40 +03:00
ggml-cuda.h ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-metal.h repo : update links to new url (#11886) 2025-02-15 16:40:57 +02:00
ggml-opencl.h Introducing experimental OpenCL backend with support for Qualcomm Adreno GPUs (#10693) 2024-12-13 12:23:52 -08:00
ggml-opt.h examples/finetune -opt SGD (stochastic gradient descent) memory opt 2025-07-22 11:27:05 -07:00
ggml-rpc.h rpc : do not wait for response when sending RPC_CMD_SET_TENSOR (#12943) 2025-04-25 10:08:08 +03:00
ggml-sycl.h ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-vulkan.h vulkan: Make Vulkan optional at runtime (#11493). (#11494) 2025-02-10 07:17:21 +01:00
ggml-webgpu.h ggml: Add initial WebGPU backend (#14521) 2025-07-16 18:18:51 +03:00
ggml.h examples/finetune -opt SGD (stochastic gradient descent) memory opt 2025-07-22 11:27:05 -07:00
gguf.h GGUF: C++ refactor, backend support, misc fixes (#11030) 2025-01-07 18:01:58 +01:00