In the following post, we will briefly cover the different optimizers to understand what fastai uses to achieve such good performance in a very small number of epochs.
From Vanilla SGD to RAdam
During stochastic gradient descent, we update our weights using the learning rate and the gradient of the loss function with respect to the weights, as follows:
new_weight = weight - lr * weight.grad
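To make the update rule concrete, here is a minimal sketch of one manual SGD step in plain PyTorch; the tiny model, data, and learning rate are made up purely for illustration.

import torch

# Toy model and data, purely for illustration
model = torch.nn.Linear(3, 1)
x, y = torch.randn(16, 3), torch.randn(16, 1)
lr = 0.03

# Forward and backward pass to populate the gradients
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()

# One vanilla SGD step: new_weight = weight - lr * weight.grad
with torch.no_grad():
    for weight in model.parameters():
        weight -= lr * weight.grad
        weight.grad.zero_()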
We know from our visualizations of the loss function that there are many local minima, and
it is possible for our optimization algorithm to end up in one of them. To avoid that we can use momentum,
which simply ensures that the update direction also takes into account the direction of previous iterations.
weight.avg = beta * weight.avg + (1-beta) * weight.grad
new_weight = weight - lr * weight.avg

It is common to use fairly high momentum, with a beta of around 0.9. Note that fit_one_cycle in fastai will change the amount of momentum during training. The next improvement we can make is to adjust the learning rate for each parameter, boosting weights whose gradients are small and damping volatile ones, as follows:

w.square_avg = alpha * w.square_avg + (1-alpha) * (w.grad ** 2)
new_w = w - lr * w.grad / math.sqrt(w.square_avg + eps)

Note that this way, weights with consistently large gradients take smaller steps (eps is added for numerical stability). This method is called RMSProp, and if we add the "unbiased mean" of the gradient we get Adam, which is the default in fastai and looks like this:

w.avg = beta1 * w.avg + (1-beta1) * w.grad
unbias_avg = w.avg / (1 - (beta1**(i+1)))
w.sqr_avg = beta2 * w.sqr_avg + (1-beta2) * (w.grad ** 2)
new_w = w - lr * unbias_avg / math.sqrt(w.sqr_avg + eps)
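To see these update rules in action, here is a minimal, self-contained sketch that applies the Adam step (which combines the momentum average in the numerator and the RMSProp scaling in the denominator) to a toy tensor; the loss, tensor, and hyperparameters are made up for illustration, and it follows the simplified equations above rather than a full library implementation.

import torch

def adam_step(w, state, lr=0.003, beta1=0.9, beta2=0.99, eps=1e-8, i=0):
    # Running (biased) average of the gradient, as in momentum
    state['avg'] = beta1 * state['avg'] + (1 - beta1) * w.grad
    # Bias-corrected ("unbiased") average
    unbias_avg = state['avg'] / (1 - beta1 ** (i + 1))
    # Running average of the squared gradient, as in RMSProp
    state['sqr_avg'] = beta2 * state['sqr_avg'] + (1 - beta2) * w.grad ** 2
    # Adam update: momentum in the numerator, RMSProp scaling in the denominator
    with torch.no_grad():
        w -= lr * unbias_avg / (state['sqr_avg'] + eps).sqrt()

# Toy example: one parameter vector, one quadratic loss
w = torch.tensor([1.0, -2.0], requires_grad=True)
state = {'avg': torch.zeros_like(w), 'sqr_avg': torch.zeros_like(w)}
for i in range(5):
    loss = (w ** 2).sum()
    loss.backward()
    adam_step(w, state, i=i)
    w.grad.zero_()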
According to the RAdam paper, we need to use a progressive schedule of learning rates if we want to make sure the initial learning rate does not significantly affect the loss after many epochs. This approach, which is implemented in a similar spirit in fastai's fit_one_cycle method, allows us to get fairly good results for different initial rates by following a warm-up and cool-down strategy. In the paper, this is called rectified Adam, or RAdam.
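As a rough illustration of the warm-up and cool-down idea (not fastai's exact schedule), here is a sketch of a one-cycle-style schedule where the learning rate warms up while momentum cools down, and then both reverse; the cosine shape, phase split, and hyperparameters are assumptions for the example.

import math

def one_cycle(pct, lr_max=0.03, lr_start=0.003, mom_max=0.95, mom_min=0.85, warmup=0.25):
    # Return (lr, momentum) for a training progress pct in [0, 1]
    def cos_anneal(start, end, t):
        return start + (end - start) * (1 - math.cos(math.pi * t)) / 2

    if pct < warmup:                      # warm-up phase
        t = pct / warmup
        return cos_anneal(lr_start, lr_max, t), cos_anneal(mom_max, mom_min, t)
    t = (pct - warmup) / (1 - warmup)     # cool-down phase
    return cos_anneal(lr_max, lr_start / 10, t), cos_anneal(mom_min, mom_max, t)

# Example: inspect the schedule over 100 steps
schedule = [one_cycle(i / 100) for i in range(101)]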
Implementation notes

We can implement an optimizer ourselves or call fit_one_cycle to leverage a fairly robust optimization workflow. In this notebook, I show how fit_one_cycle is way better than our vanilla, from-scratch SGD. Note that the notebook contains experiments with all the optimizers and a brief summary of the callbacks used.

This is our vanilla SGD for one step:

def sgd_cb(p, lr, **kwargs): p.data.add_(p.grad.data, alpha=-lr)

We wrap the fastai Optimizer class with our callback using partial:

opt_func = partial(Optimizer, cbs=[sgd_cb])

And we are ready to start training:

learn = get_learner(opt_func=opt_func)
learn.fit(3, 0.03)

We can compare this with fit_one_cycle:

learn.fit_one_cycle(3, 0.03)

We got ~30% with vanilla SGD and ~60% with fit_one_cycle, with 3 epochs and a base learning rate of 0.03.
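Following the same callback pattern, we could go one step further and add momentum on top of the plain SGD step. The sketch below reuses the get_learner helper from above; the function names are our own, it assumes fastai v2's fastai.optimizer module, and the exact state-callback contract of the Optimizer class may differ slightly between versions.

import torch
from functools import partial
from fastai.optimizer import Optimizer

def average_grad(p, mom, grad_avg=None, **kwargs):
    # Keep a running average of the gradients as per-parameter state
    if grad_avg is None: grad_avg = torch.zeros_like(p.grad.data)
    return {'grad_avg': grad_avg * mom + p.grad.data}

def momentum_step(p, lr, grad_avg, **kwargs):
    # Step in the direction of the averaged gradient instead of the raw one
    p.data.add_(grad_avg, alpha=-lr)

opt_func = partial(Optimizer, cbs=[average_grad, momentum_step], mom=0.9)

learn = get_learner(opt_func=opt_func)
learn.fit(3, 0.03)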
Conclusion
Optimizers are important to achieve high performance in the least amount of time. To achieve that, we need to add to vanilla SGD momentum, per-weight learning rates, and learning rates and momentum that change over the training cycle, to ensure we make the best out of each batch. We explained what is inside fit_one_cycle (RAdam on steroids!) to encourage you to use it instead of the standard fit or other partial optimization algorithms that will not cover everything mentioned here. Based on my experience, I encourage development effort to go into customizations (MoCo, GANs, Pix2Pix) rather than into reinventing the learning rate policy over and over.

With this I finish the series I started this year! Here is what I will write about next:

1) What can ML/AI/Data Science do to make fashion more sustainable?
2) How to bring Deep Learning models into production in the context of MLOps?
3) How to leverage Transformers and Stable Diffusion using Hugging Face?

That will certainly keep me busy! Stay sustainable and never forget to learn!