Zero-Weight Decay on BatchNorm and Bias

Weight decay is a regularization parameter that prevents the model weights from 'exploding'. Zeroing the weight decay for BatchNorm and bias parameters is done by default in many projects and frameworks, but it's still worth checking, since it is not the default behavior in PyTorch.

Weight decay essentially pulls the weights towards 0. While this is beneficial for convolutional and linear layer weights, BatchNorm parameters are meant to scale (the gamma parameter) and shift (the beta parameter) the normalized input of the layer. Forcing these parameters toward zero distorts that learned distribution and leads to inferior results. The same reasoning is commonly applied to bias terms, which shift activations rather than weight the inputs and gain little from being pulled toward zero.
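As a rough sketch of how this can be done in PyTorch, the model's parameters can be split into two optimizer parameter groups: one with the usual weight decay and one with weight decay set to zero for normalization-layer parameters and biases. The model, learning rate, and weight decay values below are illustrative placeholders, not values from any particular project.

```python
import torch
import torch.nn as nn


def split_weight_decay_params(model: nn.Module):
    """Split parameters into those that should receive weight decay
    (conv/linear weights) and those that should not (norm parameters
    and all biases)."""
    norm_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d,
                  nn.GroupNorm, nn.LayerNorm)
    decay, no_decay = [], []
    for module in model.modules():
        # recurse=False so each parameter is visited exactly once,
        # together with the module that owns it.
        for name, param in module.named_parameters(recurse=False):
            if not param.requires_grad:
                continue
            if isinstance(module, norm_types) or name == "bias":
                no_decay.append(param)
            else:
                decay.append(param)
    return decay, no_decay


# Example usage with a toy model (hypothetical architecture).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, bias=False),
    nn.BatchNorm2d(16),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 30 * 30, 10),
)

decay, no_decay = split_weight_decay_params(model)
optimizer = torch.optim.SGD(
    [
        {"params": decay, "weight_decay": 1e-4},   # regular weights
        {"params": no_decay, "weight_decay": 0.0},  # BN params and biases
    ],
    lr=0.1,
    momentum=0.9,
)
```

Passing parameter groups to the optimizer is standard PyTorch behavior: any hyperparameter specified inside a group overrides the optimizer-level default for that group, so the weights keep their regularization while BatchNorm parameters and biases are left untouched.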
