I have a model in which an embedding layer (nn.Embedding) and the final nn.Linear projection layer share weights via weight tying.
The usual best practice seems to be to exclude embedding weights from weight decay but to apply decay to linear layer weights. Since the tied tensor is both an embedding weight and a linear weight at the same time, what should I do in this situation?
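To make the setup concrete, here is a minimal sketch of what I mean (toy model, hypothetical names, not my actual code): the embedding and the output head share one weight tensor, and the optimizer splits parameters into decay / no-decay groups.

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.body = nn.Linear(d_model, d_model)  # stand-in for the rest of the model
        self.head = nn.Linear(d_model, vocab_size, bias=False)
        self.head.weight = self.embed.weight  # weight tying: same Parameter object

    def forward(self, idx):
        return self.head(self.body(self.embed(idx)))

model = TinyLM()

# Typical recipe: decay nn.Linear weights, skip embeddings / biases / norms.
# Because of tying, embed.weight and head.weight are the same Parameter, so
# (as far as I can tell) named_parameters() yields it only once, here under
# the name "embed.weight", and it can only sit in one group. Which group it
# should sit in is exactly my question.
decay, no_decay = [], []
for name, p in model.named_parameters():
    if p.dim() >= 2 and "embed" not in name:
        decay.append(p)
    else:
        no_decay.append(p)

optimizer = torch.optim.AdamW(
    [
        {"params": decay, "weight_decay": 0.01},
        {"params": no_decay, "weight_decay": 0.0},
    ],
    lr=3e-4,
)
```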
Here are the pages I have already checked without finding an answer:
- Weight decay in the optimizers is a bad idea (especially with BatchNorm)
- Weight decay exclusions by michaellavelle · Pull Request #24 · karpathy/minGPT · GitHub
- https://github.com/karpathy/minGPT/blob/3ed14b2cec0dfdad3f4b2831f2b4a86d11aef150/mingpt/model.py#L136
- regularization - Why not perform weight decay on layernorm/embedding? - Cross Validated
- https://github.com/pytorch/examples/blob/main/word_language_model/model.py#L28
- python - Tying weights in neural machine translation - Stack Overflow
- Weight decay only for weights of nn.Linear and nn.Conv*