How can I convert the following batch normalization layer from TensorFlow to PyTorch?
I couldn’t find equivalents for some of its arguments in PyTorch’s batchnorm layer.
Based on the docs, let’s compare the arguments.
decay corresponds to 1 - momentum in PyTorch.
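To make the decay ↔ momentum relation concrete, here is a small sketch (the decay value of 0.9 is illustrative, not taken from the original layer). PyTorch updates its running mean as `(1 - momentum) * running_mean + momentum * batch_mean`, which matches TF’s `decay * running_mean + (1 - decay) * batch_mean` when `momentum = 1 - decay`:

```python
import torch
import torch.nn as nn

tf_decay = 0.9                      # hypothetical TF decay value
bn = nn.BatchNorm2d(3, momentum=1 - tf_decay)
bn.train()

x = torch.randn(8, 3, 4, 4)
bn(x)                               # one forward pass updates the running stats

# PyTorch's update rule: running_mean = (1 - momentum) * running_mean + momentum * batch_mean
batch_mean = x.mean(dim=(0, 2, 3))
expected = (1 - bn.momentum) * torch.zeros(3) + bn.momentum * batch_mean
print(torch.allclose(bn.running_mean, expected, atol=1e-6))  # prints True
```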
scale seems to correspond to the learnable affine transformation (affine in PyTorch).
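As a quick sketch of what affine controls: with affine=True the layer has learnable gamma/beta parameters (weight and bias), while affine=False leaves them unset:

```python
import torch.nn as nn

bn_affine = nn.BatchNorm2d(3, affine=True)
print(bn_affine.weight.shape, bn_affine.bias.shape)  # learnable gamma and beta

bn_plain = nn.BatchNorm2d(3, affine=False)
print(bn_plain.weight, bn_plain.bias)                # both are None
```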
is_training can be achieved by calling
.train() or .eval() on the module.
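A minimal sketch of the mode switch: in train mode the layer normalizes with batch statistics (and updates its running estimates), while in eval mode it uses the stored running estimates, so the same input is normalized differently:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3)
x = torch.randn(8, 3, 4, 4)

bn.train()          # batch statistics, running estimates get updated
y_train = bn(x)

bn.eval()           # stored running estimates are used instead
y_eval = bn(x)

print(torch.allclose(y_train, y_eval))
```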
I’m not sure what
scope means, and the docs are quite confusing to me.
Your layer would therefore look like:
bn = nn.BatchNorm2d(num_features, momentum=1 - decay, affine=True)
bn.train()
PS: some arguments and calls, such as
.train(), are the defaults anyway, but I’ve added them for clarification.