While I'm trying to use the instance_norm function from torch.nn.functional with my own weight and bias, it raises an error like the one below:
Traceback (most recent call last):
  File "learn_train.py", line 320, in <module>
    pred_v = model([inputParaTensor_v, inputTensor_v])
  File "/root/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 71, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/root/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "learn_train.py", line 90, in forward
    conv1_feat = self.relu(F.layer_norm(conv1_feat_norm))
AttributeError: module 'torch.nn.functional' has no attribute 'layer_norm'
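For reference, here is a minimal sketch of the kind of call I'm after (the tensor shapes are placeholders, and the weight/bias keywords follow the signature in the current docs):

import torch
import torch.nn.functional as F

# A 4-D (N, C, H, W) input with my own per-channel affine parameters.
x = torch.randn(8, 16, 32, 32)
weight = torch.ones(16)   # my own scale, one value per channel
bias = torch.zeros(16)    # my own shift, one value per channel

# instance_norm computes statistics from the input itself when
# use_input_stats=True (the default), so no running stats are needed.
out = F.instance_norm(x, weight=weight, bias=bias)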
What version of PyTorch are you using?
If you are using the latest binary release, which is 0.3.1, this function does not exist; it is not in the docs for that version.
layer_norm has been added on master and is only available at the moment if you compile from source.
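In the meantime you can compute the normalization by hand with basic tensor ops. This is only a rough stand-in for F.layer_norm over the last dimension, written against ops that exist in 0.3.1; the eps default is an assumption:

import torch

def manual_layer_norm(x, weight=None, bias=None, eps=1e-5):
    # Normalize over the last dimension, i.e. the equivalent of
    # F.layer_norm with normalized_shape = (x.size(-1),).
    mean = x.mean(-1, keepdim=True)
    var = ((x - mean) ** 2).mean(-1, keepdim=True)  # biased variance
    out = (x - mean) / torch.sqrt(var + eps)
    if weight is not None:   # per-feature scale, shape (x.size(-1),)
        out = out * weight
    if bias is not None:     # per-feature shift, shape (x.size(-1),)
        out = out + bias
    return out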
I think torch removed the interpolate function from nn.functional and created equivalent modules for Upsampling and the other modes.
It's the opposite of what you said: interpolate was added to nn.functional, and the nn.Upsample module is what was deprecated.
I checked my PyTorch version. It is 0.4.0 (py36_cuda8.0.61_cudnn7.1.2_1). So it was my mistake: as per the docs, nn.Upsample() is deprecated in versions > 0.4.0.
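For anyone who finds this thread later, here is a minimal sketch comparing the two routes on a recent release (the input shape and scale factor are arbitrary):

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 3, 16, 16)

# Deprecated module route (still works, but warns on newer releases):
up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
y_module = up(x)

# Functional replacement:
y_func = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)

# Both produce the same result.
assert torch.allclose(y_module, y_func)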