Here’s a minimal example (never mind that it looks strange):
import torch as t
import torch.nn as nn
from torch.autograd import Variable

x = Variable(t.rand(10000000, 1)).cuda()
bn = nn.BatchNorm1d(1)
bn.cuda()
xbn = bn(x)
I get the stack trace below. I recall from another thread that I would need to build PyTorch from the R4 branch to get rid of this; is that still the case? I'm using PyTorch built from master about a month ago on an AWS Deep Learning Ubuntu AMI instance, and that is where I hit this error.
Thanks!
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-25-6584c2ec408e> in <module>()
6 bn = nn.BatchNorm1d(1)
7 bn.cuda()
----> 8 x1cbn = bn(x)
9 x1cbn.size()
/home/ubuntu/src/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
200
201 def __call__(self, *input, **kwargs):
--> 202 result = self.forward(*input, **kwargs)
203 for hook in self._forward_hooks.values():
204 hook_result = hook(self, input, result)
/home/ubuntu/src/anaconda2/lib/python2.7/site-packages/torch/nn/modules/batchnorm.pyc in forward(self, input)
41 return F.batch_norm(
42 input, self.running_mean, self.running_var, self.weight, self.bias,
---> 43 self.training, self.momentum, self.eps)
44
45 def __repr__(self):
/home/ubuntu/src/anaconda2/lib/python2.7/site-packages/torch/nn/functional.pyc in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps)
387 training=False, momentum=0.1, eps=1e-5):
388 f = torch._C._functions.BatchNorm(running_mean, running_var, training, momentum, eps)
--> 389 return f(input, weight, bias)
390
391
RuntimeError: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.
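For what it's worth, here is a minimal CPU sketch of one workaround I've been experimenting with, assuming the failure comes from cuDNN rejecting the very large batch in a single call: split the input into chunks and concatenate the results. The chunk size and the eval() call are my choices, not part of the original example; eval() makes BatchNorm use its running statistics, so the chunked output matches a single full pass (in training mode, per-chunk batch statistics would differ).

```python
import torch
import torch.nn as nn

# Shown on CPU for illustration; in practice the tensors would be .cuda().
x = torch.rand(1000000, 1)
bn = nn.BatchNorm1d(1)
bn.eval()  # use running stats so chunked results equal the full-pass result

# Hypothetical workaround: process the batch in chunks small enough for cuDNN.
chunks = [bn(c) for c in x.split(100000)]  # 100000 rows per chunk is arbitrary
xbn = torch.cat(chunks)
```

Another option I've seen mentioned is setting torch.backends.cudnn.enabled = False before the forward pass, which falls back to the non-cuDNN implementation, though I don't know if that is the recommended fix.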