Can anyone help test a snippet?

I'm hitting an error on both of my machines; I don't know whether someone else can reproduce it.

import torch
import torch.nn.functional as F
from torch.autograd import Variable

# Allocate an uninitialized 1x1x16384x16384 CUDA tensor and track gradients.
x = Variable(torch.cuda.FloatTensor(1, 1, 16384, 16384), requires_grad=True)
# Expand along the batch dimension without copying (a stride-0 view).
y = x.expand(2, x.size(1), x.size(2), x.size(3))
# Random sampling grid of shape (N, H_out, W_out, 2).
grid = torch.rand(2, 1, 1, 2)
z = F.grid_sample(y, Variable(grid.cuda()))
z.sum().backward()

I get this error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/lib/python2.7/site-packages/torch/autograd/variable.py", line 156, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/opt/conda/lib/python2.7/site-packages/torch/autograd/__init__.py", line 98, in backward
    variables, grad_variables, retain_graph)
  File "/opt/conda/lib/python2.7/site-packages/torch/autograd/function.py", line 91, in apply
    return self._forward_cls.backward(self, *args)
  File "/opt/conda/lib/python2.7/site-packages/torch/autograd/function.py", line 194, in wrapper
    outputs = fn(ctx, *tensor_args)
  File "/opt/conda/lib/python2.7/site-packages/torch/nn/_functions/vision.py", line 48, in backward
    grad_output)
RuntimeError: CUDNN_STATUS_BAD_PARAM

It works fine in CPU mode.
It also works fine when I replace the last line with z.backward(z.data.clone().fill_(1)).
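
For anyone reproducing this, here is the working variant spelled out (a minimal sketch in the same torch 0.2-era API; the grad_z name is mine, not from the original run):

# Same graph as above, but the backward pass is fed an explicitly
# allocated, contiguous gradient instead of going through z.sum().
grad_z = z.data.clone().fill_(1)  # contiguous CUDA tensor, same shape as z
z.backward(grad_z)                # succeeds, unlike z.sum().backward()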

torch version is '0.2.0+0cd149f'

Can someone try it? At the very least, tell me whether it happens on your machine.


Hi

I get the same error. What do you need in terms of details?

Best regards

Thomas

This seems like a bug, right?
It works when I call z.backward(z.data.clone().fill_(1)) instead.
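
One hypothesis (mine, not confirmed in this thread): sum()'s backward expands a scalar gradient of 1 to z's shape, handing grid_sample a stride-0, non-contiguous grad_output, which cuDNN may reject with CUDNN_STATUS_BAD_PARAM; z.data.clone().fill_(1) is contiguous, so it passes. A quick layout check, continuing from the snippet above:

# An expanded ones tensor (roughly what sum's backward produces) is a
# stride-0 broadcast view, so it is not contiguous; the clone/fill_ one is.
expanded = torch.cuda.FloatTensor(1).fill_(1).expand(*z.size())
print(expanded.is_contiguous())                 # False
print(z.data.clone().fill_(1).is_contiguous())  # True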


Same problem for me; I found this thread when I googled a similar issue with the grid_sample backward pass.