Adding .cuda() gives RuntimeError: CUDNN_STATUS_BAD_PARAM

I have the following code.

import torch
from torch.autograd import Variable
import torch.nn as nn

def conv3d(in_channels, out_channels, kernel_size=4, stride=2, padding=1):
    return nn.Conv3d(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding, bias=True)

class G_encode(nn.Module):
    def __init__(self):
        super(G_encode, self).__init__()
        # NOTE: the contents of nn.Sequential were truncated in the post;
        # a single conv3d block is enough to reproduce the error.
        self.model = nn.Sequential(
            conv3d(3, 64),
        )

    def forward(self, x):
        print('G_encode Input =', x.size())
        out = self.model(x)
        print('G_encode Output =', out.size())
        return out

x = Variable(torch.rand([1,3,1,64,64])).cuda()
model = G_encode().cuda()
out = model(x)

This code seems to work fine when I remove .cuda(); with .cuda(), however, it raises RuntimeError: CUDNN_STATUS_BAD_PARAM. My CUDA version is 8.0.61 and the NVIDIA driver version is 384.111.

I can reproduce it. The cause is that your kernel_size is larger than the padded input size (size + 2 * padding): in the depth dimension the input size is 1 and the padding is 1, so the padded size is 3, which is smaller than the kernel size of 4. I will submit a fix to improve the error message.
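To see why this fails, recall that a convolution's output size along each dimension is floor((size + 2 * padding - kernel) / stride) + 1; a non-positive result means the kernel does not fit. A minimal sketch of that check (the helper name conv_out_size is mine, not a PyTorch function):

```python
def conv_out_size(size, kernel, stride, padding):
    # floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

# Depth dimension of the input above: size=1, kernel=4, stride=2, padding=1
print(conv_out_size(1, 4, 2, 1))   # -> 0: invalid, triggers CUDNN_STATUS_BAD_PARAM
print(conv_out_size(64, 4, 2, 1))  # -> 32: height/width are fine
```

For the input above, one workaround is to avoid convolving over the singleton depth dimension, e.g. kernel_size=(1, 4, 4) with stride=(1, 2, 2) and padding=(0, 1, 1), or to squeeze out the depth dimension and use Conv2d instead.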


I have a similar issue and can't seem to fix it. I have an input of dimension (batch_size=64, channels=1, x=80, y=10) and kernel size (17, 1) with no padding. I want the size in the x dimension to shrink after every conv2d operation, with no change in the y dimension.

Later I use ConvTranspose2d, which is where it throws this error.
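With x=80 and a (17, 1) kernel with no padding and stride 1, each Conv2d shrinks x by 16; after a few layers x drops below the kernel size and the same bad-parameter error appears. The same output-size formula (plus its transpose) shows where the chain breaks; a sketch assuming five stacked conv layers, which may not match your exact architecture:

```python
def conv_out(size, kernel, stride=1, padding=0):
    # Conv2d output size along one dimension
    return (size + 2 * padding - kernel) // stride + 1

def conv_transpose_out(size, kernel, stride=1, padding=0):
    # ConvTranspose2d output size along one dimension (output_padding = 0)
    return (size - 1) * stride - 2 * padding + kernel

x = 80
for layer in range(1, 6):
    x = conv_out(x, kernel=17)
    print(layer, x)  # 64, 48, 32, 16, then 0: a fifth conv would be invalid
```

Any layer that would print 0 or a negative size is the one that fails; pad in x, use fewer layers, or pick a smaller kernel. On the decoder side, conv_transpose_out(64, 17) recovers 80, so each ConvTranspose2d's kernel/stride should mirror the corresponding conv in the forward path; a mismatch there can produce similar shape errors.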