[Error] Maxout Layer

I am trying to implement maxout in PyTorch and am running into an error. After the convolution layer I want to apply maxout. Here is the code:
    class CNN(nn.Module):
        def __init__(self):
            super(CNN, self).__init__()
            self.layer1 = nn.Conv2d(3, 4, kernel_size=5, padding=2)

        def forward(self, x):
            out = self.maxout(self.layer1, x)
            return out

        def maxout(self, layer, data):
            x = layer(data)
            kernels = x.data.shape[1]  # number of output channels
            feature_maps = int(kernels / 4)
            out_shape = (x.data.shape[0], feature_maps, 4, x.data.shape[2], x.data.shape[3])
            print(out_shape)
            x.data = x.data.view(out_shape)
            print(x.data.shape)
            #x.data = torch.max(x.data[:, :, :], 2)[0]
            return Variable(torch.max(x.data[:, :, :], 2)[0])

In my case, I am trying to apply maxout over every four feature maps coming out of the convolution. Running the code gives me the following error, since I am returning a new tensor:

    RuntimeError: there are no graph nodes that require computing gradients

And if I instead run the commented line and return x, it gives me this error:

    RuntimeError: Need gradOutput of dimension 4 and gradOutput.size[1] == 4 but got gradOutput to be of shape: [5 x 1 x 1332 x 128] at /opt/conda/conda-bld/pytorch_1503970438496/work/torch/lib/THNN/generic/SpatialConvolutionMM.c:50

Any help solving this would be appreciated.
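For what it's worth, the root cause seems to be mutating `x.data`, which bypasses autograd, so the graph cannot propagate gradients back through the view and max. A minimal sketch of the intended channel-wise maxout that operates on the tensor itself (so autograd tracks both ops) might look like this; the `pool_size` parameter is my own addition for generality:

```python
import torch
import torch.nn as nn


class CNN(nn.Module):
    def __init__(self, pool_size=4):
        super(CNN, self).__init__()
        self.layer1 = nn.Conv2d(3, 4, kernel_size=5, padding=2)
        self.pool_size = pool_size  # feature maps per maxout group

    def forward(self, x):
        return self.maxout(self.layer1, x)

    def maxout(self, layer, data):
        x = layer(data)                  # shape (N, C, H, W)
        n, c, h, w = x.shape
        # Split the channel dimension into groups of `pool_size` maps,
        # then take the element-wise max within each group.
        x = x.view(n, c // self.pool_size, self.pool_size, h, w)
        return x.max(dim=2)[0]           # shape (N, C // pool_size, H, W)
```

Because `view` and `max` are applied to the tensor directly rather than to `.data`, gradients flow through both operations and `backward()` works without the "no graph nodes" error. (This assumes a PyTorch version where plain tensors carry gradient history; on very old versions you would wrap the input in `Variable` but still avoid touching `.data`.)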