L1 penalty on the activations of a layer

To impose an L1 penalty on the activations of a layer, I have the following code:

upconv = nn.ConvTranspose2d(inner_nc, outer_nc,
                                kernel_size=4, stride=2,
                                padding=1, bias=use_bias)

activations_to_regularise = upconv(input)
output = remaining_network(activations_to_regularise)
total_loss = criterion(output, target) + 0.01 * activations_to_regularise.abs().sum()

down = [downrelu, downconv]
up = [uprelu, upconv, upnorm]
model = down + up

But I get the following error:

File "/homeLocal/hugo/seg/pix2pix_torch/pytorch_pix2pix/models/networks.py", line 403, in __init__
activations_to_regularise = upconv(input)
File "/homeLocal/hugo/seg/virtualenv/seg/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 224, in __call__
result = self.forward(*input, **kwargs)
File "/homeLocal/hugo/seg/virtualenv/seg/local/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 566, in forward
output_padding, self.groups, self.dilation)
File "/homeLocal/hugo/seg/virtualenv/seg/local/lib/python2.7/site-packages/torch/nn/functional.py", line 169, in conv_transpose2d
if input is not None and input.dim() != 4:
AttributeError: 'builtin_function_or_method' object has no attribute 'dim'

I think the problem is in how I get the activations of my layer, but I do not know how to solve it. Can someone help me?

Are you sure that the input Variable is defined when you call the line activations_to_regularise = upconv(input)? If no variable named input is in scope at that point, Python resolves the name to its built-in input function, which is exactly the 'builtin_function_or_method' the traceback complains about.
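To make the diagnosis concrete, here is a minimal stdlib-only sketch (no PyTorch required; the names base_loss and activations are stand-ins, not the original code's variables). It shows why the builtin input has no .dim() method, and why the L1 penalty should be reduced to a single scalar before being added to the loss, which in PyTorch would be activations_to_regularise.abs().sum() (or .mean()):

```python
# 1) If no variable named `input` is defined, Python falls back to the
#    builtin input() function, which has no .dim() method -- hence the
#    AttributeError: 'builtin_function_or_method' object has no attribute 'dim'.
assert not hasattr(input, "dim")

# 2) An L1 activation penalty must be a scalar before it is added to the
#    scalar loss; summing the absolute values does the reduction.
def l1_penalty(activations, weight=0.01):
    # weight * sum of absolute activation values
    return weight * sum(abs(a) for a in activations)

base_loss = 0.5                    # stand-in for criterion(output, target)
activations = [0.2, -1.0, 3.0]     # stand-in for the layer's activations
total_loss = base_loss + l1_penalty(activations)
print(total_loss)
```

In the real network the same pattern applies: keep a reference to the layer's output tensor in forward(), and add weight * tensor.abs().sum() to the loss at training time rather than inside the model's __init__.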