Functional.conv2d without gradients

I’m using torch.nn.functional.conv2d to convolve an input with a custom, non-learning kernel, as follows:

import torch

input = torch.randn([1, 3, 300, 300], requires_grad=False)
output = torch.nn.functional.conv2d(input, weight=torch.randn([6, 3, 5, 5]))

What steps do I need to take to prevent gradients from being calculated? I’m trying to save memory.

I’ve declared my input tensor with requires_grad=False.
Is this enough, or was it unnecessary?

Thanks,
Barak.

That should be enough: since neither the input nor the weight requires gradients (requires_grad=False is the default for torch.randn), output won’t have a valid .grad_fn, which indicates that no computation graph was created.
If you try to call backward() on output, you would get the typical error:

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
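
For example, a minimal sketch along these lines (the shapes just mirror your snippet) lets you check .grad_fn and .requires_grad directly:

import torch
import torch.nn.functional as F

input = torch.randn(1, 3, 300, 300)  # requires_grad defaults to False
weight = torch.randn(6, 3, 5, 5)     # this weight does not require gradients either
output = F.conv2d(input, weight)

print(output.grad_fn)        # None, so no computation graph was created
print(output.requires_grad)  # False

# output.mean().backward()   # would raise the RuntimeError quoted above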

You could also wrap this code snippet in a with torch.no_grad() block just to make sure that no gradients are calculated, even if you are using a weight parameter which might require gradients.
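
One way to see this, sketched with an nn.Conv2d module whose weight does require gradients (the channel and kernel sizes again mirror your snippet):

import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv2d(3, 6, 5)            # conv.weight and conv.bias require gradients
input = torch.randn(1, 3, 300, 300)

with torch.no_grad():
    # nothing inside this block is recorded in the autograd graph,
    # even though conv.weight has requires_grad=True
    output = F.conv2d(input, conv.weight, conv.bias)

print(output.requires_grad)  # False
print(output.grad_fn)        # None

This also saves the memory that would otherwise be used to store intermediate activations for the backward pass.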
