torch.no_grad() and "required gradients"

I am receiving the following error:

RuntimeError: Only Tensors of floating point and complex dtype can require gradients.

when I run a convolution operation inside a torch.no_grad() context:

import torch
import torch.nn as nn


with torch.no_grad():
    ker_t = torch.tensor([[1, -1], [-1, 1]])
    in_t = torch.tensor([[14, 7, 6, 2], [4, 8, 11, 1], [3, 5, 9, 10], [12, 15, 16, 13]])
    print(in_t.shape)
    in_t = torch.unsqueeze(in_t, 0)
    in_t = torch.unsqueeze(in_t, 0)
    print(in_t.shape)

    conv = nn.Conv2d(1, 1, kernel_size=2, stride=2, dtype=torch.long)  # RuntimeError is raised here
    conv.weight[:] = ker_t
    conv(in_t)

Now, I am sure that if I turn my input into floats this message will go away, but I want to work in integers.

But I was under the impression that being inside a “with torch.no_grad()” context should turn off the need for gradients.

The error message might be a bit confusing, but torch.no_grad() only stops operations from being recorded for autograd; it does not change the requires_grad attribute of parameters. nn.Conv2d creates its weight and bias as nn.Parameters with requires_grad=True, since they can still require gradients outside of the no_grad() context manager, so you won’t be able to assign LongTensors to them.
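
Here is a minimal sketch of a workaround, assuming you really do want to stay in integer arithmetic: since the module’s parameters cannot be LongTensors, skip nn.Conv2d entirely and compute the same 2x2, stride-2 cross-correlation with Tensor.unfold, which is a view operation and works for any dtype (note that nn.Conv2d would also add a bias term, which is omitted here):

import torch

with torch.no_grad():
    ker_t = torch.tensor([[1, -1], [-1, 1]])
    in_t = torch.tensor([[14, 7, 6, 2], [4, 8, 11, 1], [3, 5, 9, 10], [12, 15, 16, 13]])
    in_t = in_t.unsqueeze(0).unsqueeze(0)  # (N, C, H, W) = (1, 1, 4, 4)

    # The same error fires even inside no_grad(), because nn.Parameter
    # sets requires_grad=True explicitly on an integer tensor:
    # torch.nn.Parameter(ker_t)  # RuntimeError: Only Tensors of floating point ...

    # Extract non-overlapping 2x2 patches -> shape (1, 1, 2, 2, 2, 2),
    # where the last two dims hold each patch.
    patches = in_t.unfold(2, 2, 2).unfold(3, 2, 2)

    # Multiply each patch elementwise by the kernel and sum over it, which
    # is exactly the cross-correlation nn.Conv2d computes (minus the bias).
    out = (patches * ker_t).sum(dim=(-1, -2))
    print(out)  # values: [[11, -6], [1, -4]], shape (1, 1, 2, 2)

Alternatively, if you only need exact integer results, you could build the conv in float32, assign the kernel, and cast the output back with .long(); for small integer values like these the floating point computation is exact.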