Why can't I use 'Conv2d' on the GPU?

Hello everybody. I am confused about something. I've written a small conv net like this:

import torch as t
import torch.nn as nn

# a 4x4 input reshaped to (batch, channels, height, width) = (1, 1, 4, 4)
ts1 = t.tensor([[0.0, 0.0, 0.0, 1.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [1.0, 0.0, 0.0, 0.0]]).view(1, 1, 4, 4)

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.c1 = nn.Conv2d(1, 1, kernel_size=(3, 3))

    def forward(self, x):
        x = self.c1(x)
        return x

mynet = Net()
print(mynet.forward(ts1))

and it can print this answer:

tensor([[[[-0.1707, -0.4558],
          [-0.4558,  0.1861]]]], grad_fn=<ThnnConv2DBackward>)

but if I move this net onto the GPU like this:

if t.cuda.is_available():
    mynet.cuda()
    ts1.cuda()
print(mynet.forward(ts1))

it does not work and shows:

Traceback (most recent call last):
  File "F:/learning/new.py", line 14, in <module>
    print(mynet.forward(ts1))
  File "F:/learning/new.py", line 9, in forward
    x=self.c1(x)
  File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\conv.py", line 301, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 'weight'

I don't understand. Why does self.c1 expect an object of type torch.FloatTensor after I have moved it onto my GPU?

While you can call mynet.cuda() and have it work in place (nn.Module.cuda() moves the module's parameters and buffers themselves), Tensor.cuda() is not an in-place operation: it returns a new tensor on the GPU and leaves the original one on the CPU, so you have to re-assign it as ts1 = ts1.cuda().
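
For example, the failing block works once the tensor is re-assigned (a minimal sketch using the same names as above):

if t.cuda.is_available():
    mynet.cuda()      # nn.Module.cuda() moves the parameters in place (and returns self)
    ts1 = ts1.cuda()  # Tensor.cuda() returns a new tensor, so re-assign it
print(mynet.forward(ts1))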

As a small side note: you should call the model directly with your input instead of using forward:

output = mynet(ts1)

This goes through nn.Module.__call__ and makes sure all registered hooks are properly called.
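
Putting both suggestions together, a complete corrected script could look like this (a sketch only; the .to(device) pattern is just one common way to handle device placement):

import torch as t
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.c1 = nn.Conv2d(1, 1, kernel_size=(3, 3))

    def forward(self, x):
        return self.c1(x)

device = t.device('cuda' if t.cuda.is_available() else 'cpu')

ts1 = t.tensor([[0.0, 0.0, 0.0, 1.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [1.0, 0.0, 0.0, 0.0]]).view(1, 1, 4, 4).to(device)

mynet = Net().to(device)  # re-assigning is harmless even though Module.to() works in place
output = mynet(ts1)       # calling the module triggers __call__, which runs the hooks and forward
print(output)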