I get this error: RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same

I checked in the forward function using print(self.device): the GPU id is cuda:0, and the input tensor is also on device='cuda:0'.

The following is defined in the "def __init__(self, opt):" method of the class:
self.r1 = nn.Conv2d(6, 20, kernel_size=3, stride=1, padding=1)

It is called from the "def forward(self):" method of the class, which is where the error occurs.

After looking through many topics on the forum, I learned that the model should be run on the GPU. But it already runs on the GPU, as I print it in the forward function.
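For comparison, here is a minimal sketch (with an assumed class name `Net`) of the usual setup where this error does not occur: the conv layer is registered inside an nn.Module subclass, so model.cuda() moves its weights along with the model.

```python
import torch
import torch.nn as nn

class Net(nn.Module):  # hypothetical minimal module for illustration
    def __init__(self):
        super().__init__()
        # registered as a submodule, so .cuda()/.to() will move its weights
        self.r1 = nn.Conv2d(6, 20, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        return self.r1(x)

net = Net()
if torch.cuda.is_available():
    net = net.cuda()                       # moves r1.weight to cuda:0
    x = torch.randn(1, 6, 32, 32).cuda()   # input on the same device
    out = net(x)                           # no input/weight type mismatch
```

If the weights stay on the CPU while the input is on CUDA, you get exactly the RuntimeError above.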

Hey, can you do one more thing: print any layer's weights in your model before passing the input.

E.g. print(net.conv1.weight). It will print your weights, and at the end you will also see device='cuda:0', requires_grad=True), which means the weights are on the GPU (cuda).
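A quick sketch of that check, using a stand-in model (the layer name here is an assumption; any nn.Module behaves the same way):

```python
import torch.nn as nn

# stand-in model; substitute your own network
net = nn.Sequential(nn.Conv2d(6, 20, kernel_size=3, padding=1))

# each parameter tensor carries its own device
print(net[0].weight.device)   # "cpu" until you call net.cuda() / net.to('cuda')
```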

I printed the weight; it shows the following.

[ 0.0092, 0.1345, -0.0752]]]], requires_grad=True).

No cuda:0, but when I print(model.device) before that, it shows cuda:0.

If we assign the GPU to the model, shouldn't the layer weights also be assigned to the GPU? Where is the conflict?

Can you do model.to('cuda') before sending the inputs?

I tried, but it shows the error "'model' object has no attribute 'to'".

Can you paste your model here, and the way you send it to the GPU?

I think I have made a silly mistake and not assigned the GPU. Let me explain the flow.

NOTE: "model1" is defined as

from abc import ABC
class model1(ABC):
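This definition is most likely the root of both AttributeErrors: a class deriving only from abc.ABC gets none of PyTorch's device machinery, while an nn.Module subclass does. A small sketch contrasting the two (class names here are illustrative):

```python
from abc import ABC
import torch.nn as nn

class Model1(ABC):            # plain ABC: no .cuda(), no .to(), no parameter tracking
    pass

class Model2(nn.Module):      # nn.Module supplies .cuda()/.to() and registers layers
    def __init__(self):
        super().__init__()
        self.r1 = nn.Conv2d(6, 20, kernel_size=3, stride=1, padding=1)

print(hasattr(Model1(), "cuda"))   # False -> AttributeError when called
print(hasattr(Model2(), "cuda"))   # True  -> model.cuda() moves r1's weights
```

So a Conv2d created inside an ABC-based class is just an ordinary attribute; nothing ever moves its weights to the GPU.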

I am modifying existing code; I'll try to explain it in a simple manner.

I create the model by taking an instance of the class "model1". "model1" doesn't contain the CNN. From "model1" I pass the image to the CNN, which returns a float tensor on CUDA. Now I want to do post-processing in "model1", so I want to apply a convolution as post-processing to the image returned by the CNN. That's the line where the issue comes up.

I created an instance of "model1" and tried to assign CUDA by writing:

(1) model.cuda() — it shows AttributeError: 'model' object has no attribute 'cuda'
(2) model.to('cuda') — it shows AttributeError: 'model' object has no attribute 'to'

How can I check whether the model is running on the CPU or the GPU? Writing print(self.device) in the forward function of "model1" was my mistake, I think.
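One reliable check: an nn.Module has no built-in .device attribute (print(self.device) only works if the class defines one itself), but every parameter tensor knows its device. A sketch with a stand-in model:

```python
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(6, 20, kernel_size=3, padding=1))  # stand-in

# inspect any parameter's device instead of a (nonexistent) model.device
device = next(model.parameters()).device
print(device)   # cpu here; cuda:0 after model.cuda() or model.to('cuda')
```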

Unfortunately, one of the calls below needs to succeed to move the model to the GPU; otherwise it's just a CPU-based model. If you can post sample code for others to debug, that would help.

(1) model.cuda()
(2) model.to('cuda')

I moved the post-processing into the CNN class. Thanks for your prompt reply; it helped in many ways.
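That fix can be sketched as follows, with assumed layer names (the `backbone` conv stands in for the real CNN): putting the post-processing conv inside the nn.Module-based CNN class means one .cuda() call moves everything together.

```python
import torch
import torch.nn as nn

class CNN(nn.Module):
    """Sketch: the original CNN plus the post-processing conv in one module."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 6, kernel_size=3, padding=1)        # placeholder for the real CNN
        self.r1 = nn.Conv2d(6, 20, kernel_size=3, stride=1, padding=1)   # post-processing conv

    def forward(self, x):
        x = self.backbone(x)
        return self.r1(x)

net = CNN()
if torch.cuda.is_available():
    net = net.cuda()   # moves backbone and r1 weights together; no mismatch
```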