Suppose I have this model:
import torch
import torch.nn as nn

class DummYmodel(nn.Module):
    def __init__(self, x, m):
        super().__init__()
        self.x = torch.nn.Parameter(torch.tensor(x))
        self.m = torch.tensor(m)  # plain tensor attribute, not registered with the module

    def forward(self, x):
        pass
I initialize it and move it to CUDA:
dummy = DummYmodel(2.0, 3).cuda()

>>> dummy.x
Parameter containing:
tensor(2., device='cuda:0', requires_grad=True)
>>> dummy.m
tensor(3)
So dummy.m is not on CUDA. Is there a way to make sure that when I move the model to CUDA, everything inside the class gets transferred as well?
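For reference, one mechanism designed for this is nn.Module.register_buffer: a buffer is part of the module's state (it moves with .to()/.cuda() and is saved in the state_dict) but is not a trainable parameter. A minimal sketch of the model above rewritten that way:

```python
import torch
import torch.nn as nn

class DummYmodel(nn.Module):
    def __init__(self, x, m):
        super().__init__()
        self.x = nn.Parameter(torch.tensor(x))
        # Registering m as a buffer ties it to the module, so
        # .cuda()/.to(device) moves it along with the parameters.
        self.register_buffer("m", torch.tensor(m))

    def forward(self, x):
        pass

dummy = DummYmodel(2.0, 3)
if torch.cuda.is_available():
    dummy = dummy.cuda()
    print(dummy.m.device)  # now reports cuda:0
```

A plain tensor attribute, by contrast, is invisible to .cuda() because the module only tracks parameters, buffers, and submodules.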