Transferring everything inside an nn.Module class to CUDA

Suppose I have this model:

import torch
import torch.nn as nn

class DummYmodel(nn.Module):
    def __init__(self, x, m):
        super().__init__()
        self.x = torch.nn.Parameter(torch.tensor(x))  # registered as a parameter
        self.m = torch.tensor(m)                      # plain tensor attribute

    def forward(self, x):
        pass

I initialize it and move it to CUDA:

dummy = DummYmodel(2.0, 3).cuda()

>dummy.x

Parameter containing:
tensor(2., device='cuda:0', requires_grad=True)

>dummy.m

tensor(3)

dummy.m is not on CUDA. Is there a way to make sure that when I transfer the model to CUDA, everything inside the class gets transferred as well?

All nn.Parameters, child nn.Modules, and registered buffers will be transferred to the specified device. Plain tensor attributes like self.m are invisible to .cuda()/.to(). To move self.m along with the module, register it as a buffer:

self.register_buffer('m', torch.tensor(m))
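
This registers m as a buffer under the same attribute name, so dummy.m still works, but now it follows the module across .cuda()/.cpu()/.to() calls and is included in the state_dict. A minimal sketch of the module from the question with the buffer registered (assuming a CUDA-capable device is available):

import torch
import torch.nn as nn

class DummYmodel(nn.Module):
    def __init__(self, x, m):
        super().__init__()
        self.x = torch.nn.Parameter(torch.tensor(x))
        # Buffers move with the module on .cuda()/.to() and are stored
        # in the state_dict, but are not returned by model.parameters().
        self.register_buffer('m', torch.tensor(m))

    def forward(self, x):
        pass

dummy = DummYmodel(2.0, 3).cuda()
print(dummy.x.device)  # cuda:0
print(dummy.m.device)  # cuda:0

Unlike an nn.Parameter, the buffer receives no gradients and is not returned by model.parameters(), so an optimizer will not update it.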