How to make all floating point tensors have the same precision and live on the same device?

For example:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features=200, out_features=1, bias=True)
        )
        self.net1 = nn.Parameter(torch.Tensor(1, 200))

    def forward(self, x):
        # created with the default dtype (float32) on the CPU,
        # regardless of the module's dtype/device
        aa = torch.randn(1, 200)
        m = x * aa
        return m

Although the precision and device are set for the module, the tensor created inside forward is still float32 and lives on the CPU.
To keep the types consistent, the dtype/device has to be specified explicitly again, e.g. via .to(torch.double) / .to("cuda").

You could use torch.randn_like on the input (or on any parameter) that already has the desired dtype and device, or you could pass them explicitly, e.g. torch.randn(..., dtype=self.net1.dtype, device=self.net1.device) (or the same with the input etc.).
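A minimal sketch of both suggestions, applied to the module from the question (the class name Net and the usage at the bottom are illustrative):

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.net1 = nn.Parameter(torch.randn(1, 200))

    def forward(self, x):
        # randn_like inherits dtype AND device from x, so the new tensor
        # always matches the input, no matter how the model was cast/moved
        aa = torch.randn_like(x)
        # alternatively, match a parameter explicitly:
        # aa = torch.randn(1, 200, dtype=self.net1.dtype, device=self.net1.device)
        return x * aa

model = Net().double()                           # cast parameters to float64
x = torch.randn(1, 200, dtype=torch.double)      # input also float64
out = model(x)
print(out.dtype)  # torch.float64
```

With this pattern the randomly created tensor follows the input automatically, so no extra .to(...) call is needed in forward.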

Thank you very much!