When I use setattr() between training runs to replace a layer, the device seems to change too: the new layer's parameters end up on the CPU even though the model was moved to the GPU. Is there a correct way to use setattr() so that the parameters stay on the same device?
For example:
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.fc1 = nn.Linear(3, 3)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.fc1(self.relu(x))
        return x

model = Model()
model.to('cuda')

# Parameters are on the GPU after .to('cuda')
for params in model.parameters():
    print(params.get_device())

# Replace every Linear layer with a freshly constructed one of a new size
for name, layer in model.named_children():
    if type(layer) == nn.Linear:
        setattr(model, name, type(layer)(2, 2))

# The replacement's parameters are now on the CPU
for params in model.parameters():
    print(params.get_device())
This yields:
0
0
-1
-1
(Where 0 is cuda and -1 is cpu.)
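A workaround that seems to keep everything on the GPU is to read the device off the layer being replaced and move the new module there explicitly, since nn.Linear allocates its parameters on the CPU by default. A minimal sketch of that idea:

# Workaround sketch: construct the replacement, then move it to the
# device of the layer it replaces (new modules default to the CPU).
for name, layer in model.named_children():
    if type(layer) == nn.Linear:
        device = next(layer.parameters()).device  # device of the old layer
        setattr(model, name, type(layer)(2, 2).to(device))

But I'm not sure whether this is the intended approach, or whether there is a way to make setattr() (or some other replacement mechanism) respect the model's device automatically.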