dragen (Jackie) | May 24, 2018, 12:32pm
For a self-defined nn.Module, we can easily convert between CPU and CUDA via .to(device) if our variables are defined as:

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.var1 = nn.Parameter(torch.randn(1))
        self.register_buffer('var2', torch.randn(1))
However, this does NOT work when the variables are created at run time, e.g.:

class MyModule(nn.Module):
    def __init__(self, n):
        super().__init__()
        for i in range(n):
            var1 = nn.Parameter(torch.randn(1))  # local variable, never registered
            var2 = torch.randn(1)                # never registered as a buffer
Or:

class MyModule(nn.Module):
    def get(self):
        var1 = nn.Parameter(torch.randn(1))
        var2 = torch.randn(1)
        return var1, var2
In the above two cases, var1 and var2 are NOT converted automatically. How can this be solved elegantly?
ptrblck:
You could use nn.ParameterList:
class MyModule(nn.Module):
    def __init__(self, n):
        super(MyModule, self).__init__()
        # Parameters stored in an nn.ParameterList are registered with
        # the module, so .to(device) moves all of them.
        self.params = nn.ParameterList(
            [nn.Parameter(torch.randn(1)) for i in range(n)])

    def forward(self, x):
        # your forward pass
        return x

model = MyModule(5)
list(model.parameters())
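A quick check (CPU-only here, module and shapes are illustrative) confirms that every parameter in the nn.ParameterList is registered with the module and therefore visible to model.parameters():

```python
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self, n):
        super().__init__()
        # Each nn.Parameter in the list is registered with the module,
        # so .to(device) moves all of them together.
        self.params = nn.ParameterList(
            [nn.Parameter(torch.randn(1)) for _ in range(n)])

    def forward(self, x):
        for p in self.params:
            x = x + p
        return x

model = MyModule(5)
print(len(list(model.parameters())))  # 5 registered parameters
print(model.params[0].device)         # cpu
```

Calling model.to('cuda') would move all five parameters at once, since they are tracked like any other registered parameter.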
dragen (Jackie) | May 25, 2018, 1:45am
@ptrblck thanks.
How can I return an automatically converted tensor from:

class MyModule(nn.Module):
    def get(self):
        var1 = nn.Parameter(torch.randn(1))
        var2 = torch.randn(1)
        return var1, var2
when MyModule runs on the CPU or on the GPU?
ptrblck:
You could try to use the device property of another parameter:
class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.fc1 = nn.Linear(1, 1)

    def forward(self, x):
        return x

    def get(self):
        # Reuse the device of an already-registered parameter.
        device = self.fc1.weight.device
        var1 = nn.Parameter(torch.randn(1, device=device))
        return var1

model = MyModule()
v1 = model.get()
print(v1.device)
> cpu

model = model.to('cuda')
v2 = model.get()
print(v2.device)
> cuda:0
If you don't have any Parameters registered in your Module, you could alternatively pass the device as an argument.
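As a sketch of that alternative, assuming a module with no registered parameters (the get signature and default are illustrative):

```python
import torch
import torch.nn as nn

class MyModule(nn.Module):
    """Module without registered parameters; the caller supplies the device."""
    def forward(self, x):
        return x

    def get(self, device='cpu'):
        # Create the tensor directly on the requested device.
        return nn.Parameter(torch.randn(1, device=device))

model = MyModule()
v = model.get()       # defaults to CPU
print(v.device)       # cpu
if torch.cuda.is_available():
    v_gpu = model.get('cuda')
    print(v_gpu.device)
```

The caller is then responsible for passing the same device the rest of the model lives on.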
dragen (Jackie) | May 29, 2018, 4:19am
Oh, it's not elegant, but it solves my problem. Thanks.