nn.ParameterList for non-requires_grad tensors

We can add requires_grad parameters to a module like this:

import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.params = nn.ParameterList([nn.Parameter(torch.randn(10, 10)) for i in range(10)])

BUT for tensors that don't require grad, such as:

class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.params = [torch.randn(10, 10) for i in range(10)]
        for i, tensor in enumerate(self.params):
            self.register_buffer("tensors%d" % i, tensor)


although we can call self.register_buffer(name, tensor) for every tensor in self.params, the tensors in self.params are still not automatically moved between CPU and CUDA when calling MyModule().to(device).
Is there any mechanism like nn.ParameterList that moves these tensors automatically inside the module?

Your code seems to work fine:

class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.params = [torch.randn(10, 10) for i in range(10)]
        for i, tensor in enumerate(self.params):
            self.register_buffer("tensors%d" % i, tensor)

model = MyModule()
print(model.tensors0.type())
> torch.FloatTensor

model = model.to('cuda')
print(model.tensors0.type())
> torch.cuda.FloatTensor
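
If you just need to reach all registered tensors without hard-coding the attribute names, nn.Module also provides the buffers() and named_buffers() iterators, which always yield the tensors the module currently holds, so they stay correct after a device move:

model = MyModule().to('cuda')

# named_buffers() yields the current buffer tensors, so it reflects
# the .to('cuda') move above
for name, buf in model.named_buffers():
    print(name, buf.device)  # e.g. tensors0 cuda:0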

Is this not working in your case?

Hi, it works when I access the tensors by name.
But I want to access them through self.params. In that case, self.params just keeps references to the CPU tensors, NOT the CUDA ones.
In contrast, with self.params = nn.ParameterList(...), we can access all the CUDA tensors through self.params.
I would like an API that does for buffer tensors what nn.ParameterList does for parameters. Do you understand my intention?
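
There is no built-in buffer counterpart to nn.ParameterList, but you can approximate one with a small custom container. The BufferList below is a hypothetical sketch, not a PyTorch API: it registers each tensor as a buffer internally (so .to(device) moves them) and exposes list-style indexing and iteration:

import torch
import torch.nn as nn

class BufferList(nn.Module):
    # Hypothetical helper, not a PyTorch API: stores tensors as
    # registered buffers so .to(device) moves them, while allowing
    # list-style access like nn.ParameterList.
    def __init__(self, tensors):
        super(BufferList, self).__init__()
        for i, tensor in enumerate(tensors):
            self.register_buffer(str(i), tensor)

    def __len__(self):
        return len(self._buffers)

    def __getitem__(self, idx):
        # negative indices are not handled in this sketch
        return self._buffers[str(idx)]

    def __iter__(self):
        return iter(self._buffers.values())

class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.params = BufferList([torch.randn(10, 10) for i in range(10)])

model = MyModule().to('cuda')
print(model.params[0].type())
> torch.cuda.FloatTensor

Because BufferList is itself an nn.Module, assigning it to self.params registers it as a submodule, so .to(device) recurses into it and indexing always returns the tensor on the current device.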
