TypeError: cannot assign 'torch.cuda.FloatTensor' as parameter 'weight' (torch.nn.Parameter or None expected)


What is wrong with nn.Parameter?

I get the following error:

TypeError: cannot assign ‘torch.cuda.FloatTensor’ as parameter ‘weight’ (torch.nn.Parameter or None expected)

when I do the following:

self.weight = torch.nn.Parameter(torch.FloatTensor(7, 32, 32), requires_grad=True).cuda()

def forward(self, x):

Thank you



This is because the .cuda() call creates a new Tensor, so what you assign is no longer an nn.Parameter. You should specify the device inside the tensor creation instead.
Note that requires_grad=True is already set by nn.Parameter, so there is no need to repeat it.

self.weight = torch.nn.Parameter(torch.FloatTensor(7, 32, 32, device="cuda"))

Thank you @albanD, however I get the following error:

RuntimeError: legacy constructor for device type: cpu was passed device type: cuda, but device type must be: cpu

Oh right, my bad. The legacy API with capital letters does not support this.
The new API for this is torch.empty([7, 32, 32], dtype=torch.float, device="cuda") (keep in mind that the returned Tensor, like with the old API, contains uninitialized memory).
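For comparison, a minimal sketch of the new-style factory (shown on CPU so it runs anywhere; pass device="cuda" on a GPU machine):

```python
import torch

# the legacy constructor torch.FloatTensor(...) does not accept a device kwarg;
# the new-style factory torch.empty() takes dtype and device explicitly
t = torch.empty([7, 32, 32], dtype=torch.float)

print(t.shape, t.dtype)  # torch.Size([7, 32, 32]) torch.float32
```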


Hi @albanD

The following works :

self.weight = torch.nn.Parameter(torch.FloatTensor(7, 32, 32)).to('cuda')

Thank you

Good to hear it’s working, although I would think you’ll get an error at some point in your code, as the .to('cuda') call creates a non-leaf tensor.
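To see this concretely, here is a small sketch (a dtype change stands in for .to('cuda') so it also runs without a GPU; the behaviour is the same):

```python
import torch
import torch.nn as nn

p = nn.Parameter(torch.empty(3))
# .to()/.cuda() return a new tensor derived from p, not p itself
q = p.to(torch.float64)

print(isinstance(q, nn.Parameter))  # False: the result is a plain Tensor
print(p.is_leaf, q.is_leaf)         # True False: q is the output of an op on p
```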

Yes, as @ptrblck said, you might want to double-check that this is still detected as a parameter (it won’t be).
You want to use self.weight = torch.nn.Parameter(torch.empty([7, 32, 32], dtype=torch.float, device="cuda")) to make sure that what is saved is an nn.Parameter and not what is returned by the .to() operation (which will be a plain Tensor).
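Putting that inside a module, a hypothetical minimal sketch (the device fallback is only there so it also runs on a CPU-only machine):

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        # assign the nn.Parameter itself -- no trailing .to()/.cuda()
        self.weight = nn.Parameter(
            torch.empty(7, 32, 32, dtype=torch.float, device=device)
        )

m = MyModule()
print("weight" in dict(m.named_parameters()))  # True: registered as a parameter
```

If you need to move an existing model, call m.to("cuda") on the whole module instead; that moves the data while keeping each attribute an nn.Parameter.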

This will create an empty tensor. How can I create a memory-allocated CUDA tensor? I want to initialise it in the model and use it as a W parameter for training purposes. Is using a Variable a good option?

torch.empty allocates the memory but leaves it uninitialized, so the tensor isn’t really “empty” in the sense that no memory is used.

No, Variables are deprecated since PyTorch 0.4; an nn.Parameter (or a plain tensor with requires_grad=True) is the right tool now.
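A short sketch of initializing such a parameter without any Variable (std=0.02 is an arbitrary choice; any scheme from nn.init works here):

```python
import torch
import torch.nn as nn

# fill the uninitialized storage in place before training
w = nn.Parameter(torch.empty(7, 32, 32))
nn.init.normal_(w, mean=0.0, std=0.02)

print(w.requires_grad)  # True: gradients flow without any Variable wrapper
```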
