I wish to apply an nn.Linear() layer to only a part of a Variable. How do I go about it?
As an example, consider a Variable feat of type torch.cuda.FloatTensor with size 256x20x51, stored in a batch-first representation. I wish to apply the linear layer to just one of the 20 (1x51) vectors for each batch index.
So for example if the feat variable is something like:
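The out-of-place version of this is straightforward, since nn.Linear acts on the last dimension of its input. A minimal sketch with the shapes from the question (the names fc, feat, and k are hypothetical, and this uses current PyTorch, where Variable has been merged into Tensor):

```python
import torch
import torch.nn as nn

# Hypothetical setup matching the question: batch 256, 20 vectors of length 51.
fc = nn.Linear(51, 51)
feat = torch.randn(256, 20, 51)

k = 0  # which of the 20 vectors to transform
out = fc(feat[:, k, :])  # nn.Linear maps the last dim; out has shape (256, 51)
```

This produces a new tensor rather than writing the result back into feat, which is exactly the part the rest of this thread is about.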
Yes, I tried using that, but it results in the following error:
input_x = fc(input_x)
*** RuntimeError: in-place operations can be only used on variables that don't share storage with any other variables, but detected that there are 20 objects sharing it
In general, consider the following code:
(Pdb) feat = torch.autograd.Variable(torch.Tensor([[1, 2, 3], [4, 5, 6]]))
1 2 3
4 5 6
[torch.FloatTensor of size 2x3]
(Pdb) feat[0][0] = 9
*** RuntimeError: in-place operations can be only used on variables that don't share storage with any other variables, but detected that there are 2 objects sharing it
Any idea about how to get around it?
Also, are you sure that using .data always messes up autograd?
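One way around the in-place restriction is to never assign into a slice at all: transform the slice out-of-place and rebuild the tensor with torch.cat. A sketch under the shapes from the question (fc, feat, k, feat2 are hypothetical names, modern PyTorch):

```python
import torch
import torch.nn as nn

fc = nn.Linear(51, 51)
feat = torch.randn(256, 20, 51)
k = 3  # index of the vector to transform

# Transform the k-th slice, then reassemble a fresh tensor with torch.cat
# instead of writing into feat in place.
new_k = fc(feat[:, k, :]).unsqueeze(1)                       # (256, 1, 51)
feat2 = torch.cat([feat[:, :k, :], new_k, feat[:, k+1:, :]], dim=1)
# feat2 has shape (256, 20, 51); feat itself is untouched, and the whole
# construction stays inside autograd, with no .data access needed.
```

Since no storage is modified in place, autograd can track every operation here, which sidesteps the question about .data entirely.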
You are doing exactly what I said doesn’t work…
This is bound to fail. It first does a slicing, producing feat[0], and then a __setitem__. But after the slicing, you already have two objects sharing the same storage (feat and feat[0]), both in scope, so it doesn’t work. feat[0, 0] = 9 works as expected.
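The distinction is between a single __setitem__ with a multi-dimensional index and a chained form that first materializes a view. In current PyTorch (0.4+, where Variable is merged into Tensor) both forms happen to work on an ordinary tensor; the restriction in this thread applied to the old autograd Variable. A small illustration:

```python
import torch

feat = torch.tensor([[1., 2., 3.], [4., 5., 6.]])

# Single __setitem__ call on feat itself: one index tuple, no
# intermediate view is held by the expression.
feat[0, 0] = 9.

# Chained form: feat[0] first returns a view sharing feat's storage,
# and the assignment then goes through that view. On old Variables this
# sharing is what triggered the RuntimeError above.
feat[0][0] = 7.
```

After both assignments, feat[0, 0] holds 7, since the view writes into the same underlying storage.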