wpron
(wiktor)
September 19, 2017, 2:48pm
1
Hi all,
I use PyTorch version 0.2.0_4 and get an IndexError which I cannot explain:
print("X:", x.size())
print("TYPE:", type(self.neuron_map[k]))
gives
X: torch.Size([25, 8])
TYPE: <class 'list'>
Now
x[:, self.neuron_map[k]]
results in
IndexError: When performing advanced indexing the indexing objects must be LongTensors or convertible to LongTensors
I cannot understand why this happens and I have no idea how to fix this. Any help appreciated.
smth
September 20, 2017, 4:42am
2
can you do:
print(self.neuron_map[k])
? I'm curious about its contents.
Also try:
x[:, torch.LongTensor(self.neuron_map[k])]
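For reference, here is a minimal sketch of the suggested pattern: advanced indexing along the column dimension with a LongTensor built from plain Python ints (shapes match the ones printed in the question):

```python
import torch

x = torch.randn(25, 8)           # same shape as X in the question
inds = torch.LongTensor([0, 1])  # plain Python ints convert fine
cols = x[:, inds]                # select columns 0 and 1
print(cols.shape)                # torch.Size([25, 2])
```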
wpron
(wiktor)
September 20, 2017, 8:01am
3
print("INDS:", self.neuron_map[k])
results in:
INDS: [0, 1]
Then,
inds = torch.LongTensor(self.neuron_map[k])
runs into
RuntimeError: tried to construct a tensor from a int sequence, but found an item of type numpy.int64 at index (0)
I actually found a workaround:
inds = np.array(self.neuron_map[k], dtype=np.int64)
inds = torch.LongTensor(inds)
nn_list.append(self.linears[k](x[:, inds]))
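A self-contained version of that workaround, assuming the list holds numpy.int64 values as the error message suggests (the list contents here are illustrative). Routing through a numpy int64 array makes the dtype explicit before constructing the tensor:

```python
import numpy as np
import torch

neuron_map_k = [np.int64(0), np.int64(1)]  # list of numpy.int64, as in the error

# make the dtype explicit, then build the index tensor from the array
inds = torch.from_numpy(np.asarray(neuron_map_k, dtype=np.int64))

x = torch.randn(25, 8)
print(x[:, inds].shape)  # torch.Size([25, 2])
```

torch.from_numpy is interchangeable here with torch.LongTensor(np.array(...)); both produce a valid index tensor.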
I actually have an additional question. The reason I am splitting the tensor is to apply separate linear units (as in the last code line above). For the result, I use:
x_out = torch.cat(nn_list, 1)
How efficient is this, compared to manually implementing an autograd.Function (forward and backward)?
smth
September 20, 2017, 8:05pm
4
it should be pretty efficient if x[:, inds] is large enough. the matrix multiply will probably dominate the cost.
Writing a batched matrix multiply by hand is not easy to do efficiently.
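For completeness, a hedged sketch of the whole split/linear/cat pattern being discussed. The group layout (neuron_map) and output width are made-up placeholders, not from the thread:

```python
import torch
import torch.nn as nn

# hypothetical column groups; the real neuron_map comes from the user's model
neuron_map = {0: [0, 1], 1: [2, 3, 4], 2: [5, 6, 7]}
linears = nn.ModuleList([nn.Linear(len(cols), 4) for cols in neuron_map.values()])

x = torch.randn(25, 8)
nn_list = [lin(x[:, torch.LongTensor(cols)])
           for lin, cols in zip(linears, neuron_map.values())]
x_out = torch.cat(nn_list, 1)  # concatenate the per-group outputs along dim 1
print(x_out.shape)             # torch.Size([25, 12])
```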