PyTorch version: 0.3
Conv2d_mask[group_index_in*g_size_out:(group_index_out + 1)*g_size_out, group_index_in*g_size_in:(group_index_in + 1)*g_size_in, :, :] = 1.
ValueError: result of slicing is an empty tensor
May I know the reason for the error above? I have checked the following post but did not understand it.
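For what it's worth, that ValueError usually means the computed slice selects zero elements: it happens whenever a slice's start index is greater than or equal to its stop index (for example, when a group size is 0, or when the start is computed from a different index than the stop), and PyTorch 0.3 refuses to assign into an empty slice. A minimal pure-Python sketch of the arithmetic, reusing the names from the snippet above with made-up values:

```python
# Hypothetical values: if group_index_in > group_index_out, the first
# slice's start overshoots its stop, so the selection is empty.
g_size_out = 4
group_index_in, group_index_out = 3, 1

start = group_index_in * g_size_out        # 12
stop = (group_index_out + 1) * g_size_out  # 8
channels = list(range(32))                 # stand-in for the channel axis
print(channels[start:stop])                # -> [] : an empty slice
```

Printing the computed `start` and `stop` for each loop iteration is a quick way to find the iteration where the slice collapses.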
Hi,
I am trying to create variable-length sequences for an LSTM.
Can anyone help me understand why this code works:
lens = list(range(170,1,-1))
xs = Variable(torch.randn(169, 200, 1))
packed = torch.nn.utils.rnn.pack_padded_sequence(xs, lens, batch_first=True)
and this code does not:
lens = [294, 289, 288, 282, 273, 270, 261, 260, 240, 235, 231, 228, 228, 227, 226, 226, 199, 195, 194, 192, 190, 189, 177, 176, 165, 165, 161, 156, 153, 149, 149, 142, 142, 137, 136, 136, 135, 134, 134, 132, 1…
Could you post an executable code snippet?
Also, I would suggest upgrading PyTorch to 0.4.0 and seeing if this error still occurs.
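One likely culprit: pack_padded_sequence expects the number of lengths to equal the batch size, every length to fit inside the padded time dimension, and (in PyTorch 0.3/0.4) the lengths to be sorted in decreasing order. The working snippet satisfies all three (169 lengths, all at most 170, for a batch of 169 sequences padded to 200 steps); if the failing snippet kept `xs` the same shape, a length such as 294 would overflow the 200-step time axis. A small pure-Python sketch of those preconditions (the helper name is made up for illustration):

```python
def check_pack_padded_args(batch_size, max_time, lens):
    """Hypothetical helper: validate what pack_padded_sequence expects
    from `lens` for a (batch_size, max_time, *) batch_first input."""
    if len(lens) != batch_size:
        return "number of lengths must equal the batch size"
    if max(lens) > max_time:
        return "a length exceeds the padded time dimension"
    if any(a < b for a, b in zip(lens, lens[1:])):
        return "lengths must be sorted in decreasing order"
    return "ok"

# The working snippet: 169 decreasing lengths, all <= 200.
print(check_pack_padded_args(169, 200, list(range(170, 1, -1))))  # -> ok
# A length of 294 cannot fit a time axis padded to 200 steps.
print(check_pack_padded_args(2, 200, [294, 289]))
```

Checking `len(lens)`, `max(lens)`, and the sort order against `xs.shape` before calling pack_padded_sequence should pinpoint which precondition the second snippet violates.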
The issue has been resolved.
How do I initialize the convolutional weights for different instances of the same model? I don't want to use copy.
Just apply the weight_init function to each instance:
model1 = Net()
model1.apply(weight_init)
model2 = Net()
model2.apply(weight_init)
Or would you like to use the same initialized weights? If so, you could copy the state_dict.
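To make the two options concrete without depending on a particular Net definition, here is a pure-Python mimic of the pattern: `ToyModule`, its `apply`, and its `state_dict` are toy stand-ins (not the real nn.Module API, whose `apply` recurses into submodules in the same way), and `weight_init` is an illustrative init function:

```python
import random

class ToyModule:
    """Minimal stand-in for nn.Module, just to illustrate the pattern."""
    def __init__(self, children=()):
        self.children_ = list(children)
        self.weight = [0.0] * 4

    def apply(self, fn):
        # Like nn.Module.apply: recurse into submodules, then call fn on self.
        for child in self.children_:
            child.apply(fn)
        fn(self)
        return self

    def state_dict(self):
        # Toy flattened view of the parameters.
        return {"weight": list(self.weight)}

    def load_state_dict(self, sd):
        self.weight = list(sd["weight"])

def weight_init(m):
    # Custom init: re-draw every weight, as an nn.init-based function would.
    m.weight = [random.gauss(0.0, 0.02) for _ in m.weight]

random.seed(0)
model1 = ToyModule().apply(weight_init)
model2 = ToyModule().apply(weight_init)
print(model1.weight != model2.weight)  # independent draws: True

# Same weights instead: copy model1's state_dict into model2.
model2.load_state_dict(model1.state_dict())
print(model1.weight == model2.weight)  # True
```

Calling `apply(weight_init)` separately on each instance gives independent initializations; copying the state_dict afterwards makes them identical, which mirrors the two options above.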