I have a problem with the expand function, can someone help me please?

I still can't figure out how to make this work :cry: it's depressing:

  • When I use the expand function, the forward pass works perfectly; however, the program crashes in the backward pass:

multiplied_mat = CNN_Result.clone() # Clone for each GRU iteration
expanded_alpha_mat = alpha_mat.expand(current_tensor_shape)
multiplied_mat = multiplied_mat * expanded_alpha_mat

alpha_mat is {batchsize} x 16 x 32
multiplied_mat is {batchsize} x 16 x 32 x 128 (this is current_tensor_shape)
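
In case it helps, here is a minimal self-contained sketch of what I am trying to do. The batch size B, the random tensors, and the unsqueeze(3) are placeholders I added just so the snippet stands on its own (expand can only add new dimensions at the front of the shape, so the standalone version needs the extra trailing size-1 dimension before expanding); in my real code I call expand directly on the 3-D alpha_mat with current_tensor_shape.

import torch
from torch.autograd import Variable

B = 1                                                     # placeholder batch size
CNN_Result = Variable(torch.randn(B, 16, 32, 128), requires_grad=True)
alpha_mat = Variable(torch.rand(B, 16, 32), requires_grad=True)  # stand-in for my attention weights

multiplied_mat = CNN_Result.clone()                       # clone for each GRU iteration
expanded_alpha_mat = alpha_mat.unsqueeze(3).expand(B, 16, 32, 128)
multiplied_mat = multiplied_mat * expanded_alpha_mat

multiplied_mat.sum().backward()                           # my real code crashes at loss.backward()
print(CNN_Result.grad.size())                             # (B, 16, 32, 128)
print(alpha_mat.grad.size())                              # (B, 16, 32)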

And when I run the code, the program crashes with this error:

Traceback (most recent call last):
  File "Main.py", line 40, in <module>
    testnet.train(epoch + 1)
  File "E:\Workbench\DatasetReader\new\LVTN_MER-master\Network\CNNNetwork.py", line 172, in train
    loss.backward()
  File "E:\Anaconda\lib\site-packages\torch\autograd\variable.py", line 144, in backward
    self._execution_engine.run_backward((self,), (gradient,), retain_variables)
  File "E:\Anaconda\lib\site-packages\torch\autograd\function.py", line 90, in apply
    return self._forward_cls.backward(self, *args)
  File "E:\Anaconda\lib\site-packages\torch\autograd\_functions\tensor.py", line 95, in backward
    return grad_output.contiguous().view(ctx.old_size), None
  File "E:\Anaconda\lib\site-packages\torch\autograd\variable.py", line 468, in view
    return View.apply(self, sizes)
  File "E:\Anaconda\lib\site-packages\torch\autograd\_functions\tensor.py", line 89, in forward
    result = i.view(*sizes)
RuntimeError: size '[1 x 512]' is invalid for input of with 65536 elements at D:\Downloads\pytorch-master-1\torch\lib\TH\THStorage.c:59

The [1 x 512] size in the last line of the traceback comes from this line in my code:

alpha_mat = self.alpha_softmax(alpha_mat.view(current_tensor_shape[0], 512)).view(current_tensor_shape[0], 16, 32)
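
So alpha_mat is flattened to [batchsize, 512], pushed through the softmax, and reshaped back to [batchsize, 16, 32] (with batchsize = 1, 16 * 32 = 512 gives exactly the [1 x 512] in the error). A stripped-down version of just that step looks like this; I'm assuming self.alpha_softmax is something like an nn.Softmax over the flattened 512 positions, and B and the random input are placeholders:

import torch
import torch.nn as nn
from torch.autograd import Variable

B = 1                                            # placeholder batch size
alpha_softmax = nn.Softmax(dim=1)                # my guess at what self.alpha_softmax is
alpha_raw = Variable(torch.randn(B, 16, 32), requires_grad=True)

flat = alpha_raw.view(B, 512)                    # the view whose saved size is [1 x 512]
alpha_mat = alpha_softmax(flat).view(B, 16, 32)  # back to [batchsize, 16, 32]
print(alpha_mat.size())                          # (1, 16, 32)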

Thank you in advance :cry: