The tensor can't be autograded with unsqueeze(2)

```python
rho = base[0][:, 1:, :]
mu = base[1][:, 1:, :]
temp_time = diff_time.unsqueeze(2) * \
    torch.rand([*diff_time.size(), num_samples], device=data.device)
temp_time /= (time[:, :-1] + 1).unsqueeze(2)
all_base = torch.zeros(rho.size())
for i in range(rho.size()[2]):
    slice_of_mu = mu.clone()[:, :, i]
    slice_of_mu = slice_of_mu.unsqueeze(2)
    slice_of_rho = rho.clone()[:, :, i]
    slice_of_rho = slice_of_rho.unsqueeze(2)
    all_base[:, :, i] = all_base[:, :, i] + (slice_of_mu * slice_of_rho * temp_time ^ (slice_of_rho - 1))
```

The sizes of rho and mu are [16, 99, 5], and the size of temp_time is [16, 99, 100]. I want to compute the value of all_base as shown in the for loop. I sliced and unsqueezed these tensors to keep their dimensions consistent, but the last line raises an error:

bitwise_xor(): functions with out=… arguments don't support automatic differentiation, but one of the arguments requires grad.

File "…", line 47, in compute_integral_unbiased
all_base[:,:,i] = all_base[:,:,i] + ( slice_of_mu * slice_of_rho * temp_time ^ ( slice_of_rho - 1 ) )

I don't know how to solve this problem. Can you give me some help? Thank you very much.

I used a ^ rather than **, what a naive mistake…
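For anyone landing here later: on tensors, `^` dispatches to `torch.bitwise_xor` (integer/bool only, no gradient), while element-wise power is `**` (`torch.pow`), which autograd does support. A minimal sketch with made-up small shapes (not the original [16, 99, …] sizes) showing the corrected expression; broadcasting over the trailing dimension also removes the need for the Python loop:

```python
import torch

# Hypothetical small shapes: batch 2, seq 3, 4 samples.
# The kept singleton dim (from unsqueeze) lets broadcasting do the work.
rho = torch.rand(2, 3, 1, requires_grad=True)
mu = torch.rand(2, 3, 1, requires_grad=True)
temp_time = torch.rand(2, 3, 4) + 0.1  # strictly positive, so pow's gradient is finite

# ** (torch.pow) instead of ^ (torch.bitwise_xor); fully differentiable.
all_base = mu * rho * temp_time ** (rho - 1)   # broadcasts to (2, 3, 4)
all_base.sum().backward()                      # gradients flow to rho and mu
print(rho.grad.shape)  # torch.Size([2, 3, 1])
```

The same one-liner applies to the original tensors once their shapes are broadcast-compatible, avoiding both the loop and the in-place writes into `all_base`.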
