I’m seeing a grad_fn on the result of the following code:

axt = torch.matmul(weight_tensor, input_tensor.unsqueeze(0).unsqueeze(2))

As far as I am aware this should not cause a problem, because the unsqueeze operations only create temporary views. Is that correct?
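A minimal sketch to illustrate the situation (the tensor names and shapes below are assumptions, since the original shapes aren't shown): unsqueeze is itself an autograd-tracked view operation, so the intermediate results carry their own grad_fn, but gradients still flow back to the original input_tensor.

```python
import torch

# Assumed shapes: weight_tensor (3, 4), input_tensor (4,)
weight_tensor = torch.randn(3, 4, requires_grad=True)
input_tensor = torch.randn(4, requires_grad=True)

# input_tensor: (4,) -> (1, 4, 1); matmul broadcasts weight_tensor to (1, 3, 4),
# so axt has shape (1, 3, 1)
axt = torch.matmul(weight_tensor, input_tensor.unsqueeze(0).unsqueeze(2))

# axt carries a grad_fn because every op in the chain (including unsqueeze)
# is recorded in the autograd graph
print(axt.grad_fn)

# Gradients still reach the original tensors through the unsqueeze views
axt.sum().backward()
print(input_tensor.grad.shape)  # torch.Size([4])
print(weight_tensor.grad.shape)  # torch.Size([3, 4])
```

The grad_fn is expected and harmless here: it just records the backward function for the last op, and the temporary views created by unsqueeze do not detach anything from the graph.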