I have a tensor x and a parameter tensor R. I want to store the matrix multiplication of x and R into z and also make sure that z does not require any gradients. How do I do that? Please help
.....
z = x @ self.R
.....
Wrapping the operation into a torch.no_grad() context would work:
import torch
import torch.nn as nn

R = nn.Parameter(torch.randn(1, 1))
x = torch.randn(1, 1)

# ops executed inside no_grad() are not recorded by autograd
with torch.no_grad():
    z = x @ R

print(z.grad_fn)
> None
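
Alternatively, you could compute the matmul normally and call .detach() on the result, which returns a new tensor cut from the computation graph; a minimal sketch:

```python
import torch
import torch.nn as nn

R = nn.Parameter(torch.randn(1, 1))
x = torch.randn(1, 1)

# detach() returns a tensor sharing the same data but with no grad history
z = (x @ R).detach()

print(z.requires_grad)  # False
print(z.grad_fn)        # None
```

The difference is that with no_grad() the forward pass is never recorded, while with detach() the operation runs normally and only the returned tensor is disconnected from the graph.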