Construct Tensor with Gradient from Tensors with Gradient

I am trying to construct a tensor from other tensors (which require gradients). The new tensor should also be differentiable.

So what I am trying to do is essentially:

import torch

x = torch.tensor(1., requires_grad=True)
y = torch.tensor(2., requires_grad=True)
z = torch.tensor(3., requires_grad=True)
a = torch.tensor([x*y, 1/(x*y), x/z])

print(a)  # output: tensor([2.0000, 0.5000, 0.3333]), no grad_fn

So I can get gradients for x, y, and z separately, but not for “a”, even though “a” is constructed from x, y, and z.
I know about index_put_, which seems to preserve differentiability (at least in this example it shows a grad_fn):

target = torch.zeros([5,3])
indices = torch.LongTensor([[0,1], [1, 2], [2, 2], [3, 0], [4, 1]])
value = torch.ones(indices.shape[0], requires_grad=True)
target.index_put_(tuple(indices.t()), value)

print(target)
'''
output:
tensor([[0., 1., 0.],
        [0., 0., 1.],
        [0., 0., 1.],
        [1., 0., 0.],
        [0., 1., 0.]], grad_fn=<IndexPutBackward>)
'''

However, for my actual use case, and not just a toy example like the one above, it would be tedious to construct the tensor with index_put_ and matrix operations.

Is there a way to do this similar to my first example? Is there a function I just don’t know about? Or do I have to do it the long way?

Recreating a tensor detaches it from the computation graph, so you could use torch.stack and/or torch.cat to create a.
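
Something like this, for example (a rough sketch, reusing the x, y, z from your first snippet):

import torch

x = torch.tensor(1., requires_grad=True)
y = torch.tensor(2., requires_grad=True)
z = torch.tensor(3., requires_grad=True)

# stack the differentiable intermediate results instead of re-creating a tensor
a = torch.stack([x*y, 1/(x*y), x/z])

print(a)       # a now has a grad_fn (StackBackward), so it stays in the graph

a.sum().backward()
print(x.grad)  # gradients flow back to the leaf tensors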

Thanks for your reply. This seems simpler than what I am doing right now.

I was experimenting a little and arrived at a solution where I create a zero tensor and set the entries via indexing, and that also seemed to work.

a = torch.zeros(1, 3)
a[0, 0] = x*y
a[0, 1] = 1/(x*y)
a[0, 2] = x/z
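print(a)  # the in-place assignments give a a grad_fn (CopySlices), so it stays connected to the graph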