How to repeat a tensor and get an nn.Parameter while keeping the gradient

I want to repeat a tensor like this:

import torch
import torch.nn as nn

pos = [0.1, 0.2, 0.3]
pos_tens = torch.Tensor(pos)
pos_pm = nn.Parameter(pos_tens, requires_grad=True)
xyz = nn.Parameter(pos_tens.repeat(10, 1), requires_grad=True)

trans = pos_pm.repeat(10, 1)
# type(trans) is <class 'torch.Tensor'>; I want the gradient to flow back to pos_pm, and the result must be an nn.Parameter
trans_n = nn.Parameter(pos_pm.repeat(10, 1))
trans.retain_grad()
new = xyz + trans_n

y = new.sum()
y.backward()

print(xyz.grad) # tensor([[1., 1., 1.],...,[1., 1., 1.]])
print(trans.grad) # None
print(pos_pm.grad) # None, but I want this to be non-None

In short: given a parameter A, I want to build B by repeating A, make B an nn.Parameter, and still have A.grad be non-None after backpropagating through B.
I tried the code above, but it does not work.
What should I do?
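Here is a minimal comparison that illustrates what I am seeing (same pos_pm as above; as far as I understand, wrapping the repeated tensor in a new nn.Parameter creates a fresh leaf tensor, so the graph back to pos_pm is cut):

import torch
import torch.nn as nn

pos_pm = nn.Parameter(torch.tensor([0.1, 0.2, 0.3]))

# Case 1: use the repeated tensor directly -- it keeps a grad_fn,
# so the gradient flows back to pos_pm.
trans = pos_pm.repeat(10, 1)
trans.sum().backward()
print(pos_pm.grad)        # tensor([10., 10., 10.])

pos_pm.grad = None

# Case 2: wrap the repeated tensor in a new nn.Parameter -- this is a
# new leaf tensor, detached from pos_pm, so pos_pm.grad stays None.
trans_n = nn.Parameter(pos_pm.repeat(10, 1))
trans_n.sum().backward()
print(pos_pm.grad)        # None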

Thank you very much for your patient answer, which is very helpful for beginners like me.

I have tried the advice you provided, but there are still some things I don't understand. Since this is my first time asking a question here, my original description was somewhat unclear.

  1. I have tried trans = pos_pm.repeat(10, 1), and it already preserves the gradient for pos_pm, so I don't understand why I should use pos_pm.unsqueeze(0).repeat(10, 1) instead (see the shape check after this list).

  2. I made some mistakes in my original statement. What I actually want is for the final result to be an nn.Parameter, which is new in the code above. The snippet was abstracted from my real code, and the mistakes I introduced while abstracting it caused some confusion.
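Here is the shape check I mean (assuming a 1-D position tensor of length 3, as in my first snippet); both calls appear to give the same (10, 3) result:

import torch

pos = torch.tensor([0.1, 0.2, 0.3])          # shape (3,)

# repeat() prepends dimensions when given more repeat factors than the
# tensor has dimensions, so the unsqueeze(0) does not change the result here.
print(pos.repeat(10, 1).shape)                # torch.Size([10, 3])
print(pos.unsqueeze(0).repeat(10, 1).shape)   # torch.Size([10, 3])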

Actually, this is how I implemented it:

class Transformation(nn.Module):
    def __init__(self, trans):
        '''
        trans: translation tensor of shape (1, 3)
        '''
        super(Transformation, self).__init__()
        self.trans = nn.Parameter(trans, requires_grad=True)

    def apply_transform(self, xyz: nn.Parameter):
        '''
        input  xyz (n x 3): nn.Parameter
        output     (n x 3): nn.Parameter
        '''
        xyz_n = xyz.shape[0]                       # .data is not needed here
        trans = self.trans.repeat(xyz_n, 1).cuda()
        xyz_new = torch.nn.Parameter(xyz + trans)
        print(f"types {type(xyz_new)}")
        return xyz_new
    

But the xyz_new I get back behaves like a plain tensor and cannot be assigned to an nn.Parameter attribute like this:

postrans = Transformation(torch.tensor([0.1, 0.1, 0.1]))
self.mean = postrans.apply_transform(self.mean)

This gives: TypeError: cannot assign 'torch.cuda.FloatTensor' as parameter 'mean' (torch.nn.Parameter or None expected).
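In case it is relevant, here is a minimal, self-contained reproduction of the same error as I understand it (the Model class and its attribute/method names are made up for illustration): once an attribute has been registered as an nn.Parameter on an nn.Module, assigning a plain tensor to it raises this TypeError:

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        # 'mean' is registered as a parameter of the module
        self.mean = nn.Parameter(torch.zeros(10, 3))

    def shift(self, delta):
        # self.mean + delta is a plain tensor, and nn.Module.__setattr__
        # refuses to overwrite a registered parameter with a plain tensor:
        # TypeError: cannot assign 'torch.FloatTensor' as parameter 'mean'
        # (torch.nn.Parameter or None expected)
        self.mean = self.mean + delta

m = Model()
m.shift(torch.ones(10, 3))   # raises the TypeError above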