How to extend a tensor


I want to extend a tensor in PyTorch in the following way:

Let C be a 3x4 tensor with requires_grad = True. I want a new C which is a 3x5 tensor, with C = [C, ones(3,1)] (the last column is a ones vector, and the other columns are the old C). Moreover, I need requires_grad = True for the new C.
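Concretely, the operation I have in mind is something like this (a small standalone example, not my actual code):

    import torch

    C = torch.randn(3, 4, requires_grad=True)

    # append a column of ones, so the new C is 3x5
    C_new = torch.cat([C, torch.ones(3, 1)], dim=1)

    print(C_new.shape)          # torch.Size([3, 5])
    print(C_new.requires_grad)  # True, but C_new is no longer a leaf tensor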

I need to use this tensor as a parameter.

In the first step, I optimize the parameters for N epochs.
In the second step, I need to extend the parameters (as I said before; I used torch.nn.functional.pad and the parameters are extended).
In the third step, I should optimize the extended parameters, but the parameters are the same as before and not extended.

Could you tell me what is going wrong here?

Did you try to set requires_grad=True for the extended tensor?
Another possible way is to extend C.data instead of C directly.
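Roughly like this, on a standalone tensor (an untested sketch, just to illustrate the two options):

    import torch
    import torch.nn.functional as F

    C = torch.randn(3, 4, requires_grad=True)

    # option 1: pad and keep the result in the autograd graph
    C_ext = F.pad(C, (0, 1), "constant", 1.0)
    print(C_ext.requires_grad)  # True

    # option 2: extend the underlying storage via .data,
    # so C itself keeps requires_grad=True but gets the new 3x5 shape
    C.data = F.pad(C.data, (0, 1), "constant", 1.0)
    print(C.shape, C.requires_grad)  # torch.Size([3, 5]) True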

Thanks for your reply.
I used:

param = torch.nn.functional.pad(param, (0,1,0,0), "constant", 1.0).detach().requires_grad_()

Then I checked the shape and it is correct, but after the optimization the parameters are the same as before and not extended.

Don’t detach the tensor.

I appreciate your time helping me.
Even if I do not detach it, it does not work, and the parameters are the same size as before extending.

Could you please have a look at part of my code:

    # step 1: optimize the original parameters for n_epochs
    for epoch in range(1, n_epochs+1):
        train_loss = calc_forward(train_tu, train_tc, model, loss_fn, is_train=True)
        val_loss = calc_forward(val_tu, val_tc, model, loss_fn, is_train=False)

        optimizer.zero_grad()
        train_loss.backward()
        optimizer.step()

    # step 2: extend the parameters by padding them with ones
    for name, param in model.named_parameters():

        if name == '0.weight':
            param = torch.nn.functional.pad(param, (0,0,0,1), "constant", 1.0).requires_grad_()

        if name == '0.bias':
            param = torch.nn.functional.pad(param, (0,1), "constant", 1.0).requires_grad_()

        if name == '2.weight':
            param = torch.nn.functional.pad(param, (0,1,0,0), "constant", 1.0).requires_grad_()

        if name == '2.bias':
            print(name, param.shape, param)
            param = torch.nn.functional.pad(param, (0,1), "constant", 1.0).requires_grad_()
            print(param.shape, param)

    # step 3: optimize again, expecting the extended parameters to be used
    for epoch in range(1, n_epochs+1):
        train_loss = calc_forward(train_tu, train_tc, model, loss_fn, is_train=True)
        val_loss = calc_forward(val_tu, val_tc, model, loss_fn, is_train=False)

        optimizer.zero_grad()
        train_loss.backward()
        optimizer.step()

    # check the parameter shapes after the second optimization
    for name, param in model.named_parameters():
        print(name, param.shape, param)

My model is really simple:

model = nn.Sequential(nn.Linear(1, 4), nn.Tanh(), nn.Linear(4, 1))

I don’t think your way of modifying the weight is correct. You should try model.layer.weight = [.....] and model.layer.bias = [....]
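For your Sequential model that would look roughly like this (an untested sketch; I index the layers directly, the SGD optimizer and learning rate are just placeholders, and the output bias keeps its size because the output dimension is still 1):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Sequential(nn.Linear(1, 4), nn.Tanh(), nn.Linear(4, 1))

    with torch.no_grad():
        # grow the hidden layer from 4 to 5 units
        model[0].weight = nn.Parameter(F.pad(model[0].weight, (0, 0, 0, 1), "constant", 1.0))
        model[0].bias = nn.Parameter(F.pad(model[0].bias, (0, 1), "constant", 1.0))
        model[2].weight = nn.Parameter(F.pad(model[2].weight, (0, 1, 0, 0), "constant", 1.0))

    # the old optimizer still holds references to the old Parameter objects,
    # so create a new one after the extension
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

Assigning a fresh nn.Parameter replaces the tensor that model.parameters() returns, which is why the optimizer has to be rebuilt after the extension.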

Yes, you’re right.
I should assign the new values to the model’s parameters, not to the local variable param. Thanks a lot for pointing this out.
Now I need to find a way to assign the parameters of the model.
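One option I am considering, based on the earlier .data suggestion, is to keep my named_parameters() loop but assign the padded values to param.data instead of rebinding the local variable (just a sketch, not verified yet; the SGD optimizer and learning rate are placeholders):

    import torch
    import torch.nn.functional as F

    # model is the Sequential defined above
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name == '0.weight':
                param.data = F.pad(param.data, (0, 0, 0, 1), "constant", 1.0)
            elif name == '0.bias':
                param.data = F.pad(param.data, (0, 1), "constant", 1.0)
            elif name == '2.weight':
                param.data = F.pad(param.data, (0, 1, 0, 0), "constant", 1.0)

    # if the optimizer keeps per-parameter state (e.g. momentum),
    # rebuild it so the state matches the new shapes
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)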