Hi PyTorch community,
I have a question regarding tensor optimization in PyTorch. I would like to know if it's possible to have a tensor where some parts are trainable (i.e., requires_grad=True) and other parts are fixed (i.e., requires_grad=False).
Context:
I'm working with a regression model whose weights I want to freeze. I then want to train only part of the input vector to minimize the model's output, while the remaining entries of the input vector stay fixed during training.
Example:
Let’s say I have an input vector of 9 parameters, where the first 3 should be fixed and the remaining 6 should be trainable. Here is the code I have tried:
import torch
import torch.nn as nn
import torch.optim as optim

# Suppose this is your regression model
class RegressionModel(nn.Module):
    def __init__(self):
        super(RegressionModel, self).__init__()
        self.fc1 = nn.Linear(9, 1)  # A model with a single linear layer

    def forward(self, x):
        return self.fc1(x)

# Instantiate the model
model = RegressionModel()

# Freeze the model weights
for param in model.parameters():
    param.requires_grad = False

# Create the input
input_vector = torch.tensor([0.19463597, 1.436385601, 0.028681654,
                             0.5, 0.5, 0.5, 0.5, 0.5, 0.5], requires_grad=True)
input_vector = input_vector.unsqueeze(0)

# Try to mark the first three entries as fixed
fixed_params = [0, 1, 2]
for idx in fixed_params:
    input_vector[0, idx] = torch.tensor(input_vector[0, idx].item(), requires_grad=False)
# This line raises:
# RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
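The only workaround I've come up with so far is to keep the fixed values in a separate tensor and rebuild the full input with torch.cat at every step. Here is a sketch of the idea (the names fixed_part and trainable_part, the optimizer settings, and the step count are just my choices for illustration):

# Workaround sketch: keep fixed and trainable values in separate tensors
fixed_part = torch.tensor([0.19463597, 1.436385601, 0.028681654])  # never updated
trainable_part = torch.full((6,), 0.5, requires_grad=True)         # optimized

optimizer = optim.Adam([trainable_part], lr=0.01)

for step in range(100):
    optimizer.zero_grad()
    # Rebuild the full input each step; torch.cat keeps the autograd
    # graph connected to trainable_part only
    input_vector = torch.cat([fixed_part, trainable_part]).unsqueeze(0)
    loss = model(input_vector).sum()  # minimize the model's output
    loss.backward()
    optimizer.step()

This seems to run, but rebuilding the input tensor on every step feels clumsy, hence my question.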
Question:
Is there a way to directly have parts of a tensor with requires_grad=True and other parts with requires_grad=False?
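Alternatively, would it be acceptable to keep the whole vector trainable and just zero out the gradient of the fixed entries with a hook? Something like this sketch (the mask tensor is my own construction, not an official API for this):

# Alternative idea: mask the gradient so the fixed entries never move
input_vector = torch.tensor([0.19463597, 1.436385601, 0.028681654,
                             0.5, 0.5, 0.5, 0.5, 0.5, 0.5], requires_grad=True)

mask = torch.ones(9)
mask[[0, 1, 2]] = 0.0  # 0 for fixed entries, 1 for trainable ones

# The hook multiplies the incoming gradient by the mask during backward
input_vector.register_hook(lambda grad: grad * mask)

optimizer = optim.Adam([input_vector], lr=0.01)
for step in range(100):
    optimizer.zero_grad()
    loss = model(input_vector.unsqueeze(0)).sum()
    loss.backward()
    optimizer.step()

With a zero gradient, Adam (without weight decay) should leave those entries untouched, but I'm not sure this is the recommended approach.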