Computing Gradients with respect to Input Features given model parameters

I'm new to PyTorch and I've been going through the tutorials, but I feel like I don't properly understand the modules for computing gradients. Specifically, I am trying to work out whether there is a way to compute the gradient of a model with respect to specific inputs. Consider the following minimal working example: a simple neural network with a single hidden layer.

import torch.nn as nn
import torch.nn.functional as F

class NeuralNet(nn.Module):
    def __init__(self, n_features, n_hidden, n_classes, dropout):
        super(NeuralNet, self).__init__()

        self.fc1 = nn.Linear(n_features, n_hidden)
        self.sigmoid = nn.Sigmoid()
        self.fc2 = nn.Linear(n_hidden, n_classes)
        self.dropout = dropout

    def forward(self, x):
        x = self.sigmoid(self.fc1(x))
        x = F.dropout(x, self.dropout, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)

I instantiate the model and an optimizer as follows:

import torch.optim as optim
model = NeuralNet(n_features=args.n_features,
                  n_hidden=args.n_hidden,
                  n_classes=args.n_classes,
                  dropout=args.dropout)
optimizer_w = optim.SGD(model.parameters(), lr=0.001)

During training, I compute the gradients of the negative log-likelihood loss w.r.t. the model parameters by doing the following:

def train(epoch):
    model.train()
    optimizer_w.zero_grad()
    output = model(features)
    loss_train = F.nll_loss(output[idx_train], labels[idx_train])
    acc_train = accuracy(output[idx_train], labels[idx_train])
    loss_train.backward()
    optimizer_w.step()

for epoch in range(args.epochs):
    train(epoch)

For an experiment, I am interested in “updating” some of the input features (say the last d features). Given the model above, and the same loss_train, can I define an optimizer_f with respect to features rather than model.parameters(), and correspondingly call loss_train.backward() and optimizer_f.step()?
Assume that the model parameters have been computed as described above in the standard manner so that all required values for the differentiation are available.

Hi,

Yes, you can do that. You can pass just the tensor(s) you want to update to the optimizer, and the others will stay fixed when you call optimizer_f.step().
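
Here is a minimal sketch of the idea, reusing your names (features, labels and idx_train are assumed to exist exactly as in your training code, and the learning rate is just a placeholder):

import torch.optim as optim

# make the inputs a leaf tensor that requires gradients
features = features.clone().detach().requires_grad_(True)

# this optimizer only sees the input features, not model.parameters()
optimizer_f = optim.SGD([features], lr=0.001)

output = model(features)
loss_train = F.nll_loss(output[idx_train], labels[idx_train])

optimizer_f.zero_grad()
loss_train.backward()   # fills features.grad (the parameters' .grad is filled too)
optimizer_f.step()      # updates only features; the model weights are untouched

Note that backward() still populates the parameters' .grad, so keep calling optimizer_w.zero_grad() before the next weight update, as you already do in train().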

Hi,

Thanks for your response. I need to clarify what I am trying to do.

In the sample code above, during each iteration of train(epoch), after updating the weights of the model as described, I want to use these weights (model.parameters()) to “update” some of the features, i.e. features[:, f:] for some value f < features.shape[1]:

def train(epoch):
    # train model
    # update weights

    # update the last (features.shape[1] - f) features <--

for epoch in range(args.epochs):
    train(epoch)

The last (features.shape[1] - f) features need to be updated by computing the gradient of the error w.r.t. these features (given the recently computed model weights/parameters) and taking a gradient step.
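
Concretely, I have something like the following sketch in mind (assuming features has already been made a leaf tensor with requires_grad=True and optimizer_f = optim.SGD([features], lr=...) as in the example above; f is just a placeholder for the first column I want to update):

def train(epoch):
    model.train()

    # step 1: update the model weights, as before
    optimizer_w.zero_grad()
    output = model(features)
    loss_train = F.nll_loss(output[idx_train], labels[idx_train])
    loss_train.backward()
    optimizer_w.step()

    # step 2: update only features[:, f:] using the freshly updated weights
    optimizer_f.zero_grad()
    output = model(features)
    loss_train = F.nll_loss(output[idx_train], labels[idx_train])
    loss_train.backward()
    features.grad[:, :f] = 0   # freeze the first f columns
    optimizer_f.step()

for epoch in range(args.epochs):
    train(epoch)

With plain SGD the zeroed columns stay fixed, but with momentum or weight decay they could still drift, so an alternative would be to keep the trainable columns as a separate tensor and torch.cat them with the fixed part inside the forward pass. Is this the right way to go about it?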