I’m new to PyTorch and I’ve been going through the tutorials, but I feel like I don’t properly understand the modules for computing gradients. Specifically, I am trying to understand whether there are methods to compute the gradient of a model with respect to specific inputs. Consider the following minimal working example: a simple neural network with a single hidden layer.
```python
import torch.nn as nn
import torch.nn.functional as F

class NeuralNet(nn.Module):
    def __init__(self, n_features, n_hidden, n_classes, dropout):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(n_features, n_hidden)
        self.sigmoid = nn.Sigmoid()
        self.fc2 = nn.Linear(n_hidden, n_classes)
        self.dropout = dropout

    def forward(self, x):
        x = self.sigmoid(self.fc1(x))
        x = F.dropout(x, self.dropout, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)
```
I instantiate the model and an optimizer as follows:
```python
import torch.optim as optim

model = NeuralNet(n_features=args.n_features,
                  n_hidden=args.n_hidden,
                  n_classes=args.n_classes,
                  dropout=args.dropout)
optimizer_w = optim.SGD(model.parameters(), lr=0.001)
```
During training, I compute the gradients of the negative log-likelihood loss w.r.t. the model parameters by doing the following:
```python
import time

def train(epoch):
    t = time.time()
    model.train()
    optimizer_w.zero_grad()  # the optimizer defined above is named optimizer_w
    output = model(features)  # forward() takes a single input tensor
    loss_train = F.nll_loss(output[idx_train], labels[idx_train])
    acc_train = accuracy(output[idx_train], labels[idx_train])  # user-defined helper (not shown)
    loss_train.backward()
    optimizer_w.step()

for epoch in range(args.epochs):
    train(epoch)
```
For an experiment, I am interested in “updating” some of the input features (say the last d features). Given the model above and the same loss_train, can I define an optimizer_f with respect to features rather than model.parameters(), compute loss_train.backward() as before, and perform an optimizer_f.step() that updates only those features?
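To make the question concrete, here is a rough sketch of what I have in mind. The names (features, model, idx_train, labels, d) follow the snippets above, and zeroing the gradient outside the last d columns is just my guess at how the partial update could be expressed:

```python
import torch.nn.functional as F
import torch.optim as optim

# Sketch: treat the input features as a leaf tensor that requires
# gradients, and give it its own optimizer.
features = features.clone().detach().requires_grad_(True)
optimizer_f = optim.SGD([features], lr=0.001)

optimizer_f.zero_grad()
output = model(features)
loss_train = F.nll_loss(output[idx_train], labels[idx_train])
loss_train.backward()

# Zero the gradient everywhere except the last d columns, so that
# optimizer_f.step() leaves the other features untouched (my guess
# at how to restrict the update).
features.grad[:, :-d] = 0
optimizer_f.step()
```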
Assume that the model parameters have already been trained in the standard manner described above, so that all values required for the differentiation are available.
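Alternatively, if an optimizer over the inputs is not the idiomatic approach, would computing the input gradient directly be equivalent? Something like the following, where the use of torch.autograd.grad and the manual update step are my own sketch:

```python
import torch
import torch.nn.functional as F

# Sketch: compute the gradient of the loss w.r.t. the inputs directly,
# without defining an optimizer over them.
features = features.clone().detach().requires_grad_(True)
output = model(features)
loss_train = F.nll_loss(output[idx_train], labels[idx_train])
grad_f, = torch.autograd.grad(loss_train, features)

# Take one manual gradient step on only the last d features.
with torch.no_grad():
    features[:, -d:] -= 0.001 * grad_f[:, -d:]
```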