Let's say I have the following network:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def custom_function(x):
    with torch.no_grad():
        # a couple of functions and operations to do something stupid
        output = ...
    return output

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = custom_function(x)
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
```
Can I design a custom function and use it inside my network as above, disabling backprop just for my custom_function, like I did here?
My concern is that I want to disable backprop for the function and inside the function, but I don't want backprop to be disabled for the rest of the forward pass. In other words, after x = custom_function(x), I want backpropagation to work again for the remaining steps.
Is it even possible?
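To make the concern concrete, here is a small standalone test I could run. The squaring op inside the function is just a hypothetical stand-in for my real operations:

```python
import torch

# Hypothetical stand-in for my real custom function: squares the input
# inside a no_grad block so autograd ignores everything in here.
def custom_function(x):
    with torch.no_grad():
        output = x ** 2
    return output

x = torch.randn(4, requires_grad=True)
print(x.requires_grad)   # True

y = custom_function(x)
print(y.requires_grad)   # False: y was created inside torch.no_grad()

# steps after the custom function, where I still want backprop to work
z = (y * 3.0).sum()
```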