Modifying intermediate values during forward

If I have a series of layers in my implementation of forward, is it possible for me to inspect an intermediate value between those layers, modify it, and send it through the rest of the network? How would I go about doing something like this?

You could use forward hooks as seen in this code snippet:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc1 = nn.Linear(1, 1)
        self.fc2 = nn.Linear(1, 1)
        
    def forward(self, x):
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        return x

def hook(m, input, output):
    # Returning a tensor from a forward hook replaces the module's output,
    # so everything downstream sees the modified value.
    output = output * 100.
    return output

model = MyModel()

# Baseline: forward and backward pass without the hook
x = torch.randn(1, 1)
out = model(x)
print(out)
out.mean().backward()
print(model.fc1.weight.grad)
model.zero_grad()

# Register the hook on fc1; subsequent forward passes will use the scaled output
model.fc1.register_forward_hook(hook)
out = model(x)
print(out)
out.mean().backward()
print(model.fc1.weight.grad)

Hi,

The documentation of forward_hook says "The hook will be called every time after forward() has computed an output". Just wanted to clarify: if I modify the value of a certain layer of a model, that modification will be part of the forward call, and hence the output of the forward call will be different from the output of an unmodified forward call. Right?
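
If the hook returns a value, that value replaces the layer's output, so the rest of forward (and therefore the final output) runs on the modified tensor. A minimal sketch to check this, using a small nn.Sequential and hook/variable names chosen only for illustration:

import torch
import torch.nn as nn

seq = nn.Sequential(nn.Linear(1, 1), nn.Linear(1, 1))
x = torch.randn(1, 1)
print(seq(x))   # output without the hook

def zero_hook(module, input, output):
    # the returned tensor replaces the first layer's output
    return torch.zeros_like(output)

handle = seq[0].register_forward_hook(zero_hook)
print(seq(x))   # second layer now only sees zeros, so this equals its bias
handle.remove()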