How to make changes in a MAC unit of a kernel in a CNN

We want to access a MAC unit in a kernel of each layer of a pretrained network during prediction, in order to make changes to it. We are new to PyTorch and are having trouble implementing this. We also applied post-training quantization of the weights and activations to 8 bits. Our CNN is defined as follows:

import torch.nn as nn
import torch.nn.functional as F

class Simplenet(nn.Module):
    def __init__(self):
        super(Simplenet, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.relu_conv1 = nn.ReLU()
        self.pool1 = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.relu_conv2 = nn.ReLU()
        self.pool2 = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.relu_fc1 = nn.ReLU()
        self.fc2 = nn.Linear(120, 84)
        self.relu_fc2 = nn.ReLU()
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool1(self.relu_conv1(self.conv1(x)))
        x = self.pool2(self.relu_conv2(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = self.relu_fc1(self.fc1(x))
        x = self.relu_fc2(self.fc2(x))
        x = self.fc3(x)
        return F.log_softmax(x, dim=1)
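
For reference, the 16 * 5 * 5 flatten size implies 32x32 RGB inputs (e.g. CIFAR-10): conv1 reduces 32 to 28, pool1 to 14, conv2 to 10, pool2 to 5. A quick smoke test (the batch size of 4 is arbitrary):

import torch

model = Simplenet()
model.eval()
with torch.no_grad():
    out = model(torch.randn(4, 3, 32, 32))
print(out.shape)  # torch.Size([4, 10])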

It depends a bit on what you would like to manipulate.
If you want to change the underlying computation (the multiplication of kernel and patch plus the bias addition) for real workloads, you would most likely have to write your own convolution (the native implementation lives e.g. in Convolution.cpp).
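
To see exactly where those MACs live before touching any C++ code, here is a deliberately naive, pure-Python sketch of a stride-1, unpadded convolution (naive_conv2d is a name made up for this example); each acc update is one multiply-accumulate that you could intercept or perturb to emulate a modified MAC unit:

import torch

def naive_conv2d(x, weight, bias):
    # x: (N, C_in, H, W), weight: (C_out, C_in, kH, kW), bias: (C_out,)
    N, C_in, H, W = x.shape
    C_out, _, kH, kW = weight.shape
    out = torch.zeros(N, C_out, H - kH + 1, W - kW + 1)
    for n in range(N):
        for co in range(C_out):
            for i in range(H - kH + 1):
                for j in range(W - kW + 1):
                    acc = bias[co].clone()
                    for ci in range(C_in):
                        for u in range(kH):
                            for v in range(kW):
                                # one MAC: modify this line to change the unit's behavior
                                acc = acc + x[n, ci, i + u, j + v] * weight[co, ci, u, v]
                    out[n, co, i, j] = acc
    return out

This is far too slow for real use, but it makes every individual multiply-accumulate explicit.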

If you want to run some experiments first and make sure your method works correctly, I would recommend using .unfold to create patches of your input activation and then applying your custom operations in the same way your conv layer would, as sketched below.
This approach will be slower than the optimized conv code, but it might be good enough for your experiments.
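
A minimal sketch of that unfold-based approach (unfolded_conv2d is a made-up name; weight and bias would come from your pretrained, possibly quantized, conv layer, and stride 1 with no padding is assumed to match the model above):

import torch
import torch.nn.functional as F

def unfolded_conv2d(x, weight, bias):
    N, _, H, W = x.shape
    C_out, _, kH, kW = weight.shape
    # patches: (N, C_in*kH*kW, L), one column per output location
    patches = F.unfold(x, (kH, kW))
    w = weight.view(C_out, -1)
    # elementwise products before accumulation: (N, C_out, C_in*kH*kW, L);
    # this is the place to apply per-multiplication changes
    prods = w.unsqueeze(0).unsqueeze(-1) * patches.unsqueeze(1)
    out = prods.sum(dim=2) + bias.view(1, -1, 1)  # accumulate and add bias
    return out.view(N, C_out, H - kH + 1, W - kW + 1)

# sanity check against the built-in conv
model = Simplenet().eval()
x = torch.randn(1, 3, 32, 32)
ref = model.conv1(x)
out = unfolded_conv2d(x, model.conv1.weight, model.conv1.bias)
print(torch.allclose(ref, out, atol=1e-5))  # True

Note that materializing prods costs a lot of memory; if you only need to change the accumulation rather than the individual products, the cheaper formulation is the batched matmul w @ patches, which collapses the multiply and sum into one step.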