How to feed changed output of a layer to network?

Hello, I change the output of one layer and I want to feed this changed output back into the network (i.e., pass the modified activation on to the following layers). I wrote the code below, but no changes are applied to the network!!! Please help me.

model.conv3.register_forward_hook(get_activation('conv3'))

x, labels = next(iter(test_loader))
a = activation['conv3']
a[9,0,1,:] = a[9,0,2,:]
a[9,0,2,:] = a[9,0,3,:]
a[9,0,3,:] = a[9,0,4,:]
activation['conv3'] = a

output = model(x)

plt.subplot(1, 3, 1)
plt.imshow(output[9][0].detach().numpy(), cmap="gray")
#print(activation['conv3'])
plt.subplot(1, 3, 2)
plt.imshow(activation['conv3'][9][0], cmap="gray")
plt.subplot(1, 3, 3)
plt.imshow(x[9][0], cmap="gray")

Please guide me with this problem.

We have taken the output of a layer. We want to make changes to this output and restart the network. What process should we use?

I don’t fully understand the use case of changing an activation and “restarting” the network, as the activation wouldn’t be used if you reset the model training.
In any case, you should manipulate the activation inside the forward hook if you want to pass the changed activation on to the next layers.
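For instance, a minimal sketch of that idea (the two-layer model, the layer name `'fc0'`, and the doubling manipulation are just placeholders; the dict mirrors the `get_activation` pattern from your snippet): record and modify the activation in the same hook, and return the modified tensor so the downstream layers actually receive it.

```python
import torch
import torch.nn as nn

activation = {}

def get_and_modify_activation(name):
    # Forward hook: a non-None return value replaces the layer's output,
    # so the following layers receive the modified tensor.
    def hook(module, input, output):
        modified = output * 2.0          # placeholder manipulation
        activation[name] = modified.detach()
        return modified
    return hook

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU())
handle = model[0].register_forward_hook(get_and_modify_activation('fc0'))

out = model(torch.ones(1, 4))  # built from the doubled fc0 output
```

Manipulating a copy of the activation after `model(x)` has returned, as in your code, cannot change `out`, because the forward pass is already finished by then.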

I want to manipulate the output of one layer and then feed the changed output to the following layers. I used the same code as above, but I can’t get the change applied to the layer’s output inside the model.


It seems you are still manipulating the activation after the forward pass has finished, so the model never sees the modified tensor. Take a look at this example to see how to manipulate the activation inside the hook:

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1, 1),
    nn.ReLU())
# Returning a value from a forward hook replaces the layer's output.
model[0].register_forward_hook(lambda m, input, output: output + 1000.)

x = torch.ones(1, 1)
out = model(x)
print(out)
> tensor([[1000.5511]], grad_fn=<ReluBackward0>)
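Applied to the snippet in the question, the row shifts can be moved into the hook itself. This is only a sketch: the `Net` module, its single `conv3` layer, and the input shape are assumptions, and batch/channel index `0` stands in for the `[9, 0]` indexing above.

```python
import torch
import torch.nn as nn

activation = {}

# Hypothetical model exposing a layer named conv3, as in the question.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv3 = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv3(x)

def shift_rows_hook(module, input, output):
    # Read from the untouched `output` and write into a clone, so no row
    # is overwritten before it is copied and autograd stays intact.
    out = output.clone()
    out[0, 0, 1, :] = output[0, 0, 2, :]
    out[0, 0, 2, :] = output[0, 0, 3, :]
    out[0, 0, 3, :] = output[0, 0, 4, :]
    activation['conv3'] = out.detach()
    return out  # the returned tensor replaces conv3's output

model = Net()
handle = model.conv3.register_forward_hook(shift_rows_hook)

x = torch.randn(1, 1, 8, 8)
out = model(x)  # `out` is built from the shifted conv3 activation
```

Because the hook returns the modified tensor, every layer after `conv3` receives the changed activation; calling `handle.remove()` restores the original behavior.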