In a CNN-based model with, say, five conv layers and two FCN layers, where the input is an image, how can I find out which pixels of the input correspond to the output of each layer, for example FCN1?
Is it possible?
Sure, you can compute something like torch.autograd.grad(output, inputs=(input,), grad_outputs=v),
where v is zeros_like(output) with v[i][j] = 1.
Doing that produces a tensor the same size as the input, and the non-zero values of that tensor correspond to position i, j of the output.
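Here is a minimal, self-contained sketch of that backward-tracing idea. The single conv layer and the sizes are made up for illustration; in your model you would replace them with the real layers and input:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy layer standing in for part of a CNN (illustrative only)
conv = nn.Conv2d(1, 1, kernel_size=3, bias=False)

inp = torch.randn(1, 1, 8, 8, requires_grad=True)
out = conv(inp)  # shape: (1, 1, 6, 6)

# Mark a single output unit with a 1, zeros everywhere else
v = torch.zeros_like(out)
v[0, 0, 2, 3] = 1.0

(grad,) = torch.autograd.grad(out, inputs=(inp,), grad_outputs=v)

# Non-zero entries of grad are the input pixels that feed output unit (2, 3):
# here, the 3x3 patch covering input rows 2..4 and cols 3..5
pixels = torch.nonzero(grad[0, 0])
print(pixels)
```

No retraining is needed; the gradient only uses the (pre-trained) weights, so this works on a frozen model as well.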
Thanks for your reply.
Unfortunately, I didn’t understand how this solution could work.
Where should I use this code? Is it necessary to train the model again? (I’m using a pre-trained model)
I couldn’t figure out how the output of the conv3 or FCN2 layers could be related to a certain pixel of the input image, for example a 3×320×320 image.
How can I keep the indices of the input image’s pixels and pass them on to the next layer, when each layer’s output is smaller than its input?
Imagine the size of the FCN1 feature map is 100. I need to know how to find out which pixel of the input image feature no. 84 is related to.
Could you please give me an example code?
No need to train the model again.
If you want to go from indices in the input image to indices in the intermediate layers, that is even more straightforward. You just need to pass a zero-filled input image with the single pixel that you wish to trace marked as 1, and then monitor which values in the intermediate layers are non-zero.
```
input = torch.zeros(30, 30)
input[i][j] = 1  # the pixel you wish to trace
output = fn(input)
# the indices at which the output is non-zero are related to the pixel you selected in the input
```
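A runnable version of that impulse trick, with a made-up conv layer standing in for `fn` (bias is disabled so that untouched outputs stay exactly zero):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

conv = nn.Conv2d(1, 1, kernel_size=3, bias=False)  # illustrative layer

impulse = torch.zeros(1, 1, 8, 8)
impulse[0, 0, 4, 4] = 1.0  # the input pixel to trace

with torch.no_grad():
    out = conv(impulse)  # shape: (1, 1, 6, 6)

# Output positions influenced by input pixel (4, 4): the 3x3 block of
# output rows 2..4 and cols 2..4, i.e. every window that covers (4, 4)
print(torch.nonzero(out[0, 0]))
```

Note that `bias=False` matters here: with a bias term, every output value would be non-zero and the trace would be meaningless.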
Thanks for your reply,
I don’t know the index of the input image pixel.
On the contrary, I want to know which input image pixel corresponds to a specific index in the intermediate layers’ output.
Here is my code:
```
output = model.conv2(model.conv1(input_image))
v = torch.zeros_like(output)
gradients = torch.autograd.grad(output, inputs=(input_image,), grad_outputs=v, retain_graph=True)
input_gradients = gradients[0]
print(input_gradients.shape)  # torch.Size([1, 3, 320, 320])
```
I ran this code to find the indices where [i][j] = 1:

```
nonzero_indices = torch.nonzero(input_gradients)
print(nonzero_indices)
```
But there is no index with value 1.
I tested other layers too.
It’s usually not related to just one pixel but to a number of pixels. Convolution kernels are good at learning to find certain edges, wherever they may be in the image. Every kernel gets applied to ALL of the image.
Consider a cat’s nose. It’s distinct from, say, a human nose, but may be similar to a fox’s nose. The kernels will distill that part of the image into a signal for that feature, which gets used by the linear layers, along with other features, to determine whether it is a fox or a cat.
So to answer your question, there isn’t a direct pixel path like that in convolution networks.
BUT, if you want to know more about the process, what you could do is merge the outputs of your first conv layer into one channel and view it visually, and then do the same for the other steps.
```
suboutput = torch.mean(model.conv1(cat_image), dim=1)
# then convert that to a black-and-white image for viewing with your preferred viewing library
```
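One way to finish that off without committing to a plotting library (the layer and input here are stand-ins for `model.conv1` and your image): scale the averaged map to 0–255 and hand it to whatever viewer you prefer.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

conv1 = nn.Conv2d(3, 16, kernel_size=3)  # stand-in for model.conv1
cat_image = torch.randn(1, 3, 64, 64)    # stand-in input image

with torch.no_grad():
    suboutput = torch.mean(conv1(cat_image), dim=1)  # merge 16 channels into 1

# Normalize to 0..255 so it can be saved/viewed as a grayscale image
lo, hi = suboutput.min(), suboutput.max()
gray = ((suboutput - lo) / (hi - lo) * 255).to(torch.uint8)
print(gray.shape)  # (1, 62, 62)
```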
Thanks for your reply. I need to map between input image indices and the layers’ feature maps.
Maybe my question is strange: can the related indices of the input image in each step be saved and passed to the next step during training?
What you’re asking regarding per-pixel indices is fundamentally a mathematical question. What I recommend you try first is to take a spreadsheet and map out just one kernel, say 3x3 (it can be randomized), and a tiny input image (also randomized), say 4x4 with just one channel. Then perform the convolution operation on the image. For me, it helps to visualize what I’m doing before I try to dive into coding it. Once you have a clear understanding of the math behind it, I think you’ll be better able to define what you’re trying to accomplish.
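The same spreadsheet exercise can be done in a few lines of PyTorch (sizes and seed are arbitrary), including checking one output value by hand against the 3x3 patch it came from:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
image = torch.randn(1, 1, 4, 4)   # tiny one-channel "image"
kernel = torch.randn(1, 1, 3, 3)  # one random 3x3 kernel

out = F.conv2d(image, kernel)     # no padding, stride 1 -> 2x2 output
print(out.shape)  # torch.Size([1, 1, 2, 2])

# Each output value is a dot product of the kernel with one 3x3 patch:
patch = image[0, 0, 0:3, 0:3]
manual = (patch * kernel[0, 0]).sum()
print(torch.allclose(manual, out[0, 0, 0, 0]))  # True
```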
As far as I understand, in the vast majority of convolution networks the original pixel index expands into 4 or more pixels every time you run it through a convolution layer*. So, ideally, for a good classifier, by the time you get to the fully connected part of the model (i.e. the Linear layers), every pixel in the image has interacted with every part of the vector that gets passed on. I think you’ll see this and get a better intuition for the question you’re posing if you try working it out in a spreadsheet first.
*The one exception to this is if you put the stride equal to the kernel size.
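That expansion can be checked empirically with the impulse trick from earlier in the thread (the layers and sizes here are invented for illustration): a single marked pixel touches a 3x3 patch after one 3x3 conv, and a 5x5 patch after two.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

conv_a = nn.Conv2d(1, 1, 3, padding=1, bias=False)  # illustrative layers
conv_b = nn.Conv2d(1, 1, 3, padding=1, bias=False)

impulse = torch.zeros(1, 1, 16, 16)
impulse[0, 0, 8, 8] = 1.0  # mark one pixel

with torch.no_grad():
    after_one = conv_a(impulse)
    after_two = conv_b(after_one)

# Count the positions the marked pixel has spread to at each depth
print(torch.nonzero(after_one[0, 0]).shape[0])  # 9  (3x3 patch)
print(torch.nonzero(after_two[0, 0]).shape[0])  # 25 (5x5 patch)
```

With stride equal to the kernel size (the exception noted above), the windows don’t overlap, so the marked pixel would land in exactly one output position per layer instead of spreading.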