Assume I have the representation
x = torch.rand(4, 8) for my input sentence (without a batch dimension; it is just a single sentence containing 4 words). I want a 1x8-dimensional tensor as output, which I can obtain with a
max pooling operation. However, since my sentences have different numbers of words, the first dimension of the x tensor will always differ, and I would therefore have to pad the input matrix. Instead, I wonder if I can avoid the padding by pooling over the variable first dimension directly.
If you create the max pooling layer so that the kernel size equals the input size in the temporal or spatial dimension, then yes, you can alternatively use
Based on the input shape and your desired output shape of
[1, 8], you could use
torch.max(x, 0, keepdim=True).values (note that torch.max with a dim argument returns both the values and the indices).
Alternatively, have a look at adaptive pooling layers, which yield the same output shape for variable sized inputs.
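To make both options concrete, here is a small sketch using the shapes from the question (the allclose check just shows that the two paths agree):

```python
import torch

# Sentences of different lengths: [num_words, embedding_dim]
x1 = torch.rand(4, 8)
x2 = torch.rand(7, 8)

# Reduce over the word dimension with torch.max; .values drops the indices
out1 = torch.max(x1, 0, keepdim=True).values  # shape [1, 8]
out2 = torch.max(x2, 0, keepdim=True).values  # shape [1, 8]

# Equivalent via adaptive pooling: nn.AdaptiveMaxPool1d pools over the
# last dimension, so permute to [1, emb_dim, num_words] first
pool = torch.nn.AdaptiveMaxPool1d(1)
out3 = pool(x1.t().unsqueeze(0)).squeeze(2)   # shape [1, 8]

print(torch.allclose(out1, out3))
```

Both approaches yield a [1, 8] output regardless of the number of words, so no padding is needed.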
How can I implement max pooling 2d using gather, unfold, and squeeze?
But then, what is the use of the squeeze function here? In conv2d we use unsqueeze; how can we use squeeze in max pooling?
squeeze just removes dimensions with a size of 1. I’m unsure if and where you need it, or what issue you are seeing with it.
I got it, thank you so much!
I took the max indices and used gather to get the values. My dimensions were [a, b, c, d, e], so I used squeeze on the last dimension (e) to remove it and got the correct result.
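A minimal sketch of what this could look like; the [a, b, c, d, e] layout from the post is assumed to come from two unfold calls plus a flatten, and the kernel size of 2 is chosen for illustration:

```python
import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 4, 4)  # [batch, channels, H, W]
k = 2

# unfold extracts k x k patches along H and W
patches = x.unfold(2, k, k).unfold(3, k, k)          # [1, 3, 2, 2, 2, 2]
patches = patches.contiguous().view(1, 3, 2, 2, -1)  # flatten each patch -> [a, b, c, d, e]

# take the argmax inside each patch and gather the corresponding values
idx = patches.argmax(dim=-1, keepdim=True)           # [1, 3, 2, 2, 1]
out = patches.gather(-1, idx).squeeze(-1)            # squeeze removes the size-1 dim

# matches the built-in pooling
ref = F.max_pool2d(x, k)
print(torch.allclose(out, ref))
```

Here squeeze is only needed at the very end: gather keeps the patch dimension with size 1, and squeeze(-1) removes it to produce the usual [N, C, H_out, W_out] output.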
I have a question on tweaking an image to trick or fool a model. There are two scenarios: one to get a higher accuracy (positive tweaking) and a second to get a lower accuracy (negative tweaking) compared to the trained model. I was thinking of adding the gradient of the image to the original image and calling the result an adversarial image. Is the concept correct?
Something like: fake_img = original_img + grad_original_img
Please help me understand.
It depends on what the gradient with respect to the input image represents, i.e. what the model is supposed to learn and what the loss, which is used to calculate the gradient, represents.
If you want to change the input data in order to reduce the loss, your approach might work.
However, since there don’t seem to be any constraints, I guess your input image might look “unnatural” after a few iterations.