I am trying to rewrite a Caffe model in PyTorch, but it uses a Crop module that is not implemented in PyTorch. The module takes as input a [1,3,224,224] tensor and a [1,1,300,300] tensor,
and outputs a [1,3,224,224] tensor, which should be a cropped version of the [1,1,300,300] one.
I tried slicing, but I still end up with a [1,1,224,224] tensor, and I really don’t understand what they are doing. Any idea what is happening and how I should do it in PyTorch?
I’m trying to read the Caffe docs, and it looks like they’re cropping a [1,1,300,300] tensor to the size [1,3,224,224]. I’m not sure what the behavior is when some dimensions of the resulting crop are larger than the original input; it’s possible that the values are repeated, or zero-filled.
That is what I thought, but I didn’t find any way to do it in PyTorch except slicing my tensor without taking the [1,3,224,224] tensor into account. Still searching, though.
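For what it’s worth, my understanding of Caffe’s Crop layer is that it crops the first bottom so that its dimensions from `axis` onward (default `axis=2`, i.e. the spatial dims only) match the second bottom, while earlier dimensions such as the channel count are left untouched. If that reading is right, a [1,1,224,224] result from a [1,1,300,300] input is actually the expected output. A rough sketch of that behavior in PyTorch (the function name `caffe_crop` and the `offset` handling are my own assumptions, not an official API):

```python
import torch

def caffe_crop(x, reference, axis=2, offset=0):
    """Sketch of Caffe's Crop layer: crop x so that its dimensions
    from `axis` onward match `reference`; earlier dims (batch,
    channels by default) are left untouched."""
    slices = [slice(None)] * x.dim()
    for d in range(axis, x.dim()):
        slices[d] = slice(offset, offset + reference.size(d))
    return x[tuple(slices)]

x = torch.randn(1, 1, 300, 300)    # tensor to crop
ref = torch.randn(1, 3, 224, 224)  # shape reference
out = caffe_crop(x, ref)           # default axis=2 crops only H and W
print(out.shape)                   # torch.Size([1, 1, 224, 224])
```

Note that with the default axis the channel dimension stays at 1, so to get a [1,3,224,224] tensor something other than cropping (e.g. an expand or a repeat) would have to happen afterwards.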