Hello, I want to implement a smoothness loss function for images by following the ImageDenoisingGAN paper (in this paper, they calculate the smoothness loss by sliding a copy of the generated image one unit to the left and one unit down, then taking the Euclidean distance between the shifted images). So far, their TensorFlow code looks like this:

I’m not sure if I just don’t understand the tf.slice operation, but it looks like the two horizontal* images have different widths. Shouldn’t horizontal_one_right be sliced as [:, :, :, 1:w]?
If not, you won’t be able to calculate the difference between the two sliced images, since their sizes differ.

Assuming it’s a typo, you could calculate the loss as:

loss = torch.pow(horizontal_normal-horizontal_one_right, 2).sum() / 2. + torch.pow(ver...
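A complete version of that loss might look like the sketch below. This is not the paper's code: the tensor names and the assumption of an NCHW layout are illustrative, and the vertical term is assumed to mirror the horizontal one.

```python
import torch

def smoothness_loss(img):
    """Sketch of the smoothness loss: half the summed squared difference
    between the image and copies shifted one pixel horizontally and one
    pixel vertically. Assumes img has shape (N, C, H, W)."""
    horizontal_normal = img[:, :, :, :-1]    # drop last column
    horizontal_one_right = img[:, :, :, 1:]  # drop first column
    vertical_normal = img[:, :, :-1, :]      # drop last row
    vertical_one_down = img[:, :, 1:, :]     # drop first row
    return (torch.pow(horizontal_normal - horizontal_one_right, 2).sum() / 2.
            + torch.pow(vertical_normal - vertical_one_down, 2).sum() / 2.)
```

Slicing both shifted and unshifted copies to width w-1 (and height h-1) keeps the paired tensors the same size, which is exactly the mismatch discussed above.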

Hi, thanks for your reply. The tf.slice documentation says: "This operation extracts a slice of size size from a tensor input starting at the location specified by begin. The slice size is represented as a tensor shape, where size[i] is the number of elements of the i’th dimension of input that you want to slice. The starting location (begin) for the slice is represented as an offset in each dimension of input." So I don’t think their code is wrong.
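For readers used to Python indexing, tf.slice's (begin, size) pair corresponds to begin:begin+size slicing, so the TensorFlow and PyTorch versions can look different yet select the same region. A small sketch of the correspondence (the tensor and offsets here are made up for illustration):

```python
import torch

t = torch.arange(12).reshape(3, 4)
begin, size = 1, 2  # tf.slice-style offset and length along dim 1

# tf.slice(t, [0, begin], [-1, size]) selects the same region as:
sliced = t[:, begin:begin + size]

# torch.narrow expresses the (begin, size) convention directly:
same = t.narrow(1, begin, size)
```

So a begin/size pair in the TensorFlow code translates to a begin:begin+size slice in PyTorch, which is why the widths must be made explicit when porting it.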
But for PyTorch, you are right: we need to give the same width. So I changed it following your guidance, like this:

What if you don’t know begin and size explicitly? Say begin is a list a and size is a list b, and I need something of the sort t[a[1]:b[1], …, a[len(a)]:b[len(b)]] for a tensor t. But I can’t iterate inside an index and can’t put a colon in any data structure.
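One way to do this is with Python's built-in slice objects, which *can* be stored in a data structure and stand in for the colon syntax. A sketch, assuming (per tf.slice's convention) that a holds the begin offsets and b the sizes; the helper name is made up:

```python
import torch

def dynamic_slice(t, begin, size):
    """Slice tensor t with runtime begin/size lists (tf.slice-style).
    slice(b, b + s) is the object form of the literal b:b+s syntax,
    and a tuple of slices indexes one dimension each."""
    return t[tuple(slice(b, b + s) for b, s in zip(begin, size))]

t = torch.arange(24).reshape(2, 3, 4)
out = dynamic_slice(t, begin=[0, 1, 2], size=[2, 2, 2])
# equivalent to writing t[0:2, 1:3, 2:4] by hand
```

If b instead holds end indices rather than sizes, the same idea applies with slice(a_i, b_i).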