How to convert tf.slice to pytorch

Hello, I want to implement a smooth loss function for images following the ImageDenoisingGAN paper (in that paper, the smooth loss is computed by sliding a copy of the generated image one unit to the left and one unit down, and then taking the Euclidean distance between the shifted images). Their TensorFlow code looks like this:

def get_smooth_loss(image):
    batch_count = tf.shape(image)[0]
    image_height = tf.shape(image)[1]
    image_width = tf.shape(image)[2]

    horizontal_normal = tf.slice(image, [0, 0, 0, 0], [batch_count, image_height, image_width - 1, 3])
    horizontal_one_right = tf.slice(image, [0, 0, 1, 0], [batch_count, image_height, image_width - 1, 3])
    vertical_normal = tf.slice(image, [0, 0, 0, 0], [batch_count, image_height - 1, image_width, 3])
    vertical_one_right = tf.slice(image, [0, 1, 0, 0], [batch_count, image_height - 1, image_width, 3])
    smooth_loss = tf.nn.l2_loss(horizontal_normal - horizontal_one_right) + tf.nn.l2_loss(vertical_normal - vertical_one_right)
    return smooth_loss

I want to convert this TensorFlow code to PyTorch but still can’t figure it out. Could someone help me convert it to PyTorch, or offer any suggestions?
Thanks

You can just use indices to get your slice:

b, c, h, w = image.size()
horizontal_normal = image[:, :, :, :w-1]
horizontal_one_right = image[:, :, :, 1:w-1]
...

I’m not sure if I’m just misunderstanding the tf.slice operation, but it looks like the two horizontal* images have different widths. Shouldn’t horizontal_one_right be sliced as [:, :, :, 1:w]?
If not, you won’t be able to calculate the difference between the two sliced images because of the size mismatch.

Assuming it’s a typo, you could calculate the loss as:

loss = torch.pow(horizontal_normal - horizontal_one_right, 2).sum() / 2. + torch.pow(vertical_normal - vertical_one_right, 2).sum() / 2.
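
For reference, tf.nn.l2_loss computes the sum of squared elements divided by two, so one way to mirror it in PyTorch would be a small helper like this (just a sketch):

import torch

def l2_loss(t):
    # PyTorch counterpart of tf.nn.l2_loss: sum(t ** 2) / 2
    return torch.pow(t, 2).sum() / 2.

# the smooth loss is then
# l2_loss(horizontal_normal - horizontal_one_right) + l2_loss(vertical_normal - vertical_one_right)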

Hi, thanks for your reply. The tf.slice operation is documented as: “This operation extracts a slice of size size from a tensor input starting at the location specified by begin. The slice size is represented as a tensor shape, where size[i] is the number of elements of the i-th dimension of input that you want to slice. The starting location (begin) for the slice is represented as an offset in each dimension of input.” So I don’t think their code is wrong.
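
For example, since size is a length rather than an end index, both horizontal slices cover the same number of columns. A tiny 1-D illustration with made-up values:

import torch

row = torch.arange(5)            # stands in for one image row, so w = 5
normal = row[0:0 + 4]            # like tf.slice(..., begin 0, size w - 1) -> elements 0..3
one_right = row[1:1 + 4]         # like tf.slice(..., begin 1, size w - 1) -> elements 1..4
print(normal.shape, one_right.shape)   # both torch.Size([4])
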
But for PyTorch you are right, both slices need to have the same width. So I changed it following your guide like this:

def get_loss(self, image):
    # image is expected in NCHW layout
    b, c, h, w = image.size()
    # all columns except the last vs. all columns except the first (shifted one pixel right)
    horizontal_normal = image[:, :, :, 0:w-1]
    horizontal_one_right = image[:, :, :, 1:w]
    # all rows except the last vs. all rows except the first (shifted one pixel down)
    vertical_normal = image[:, :, 0:h-1, :]
    vertical_one_right = image[:, :, 1:h, :]
    # sum of squared differences divided by 2, matching tf.nn.l2_loss
    loss = torch.pow(horizontal_normal - horizontal_one_right, 2).sum() / 2. \
        + torch.pow(vertical_normal - vertical_one_right, 2).sum() / 2.
    return loss

I hope it performs the same operation as tf.slice in this case ^^
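
For a quick sanity check with made-up sizes (self is unused, so I just pass None to call it standalone):

import torch

image = torch.rand(2, 3, 8, 8)    # batch of 2 RGB images, 8x8 pixels, NCHW layout
loss = get_loss(None, image)      # None stands in for self in this standalone test
print(loss)                       # non-negative scalar; 0 only for a perfectly flat image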

Use Python slicing directly:

tf.slice(x, begin=[a1, a2, a3, a4], size=[b1, b2, b3, b4]) -> x[a1:a1+b1, a2:a2+b2, a3:a3+b3, a4:a4+b4]
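
Applied to one of the slices from the first post, with made-up sizes and the NHWC layout of the TensorFlow code:

import torch

b, h, w = 2, 4, 5
image = torch.rand(b, h, w, 3)

# tf.slice(image, [0, 0, 1, 0], [b, h, w - 1, 3]) becomes:
horizontal_one_right = image[0:0 + b, 0:0 + h, 1:1 + (w - 1), 0:0 + 3]   # i.e. image[:, :, 1:w, :]
print(horizontal_one_right.shape)   # torch.Size([2, 4, 4, 3])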


What if you don’t know begin and size explicitly? Let’s say begin is a list a and size is a list b, and I need something like t[a[0]:a[0]+b[0], …, a[-1]:a[-1]+b[-1]] for a tensor t. But I can’t iterate inside the index and can’t put a colon in any data structure.
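
If I’m reading the question right, one way to build such an index dynamically is with Python slice objects, which can be stored in a tuple and used as the index. A rough sketch (dynamic_slice is just an illustrative name):

import torch

def dynamic_slice(t, begin, size):
    # build one slice per dimension and index with the tuple,
    # mirroring tf.slice(t, begin, size) for non-negative offsets
    return t[tuple(slice(b, b + s) for b, s in zip(begin, size))]

t = torch.rand(2, 4, 5, 3)
print(dynamic_slice(t, begin=[0, 0, 1, 0], size=[2, 4, 4, 3]).shape)   # torch.Size([2, 4, 4, 3])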