Random gap filling with values from the pixel distribution

I want to achieve the following and am wondering if there is a nice, efficient way in PyTorch:

Given an RGB image as a 3D tensor with gaps at certain pixel locations, I want to fill these gaps with pixels randomly drawn from the distribution of the existing pixels inside that image.

Does anyone have any idea? Thank you very much in advance!

Here is a small example. Inside

a = torch.tensor(
                  [[[3,4,0],[0,2,3],[1,3,0]],
                   [[1,3,0],[0,2,1],[1,2,0]],
                   [[3,3,0],[0,1,3],[1,2,0]]])

replace every [0,0,0] triplet along the first axis (axis = 0) with a triplet randomly drawn from the other triplets in a along that axis.

So a possible solution would be

a_filled = torch.tensor(
                         [[[3,4,2],[1,2,3],[1,3,2]],
                          [[1,3,2],[1,2,1],[1,2,2]],
                          [[3,3,1],[1,1,3],[1,2,1]]])

another one would be

a_filled = torch.tensor(
                         [[[3,4,4],[3,2,3],[1,3,1]],
                          [[1,3,3],[2,2,1],[1,2,1]],
                          [[3,3,3],[2,1,3],[1,2,1]]])

What do you mean by the distribution of existing pixels?

Here is a small example:

a = torch.tensor(
                  [[[3,4,0],[0,2,3],[1,3,0]],
                   [[1,3,0],[0,2,1],[1,2,0]],
                   [[3,3,0],[0,1,3],[1,2,0]]])

I would like to replace every pixel a[:, j, k] == [0,0,0] with a randomly drawn pixel != [0,0,0] from a (with replacement).
It is important that the three channels are not treated independently but jointly (to keep the “colors”).
Does this clear things up?

How about this? I’m not sure, though, whether you want the mean over all dims together or separately per channel.

import torch

a = torch.tensor(
    [[[3, 4, 0], [0, 2, 3], [1, 3, 0]],
     [[1, 3, 0], [0, 2, 1], [1, 2, 0]],
     [[3, 3, 0], [0, 1, 3], [1, 2, 0]]], dtype=torch.float)

# replaces every zero entry (element-wise) with the mean over all entries
a[a == 0] = a.mean()
a = a.to(torch.long)
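For the per-channel case, a variant could look like this (just a sketch; here the mean is taken over the non-zero entries of each channel, which may or may not be what is wanted):

```python
import torch

a = torch.tensor(
    [[[3, 4, 0], [0, 2, 3], [1, 3, 0]],
     [[1, 3, 0], [0, 2, 1], [1, 2, 0]],
     [[3, 3, 0], [0, 1, 3], [1, 2, 0]]], dtype=torch.float)

# fill the zeros of each channel with that channel's mean over its non-zero entries
for c in range(a.shape[0]):
    ch = a[c]                      # view into a, so the assignment modifies a
    ch[ch == 0] = ch[ch != 0].mean()
```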

Thank you for the suggestion!
I don’t want to gap-fill with the mean; rather, I want to randomly sample the gap fillings from the distribution of the other pixels. So in this example I want to randomly draw pixels from the multiset {[3,1,3],[4,3,3],[2,2,1],[3,1,3],[1,1,1],[3,2,2]} to fill the three gaps ([0,0,0]) in a.


You can do this, but it’s really inefficient ^^

import random
import torch

a_set = torch.unique(a).tolist()   # unique scalar values occurring in a
n_gaps = int((a == 0).sum())       # number of zero entries (element-wise)
a[a == 0] = torch.tensor([random.choice(a_set) for _ in range(n_gaps)])

I know; whatever comes out will likely be inefficient.
Still, your proposed solution is not what I desire. It just fills the three values inside each pixel with random values from the set {0,1,2,3,4}, assuming a uniform distribution. I don’t want that. I want to fill the gaps with actual pixels from my image a, which requires me to (1) keep the RGB triplets together and (2) sample over all triplets, not just the unique ones.

Oh, I got it wrong… we are talking about the shape HWC, right? And those “faulty” pixel triplets have the value [0, 0, 0]?

EDIT: Sorry, I meant CHW ^^


Actually, I thought my example was CHW, but that does not matter. Yes, the faulty pixels have the value [0,0,0].

For every [0,0,0] triplet, sample from a Gaussian whose mean and std are computed from the neighbouring pixels. You could consider a 5x5 window around that [0,0,0] pixel and get the mean and std from it.
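A rough sketch of that idea (per channel, assuming a float CHW tensor; border handling and windows containing other gaps are only treated minimally):

```python
import torch

def fill_from_local_gaussian(a, window=5):
    # a: float tensor of shape (C, H, W); gaps are [0, 0, 0] triplets
    c, h, w = a.shape
    gap = (a == 0).all(dim=0)                 # (H, W) mask of gap pixels
    r = window // 2
    out = a.clone()
    for i, j in gap.nonzero().tolist():
        patch = a[:, max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
        flat = patch.reshape(c, -1)
        keep = ~(flat == 0).all(dim=0)        # drop other gaps in the window
        mu = flat[:, keep].mean(dim=1)        # per-channel mean of the window
        sigma = flat[:, keep].std(dim=1)      # per-channel std of the window
        out[:, i, j] = torch.normal(mu, sigma)
    return out

a = torch.tensor(
    [[[3, 4, 0], [0, 2, 3], [1, 3, 0]],
     [[1, 3, 0], [0, 2, 1], [1, 2, 0]],
     [[3, 3, 0], [0, 1, 3], [1, 2, 0]]], dtype=torch.float)

a_filled = fill_from_local_gaussian(a)
```

Note that the sampled values are continuous (and can be negative), so for a real image one would probably clamp and round them to the valid pixel range.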


Hmm, that is a possibility, but it also misses the covariances between the R, G, and B channels. An example: say my image has either purple-ish or green-ish pixels of varying brightness, i.e. pixels of the form (0,g,0) or (p,0,p). Following your approach, I could end up with a pixel (p,g,p), which is not in the desired distribution of the image.
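For the record, the joint sampling I am after can be done vectorized; a minimal sketch (assuming a float CHW tensor and sampling with replacement):

```python
import torch

a = torch.tensor(
    [[[3, 4, 0], [0, 2, 3], [1, 3, 0]],
     [[1, 3, 0], [0, 2, 1], [1, 2, 0]],
     [[3, 3, 0], [0, 1, 3], [1, 2, 0]]], dtype=torch.float)

pixels = a.reshape(a.shape[0], -1)   # (C, H*W): one column per pixel
gap = (pixels == 0).all(dim=0)       # True where the whole triplet is [0, 0, 0]
valid = pixels[:, ~gap]              # the non-gap triplets, kept together

# draw one valid column index per gap, with replacement, and copy the triplets in
idx = torch.randint(valid.shape[1], (int(gap.sum()),))
pixels[:, gap] = valid[:, idx]
a_filled = pixels.reshape(a.shape)
```

Because whole columns are copied, only triplets that actually occur in the image can end up in the gaps, so the channel covariances survive.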

Does this matter much nowadays? A CNN is pretty robust to such small differences.