How to ignore white pixels during training?

I have a training dataset of objects, all on a white background, so the neural network can learn the objects themselves without any background noise.
After training, when I test the accuracy on an image of an object with a real background, the following happens: the test image contains the object plus the background, so there are no white pixels, and the CNN outputs the class that had the fewest white pixels during training.

How can I ignore white pixels in an image during training in PyTorch?

I made an attempt like so:

This is the forward function of my net:

def forward(self, x):
    # store the original input so we can locate white pixels later
    y = x
    # first conv layer, then pad back to the input's spatial size
    x = F.pad(input=F.relu(self.conv1(x)), pad=(2, 2, 2, 2), mode='constant', value=0)

    for batch in range(len(x)):  # batch dimension
        for feature in range(len(x[batch])):  # 16 features as defined in the conv layer
            for height in range(len(x[batch][feature])):  # height
                for width in range(len(x[batch][feature][height])):  # width
                    # the pixel is white if all three RGB channels are 1.0
                    if (y[batch][0][height][width] == 1.0 and
                            y[batch][1][height][width] == 1.0 and
                            y[batch][2][height][width] == 1.0):
                        x[batch][feature][height][width] = 0  # zero the feature

But this doesn’t work.
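For what it's worth, the same masking can be expressed without the four Python loops (which are very slow and mutate the feature maps element by element) as a single broadcasted comparison. A minimal sketch, assuming the input is RGB with white encoded as 1.0 in all three channels, and that the feature maps have the same spatial size as the input (which your padding arranges):

```python
import torch

def mask_white_pixels(inputs, features, white_value=1.0):
    """Zero feature-map activations wherever the input pixel is white.

    inputs:   (N, 3, H, W) RGB batch; white pixels have all channels == white_value
    features: (N, C, H, W) feature maps with the same H and W as the inputs
    """
    # (N, 1, H, W) boolean mask: True where all three channels are white
    white = (inputs == white_value).all(dim=1, keepdim=True)
    # broadcast the mask over the channel dimension and zero those positions
    return features * (~white).to(features.dtype)
```

In the forward function above this would replace the nested loops with `x = mask_white_pixels(y, x)`.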

Perhaps you can run a filter such as a median filter to remove some of the noise pixels.
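To illustrate that idea, here is a small sketch using `scipy.ndimage.median_filter` (one possible choice; PyTorch itself has no built-in median filter). A median filter replaces each pixel with the median of its neighborhood, so isolated outlier pixels disappear:

```python
import numpy as np
from scipy.ndimage import median_filter

# Synthetic example: a flat gray image with a single "salt" noise pixel.
img = np.full((8, 8), 0.5, dtype=np.float32)
img[4, 4] = 1.0  # isolated noise spike

# A 3x3 median filter: eight of the nine neighborhood values are 0.5,
# so the median is 0.5 and the spike is removed.
denoised = median_filter(img, size=3)
print(denoised[4, 4])  # 0.5
```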