Should convolution with a (+1,-1) valued kernel be faster than with a regular one?

I apologize for the long question, and if it seems like a strange thing to ask.

If I have an N by N input image and I convolve it with a kernel whose entries are all +1 or -1, should this operation be faster than with a kernel of regular random numbers (say, drawn from a standard normal distribution)? To me, multiplying an image value by a kernel value during the convolution merely flips the sign, but it is still executed as an ordinary floating-point multiplication, so it should not change the operation count or the overall complexity. However, my colleague tells me that I'm wrong and that we should see a speedup, which makes me doubt that I am implementing everything correctly.
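To make the sign-flip argument concrete, here is a toy sketch of what I mean (my own illustration, not anything PyTorch does internally): a (+1,-1) kernel can in principle be evaluated with additions and subtractions only, by splitting it into its positive and negative parts.

import torch
from torch.nn import functional as F

torch.manual_seed(0)
kernel = torch.randn(8, 3, 3, 3).sign_()   # small (+1,-1) kernel
image = torch.randn(1, 3, 32, 32)

# Split the kernel into {0,1} masks of its positive and negative entries,
# so that kernel == pos - neg.
pos = (kernel > 0).float()
neg = (kernel < 0).float()

# A (+1,-1) convolution is the sum over the +1 positions minus the sum
# over the -1 positions, i.e. additions/subtractions only in principle.
out_direct = F.conv2d(image, kernel, padding=1)
out_split = F.conv2d(image, pos, padding=1) - F.conv2d(image, neg, padding=1)
print(torch.allclose(out_direct, out_split, atol=1e-5))  # prints True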

So far I have tried the %%timeit magic in a Jupyter notebook to measure the time per convolution with (+1,-1) kernels and with regular ones, and I saw no difference. My main doubt is: should there be any difference at all? And is there a way to speed this up for such a specific kernel?

import torch
from torch.nn import functional as F

# 255 output channels, 3 input channels, 3x3 kernels; sign_() maps every
# entry to +1 or -1 in place (randn already returns float32).
kernel = torch.randn(255, 3, 3, 3).sign_()
image = torch.randn(1, 3, 224, 224)

%%timeit
F.conv2d(image, kernel, stride=1, padding=1)

This gives me: 10 loops, best of 3: 78.3 ms per loop, while running without the sign_() call yields 78.9 ms per loop.
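For completeness, here is a standalone timing sketch I would run outside the notebook, in case the %%timeit magic itself is the issue (a rough check, not a rigorous benchmark):

import timeit
import torch
from torch.nn import functional as F

image = torch.randn(1, 3, 224, 224)
dense = torch.randn(255, 3, 3, 3)   # regular Gaussian kernel
signed = dense.sign()               # same shape, (+1,-1) entries

for name, kernel in [("dense", dense), ("signed", signed)]:
    # Average over many calls to smooth out timing noise.
    t = timeit.timeit(lambda: F.conv2d(image, kernel, stride=1, padding=1),
                      number=100)
    print(f"{name}: {t / 100 * 1e3:.2f} ms per call")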

Should there be any significant difference?