# Depthwise convolution for gradient filters

I am trying to make a Sobel-type filter by adapting the following TensorFlow code to PyTorch.

```python
import numpy as np
import skimage.data
import tensorflow as tf

def get_gradient_filters():
    # kernel in TensorFlow's HWIO layout: (kH, kW, in_channels, out_channels)
    np_grad_x = np.asarray([[-3, 0, 3], [-10, 0, 10], [-3, 0, 3]],
                           dtype=np.float32).reshape((3, 3, 1, 1))
    return tf.constant(np_grad_x)

image = skimage.data.astronaut().astype(np.float32) / 255.

tf_f = tf.expand_dims(tf.constant(image), 0)  # add batch dim: NHWC
```

Here is my attempt, which does not give the same result.

```python
import numpy as np
import torch
import torch.nn as nn

def gradient_filter(C):
    np_grad_x = np.asarray([[-3, 0, 3], [-10, 0, 10], [-3, 0, 3]],
                           dtype=np.float32).reshape((3, 3, 1, 1))
    # depthwise conv: one 3x3 filter per channel; PyTorch weights are (out, in/groups, kH, kW)
    filter = nn.Conv2d(C, C, kernel_size=3, groups=C, bias=False)
    filter.weight.data = torch.from_numpy(np_grad_x).permute(3, 2, 0, 1).repeat(C, 1, 1, 1)
    return filter

torch_f = torch.Tensor(image)
M, N, _ = torch_f.shape
# NHWC -> NCHW before applying the filter
out = gradient_filter(3)(torch_f.unsqueeze(dim=0).permute(0, 3, 1, 2)) / (1. / max(M, N))
```
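For reference, a minimal single-channel sketch of the layout conversion in isolation (TensorFlow stores kernels as `(kH, kW, in, out)`, PyTorch as `(out, in, kH, kW)`); the ramp test image here is made up for illustration:

```python
import numpy as np
import torch
import torch.nn.functional as F

# kernel in TensorFlow's HWIO layout: (kH, kW, in_channels, out_channels)
np_grad_x = np.asarray([[-3, 0, 3], [-10, 0, 10], [-3, 0, 3]],
                       dtype=np.float32).reshape((3, 3, 1, 1))

# permute to PyTorch's OIHW layout: (out_channels, in_channels, kH, kW)
weight = torch.from_numpy(np_grad_x).permute(3, 2, 0, 1)

# horizontal ramp x[..., i, j] = j, so the horizontal-gradient response is constant
x = torch.arange(8, dtype=torch.float32).repeat(8, 1).view(1, 1, 8, 8)
out = F.conv2d(x, weight)  # valid (no padding) -> 6x6 output

# kernel column sums are (-16, 0, 16) at offsets (0, 1, 2): 0*(-16) + 1*0 + 2*16 = 32
print(out)  # every entry is 32
```

If the permuted weights give a different result than the TF version, the layout conversion is the first thing to double-check.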

I also tried some minor variations (first permuting, then reshaping), but the result is still not the same. Any ideas?

How large is the maximum absolute difference? If it's in the range of approx. `1e-6`, you might be running into the floating-point precision limit of `float32`.
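As a rough illustration (with random tensors, not the actual data from the question), comparing a depthwise convolution in `float32` against the same computation in `float64` shows the typical magnitude of this rounding error:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 3, 64, 64)
w = torch.randn(3, 1, 3, 3)  # depthwise weights: (out, in/groups, kH, kW)

out32 = F.conv2d(x, w, groups=3)
out64 = F.conv2d(x.double(), w.double(), groups=3)

# float32 rounding error is typically around 1e-6 at this scale
print((out32.double() - out64).abs().max())
```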

The differences are not that small (some output values differ by as much as `1e2`), but for some example images every term of one output did seem to correspond to the matching term in the other output up to some precision error. What do you recommend in that case?

Edit: I tried with float64 and the difference is indeed now at most `1e-5`. Is it possible to decrease this even further?

Did you see a difference of `1e2` for some output values while others were much smaller?
If so, this sounds like an overflow issue, which I wouldn't expect to happen in `float32`.
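One way to tell rounding noise from a real mismatch (with made-up numbers, not the thread's tensors) is to look at the relative error, or to use `torch.allclose`, which combines absolute and relative tolerances:

```python
import torch

a = torch.tensor([100.0, 1e-3, 5.0])
b = a + torch.tensor([1e-4, 1e-9, -1e-5])  # simulated numerical drift

max_abs = (a - b).abs().max()          # dominated by the largest value
max_rel = ((a - b).abs() / a.abs()).max()  # uniform ~1e-6 across magnitudes
print(max_abs, max_rel)

# allclose checks |a - b| <= atol + rtol * |b| elementwise
print(torch.allclose(a, b, rtol=1e-5, atol=1e-6))
```

A small relative error across all elements points to floating-point precision; a large relative error on some elements points to a genuine bug (e.g. a layout mismatch).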

I cannot reproduce the example with the `1e2` difference right now; I think it was caused by something else at the time. It seems my code might be correct after all and that the difference is due to floating-point precision.