Why are weights (or kernel) floats and not integers in torch.nn.Conv2d?


I am learning about CNN using PyTorch.

In many online notes, the kernel is always a matrix (or tensor) of integers.

However, when I get the weights (or kernel) from the code below, they are floats.

import torch.nn as nn

a = nn.Conv2d(1, 1, kernel_size=(3, 3), padding=1)
print(a.weight.dtype)  # torch.float32

I would like some guidance on why the weights are floats and not integers, or whether I have got my fundamentals wrong.

Many thanks for any help given.

I have tried Googling this topic with the queries below, but I could not find any results:

  • “why weights in PyTorch Conv2d floats and not integers?”
  • “why kernel in PyTorch Conv2d floats and not integers?”

The weight matrix, when initialised, is made up of floats only. Integers are limiting; floats can represent a much wider range of values with far greater precision. (Answer found via Google.)
Even if the values started out as integers, they would not stay integers for long: the gradient updates computed during backpropagation turn them into floats.
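A minimal sketch of this point: even if the kernel is initialised with whole-number values (stored as floats), the gradients produced by backpropagation are fractional in general. The layer sizes and input here are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

# Kernel initialised with integer-valued entries, but stored as float32.
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
with torch.no_grad():
    conv.weight.fill_(1.0)

x = torch.rand(1, 1, 5, 5)
loss = conv(x).mean()
loss.backward()

print(conv.weight.grad.dtype)  # torch.float32
print(conv.weight.grad)        # fractional values in general
```

One gradient step (e.g. `weight -= lr * weight.grad`) would then leave the weights non-integer, which is why integer storage would not survive training anyway.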


Integer values are not differentiable in PyTorch as they are discrete. Floating-point values are of course also limited by their dtype, but we treat them as continuous.
Examples use integer values because they are easier to follow in manual computations.
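You can see this restriction directly: a floating-point tensor can track gradients, but asking an integer tensor to do so raises an error.

```python
import torch

# Floating-point tensors can track gradients...
w = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
print(w.requires_grad)  # True

# ...but integer tensors cannot, because discrete values are not differentiable.
try:
    torch.tensor([1, 2, 3], dtype=torch.int64, requires_grad=True)
except RuntimeError as e:
    print(e)  # only floating point and complex dtypes can require gradients
```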


Oh I see. Thank you for the explanation.

For a layman like me, it was very confusing, as most online notes show the weights as integers, not floats.
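For what it's worth, those integer kernels from the notes can still be used in PyTorch; they just get stored as floats. A sketch using a classic Sobel edge-detection kernel as the example (the kernel choice is mine, not from the thread):

```python
import torch
import torch.nn as nn

# A "textbook" integer kernel (Sobel, x-direction), written as floats.
sobel_x = torch.tensor([[-1., 0., 1.],
                        [-2., 0., 2.],
                        [-1., 0., 1.]])

conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
with torch.no_grad():
    conv.weight.copy_(sobel_x.view(1, 1, 3, 3))

# The values are whole numbers, but the dtype is still floating point.
print(conv.weight.dtype)  # torch.float32
```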

On this note, I realised that I made an error in the title of my post: the kernel is the same as the filter, and the weights are the values inside the kernel (or filter).

Thank you once again.

Yes, your explanation is clear. It makes sense now. Using integers to illustrate the calculations makes it easier for a layman like me to visualise and understand.

Thank you very much for your help.