Does increasing kernel size increase spatial invariance?

Quick question: would using bigger kernel sizes provide more spatial invariance (even if it's slight)?

What do you mean by “more spatial invariance”?

Meaning the algorithm would be better at handling changes in the spatial location of a certain object in the image if the kernel were larger. Does that help?

I’m not sure why that should be called an invariance, and the behavior in this respect depends on the task and the rest of your formulation. However, the general consensus on larger kernel sizes is that they allow capturing a larger context, since they increase the receptive field of an output value.
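To make the receptive-field point concrete, here is a small sketch (a hypothetical helper, not from any particular library) that computes the receptive field of one output value after a stack of convolution layers, using the standard recurrence: each layer adds `(k - 1) * jump` to the receptive field, where `jump` is the product of the strides so far.

```python
def receptive_field(kernel_sizes, strides=None):
    """Receptive field (in input pixels) of a single output value
    after a stack of convolution layers."""
    if strides is None:
        strides = [1] * len(kernel_sizes)  # assume stride-1 convs by default
    rf, jump = 1, 1
    for k, s in zip(kernel_sizes, strides):
        rf += (k - 1) * jump  # each layer widens the field by (k-1) * jump
        jump *= s             # strides compound the per-layer growth
    return rf

print(receptive_field([3, 3, 3]))  # three stacked 3x3 convs -> 7
print(receptive_field([7, 7, 7]))  # three stacked 7x7 convs -> 19
```

So a stack of 7×7 kernels sees a 19-pixel-wide context where the same depth of 3×3 kernels sees only 7 pixels; that wider context is the agreed-upon effect of larger kernels, independent of any invariance claim.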