Changing learning rate or kernel size causes loss to diverge to NaN

I've created a very simple CNN with one convolution layer and one linear layer.
Whenever I use lr = 0.01 and kernel_size = 1 it trains fine, but any other
values make the loss blow up to NaN within a few steps.
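For reference, here is a minimal sketch of the kind of model described above. The input shape (single-channel 28x28), channel count, and class count are assumptions for illustration, not taken from the original post:

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """One conv layer + one linear layer, as described above.

    Hypothetical shapes: 1-channel 28x28 input, 8 conv filters, 10 classes.
    """

    def __init__(self, kernel_size=1):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=kernel_size)
        # Spatial size after an unpadded, stride-1 convolution.
        out = 28 - kernel_size + 1
        self.fc = nn.Linear(8 * out * out, 10)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        return self.fc(x.flatten(1))

model = SimpleCNN(kernel_size=3)
x = torch.randn(4, 1, 28, 28)
logits = model(x)
print(logits.shape)  # torch.Size([4, 10])
```

Note that with no padding, the linear layer's input size depends on `kernel_size`, so the two hyperparameters interact; a larger kernel also changes the scale of the conv output, which can make a learning rate that was stable at one setting diverge at another.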