Gradients are 0, weights not updating

I’m initializing the weight and bias of my first convolutional layer to an arbitrary tensor and freezing them. Unfortunately, this leads to the gradients of several other layers becoming all 0s. Does anyone know a solution to this?

This is the code I’m setting the layer’s weights with. I made sure that requires_grad is True for all other layers.

import torch
import torch.nn as nn

m.weight = nn.Parameter(torch.from_numpy(d).float())  # copy my arbitrary weights in
nn.init.constant_(m.bias, 0)  # set bias = 0
m.weight.requires_grad = False  # freeze weight
m.bias.requires_grad = False   # freeze bias
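
In case it helps, here is a minimal self-contained sketch of my setup. The Net class, shapes, and the random d below are just stand-ins for my actual model; the freezing pattern is the same.

import numpy as np
import torch
import torch.nn as nn

# Stand-in model; my real model is larger but follows the same pattern.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(8, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 32 * 32, 10)

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        return self.fc(x.flatten(1))

net = Net()
d = np.random.randn(8, 3, 3, 3)  # placeholder for my arbitrary weight tensor

# Freeze the first conv layer with the fixed weights and zero bias.
m = net.conv1
m.weight = nn.Parameter(torch.from_numpy(d).float())
nn.init.constant_(m.bias, 0)
m.weight.requires_grad = False
m.bias.requires_grad = False

# One backward pass, then inspect gradients of the other layers.
x = torch.randn(4, 3, 32, 32)
loss = net(x).sum()
loss.backward()
for name, p in net.named_parameters():
    grad = None if p.grad is None else p.grad.abs().sum().item()
    print(name, p.requires_grad, grad)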

Thanks so much!