Leaky ReLU only gives negative values or values close to 0

I have normalized my data to zero mean and unit variance. As a sanity check, I want to see if I can overfit a small amount of data. I created a single fully connected layer with an input size of 1024 and an output size of 9. I use torch.nn.init.kaiming_uniform_ to initialize the weights, and I apply batch norm before the Leaky ReLU activation for this layer (roughly as in the sketch below).
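For reference, this is approximately what my setup looks like (a simplified sketch; the dummy input at the end is just to show the shapes, and the exact batch norm placement is as described above):

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # single fully connected layer: 1024 inputs -> 9 outputs
        self.fc = nn.Linear(1024, 9)
        torch.nn.init.kaiming_uniform_(self.fc.weight)
        # batch norm applied before the Leaky ReLU activation
        self.bn = nn.BatchNorm1d(9)
        self.act = nn.LeakyReLU()

    def forward(self, x):
        return self.act(self.bn(self.fc(x)))

model = Net()
x = torch.randn(32, 1024)          # dummy batch of normalized inputs
out = model(x)                     # outputs end up negative or near zero
print(out.min().item(), out.max().item())
```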

I have been training for almost 50 epochs with a batch size of 32 and only 100 batches in total. However, my network's output is either negative or close to zero. I am trying to regress to label values that lie between 0 and 1. I previously tried ReLU (with the same weight initialization), but in that case the output was almost always entirely zero. What can I do to steer my output towards more positive values?

Try removing the last activation function and using the output of the last linear layer directly as the prediction (if that's not already the case).
If that doesn't help, play around with other hyperparameters, such as the learning rate.
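For example, something like this (a minimal sketch, assuming your model is just the single linear layer you described; the MSE loss and Adam optimizer here are placeholder assumptions, since you didn't mention which ones you use):

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(1024, 9)
        torch.nn.init.kaiming_uniform_(self.fc.weight)

    def forward(self, x):
        # no batch norm or activation on the last layer:
        # the raw linear output is used as the prediction
        return self.fc(x)

model = Net()
criterion = nn.MSELoss()                                    # assumed regression loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # assumed optimizer/lr

x = torch.randn(32, 1024)      # dummy batch of normalized inputs
target = torch.rand(32, 9)     # dummy targets in [0, 1]

optimizer.zero_grad()
loss = criterion(model(x), target)
loss.backward()
optimizer.step()
```

Since the last layer is now unbounded, the network is free to predict values above zero, and overfitting a small dataset should push the outputs into the [0, 1] range of your targets.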