Training of FCRN for depth prediction on my own dataset

Hello everyone, I am using the PyTorch implementation of Deeper Depth Prediction (FCRN) by Laina et al.
I have prepared my own dataset of indoor scenes from my environment and want to train the model on it. I initialize with the pretrained weights and freeze all layers except the up-projection blocks, but the results are not good: there is a fixed pattern of squares in the predicted depth. Even when I trained on a dataset as small as 600 images and reached 82% accuracy, the results were still visually poor.
I do not know the reason for this; maybe you can suggest something.
Additionally, the accuracy is not improving beyond 82%. Can you give me any suggestions on how to improve it? I need the depth for SLAM, which requires higher accuracy, and I want the model to perform well on my dataset of around 25,000 images.
The images I am currently training on number approximately 6k.
The pretrained NYU weights actually perform better than my fine-tuned model.
batch_size = 32
learning_rate = 1.0e-3
momentum = 0.9
weight_decay = 0.0005
num_epochs = 70
loss: BerHu
optimizer = torch.optim.SGD(filter(lambda p: p.requires_grad, model.parameters()), lr=learning_rate,
                            momentum=momentum, weight_decay=weight_decay)
and the lr is halved after 10 epochs.
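For reference, here is a minimal, self-contained sketch of the setup described above: freezing everything except the up-projection decoder, the BerHu (reverse Huber) loss from the Laina et al. paper, and the SGD optimizer with a step LR schedule. The stand-in two-layer model and the `upproj` module name are placeholders, not the actual FCRN layer names in the repo.

```python
import torch
import torch.nn as nn

# Stand-in model for illustration only; in practice this is the FCRN
# network with its ResNet encoder and up-projection decoder.
model = nn.Sequential()
model.add_module("encoder", nn.Conv2d(3, 8, 3, padding=1))
model.add_module("upproj", nn.ConvTranspose2d(8, 1, 2, stride=2))

# Freeze all parameters except those in the up-projection blocks
# (here selected by a hypothetical "upproj" name prefix).
for name, p in model.named_parameters():
    p.requires_grad = "upproj" in name

def berhu_loss(pred, target):
    """Reverse Huber (BerHu) loss: L1 below the threshold c, scaled L2
    above it, with c = 0.2 * max|pred - target| over the batch."""
    diff = (pred - target).abs()
    c = (0.2 * diff.max()).clamp(min=1e-6)  # avoid division by zero
    l2 = (diff ** 2 + c ** 2) / (2 * c)
    return torch.where(diff <= c, diff, l2).mean()

# Optimize only the unfrozen parameters; halve the lr every 10 epochs.
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()),
    lr=1.0e-3, momentum=0.9, weight_decay=0.0005)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
```

In the training loop, `scheduler.step()` is called once per epoch after `optimizer.step()`.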

Validation depth image: [screenshot, 2019-06-27 21-37-28]

RGB image: [screenshot, 2019-06-27 21-44-59]