Caffe-Style Preprocessing in PyTorch for CUB_200

I am training a Bayesian neural network (based on this work) on CUB_200. My base architecture is torchvision's ResNet18. When training this model, my cross_entropy loss becomes nan in the first iteration, and I am trying to debug it. While debugging, I noticed that the images are not normalized so that their values lie between 0 and 1 (or -1 and 1); instead they are normalized Caffe style, i.e. a per-channel mean is subtracted from the raw pixel values. This is the code:
For every image:

import cv2
import numpy as np


def _read_images_from_list(imagefile_list):
    imgs = []
    for imagefile in imagefile_list:
        #print("Reading img: ", imagefile)
        img = cv2.imread(imagefile).astype(np.float32)
        img = cv2.resize(img, (224, 224))
        # Convert RGB to BGR
        img_r, img_g, img_b = np.split(img, 3, axis=2)
        img = np.concatenate((img_b, img_g, img_r), axis=2)
        img -= np.array((103.94, 116.78, 123.68), dtype=np.float32)  # BGR mean
        #img -= np.array((123.68, 116.78, 103.94), dtype=np.float32) # RGB mean
        # HWC -> CHW, compatible with pytorch
        img = np.transpose(img, [2, 0, 1])

        imgs += [img]
    return imgs
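As a quick sanity check that this produces values far outside [0, 1], the mean subtraction can be replicated on a synthetic image (a numpy-only sketch, no image files needed):

```python
import numpy as np

# Synthetic 224x224 "image" covering the full uint8 value range,
# standing in for the output of cv2.imread after the resize step.
img = np.random.randint(0, 256, size=(224, 224, 3)).astype(np.float32)

# Same per-channel mean subtraction as in _read_images_from_list.
img -= np.array((103.94, 116.78, 123.68), dtype=np.float32)

# Values now span roughly [-123.68, 151.06], not [0, 1] or [-1, 1].
print(img.min(), img.max())
```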

and then later, in the dataloader's `__getitem__()` function, they convert the images to torch tensors:

    def __getitem__(self, index):
        img = torch.from_numpy(self._images[index])
        target = self._labels[index]
        task_labels = self.task_labels
        return img, target, task_labels

I wonder why they chose this type of normalization, and why the channels are swapped in this way? Since the model is a torchvision model, I would assume RGB input should be fine?
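For comparison, this is roughly the normalization I would have expected for a torchvision model. `pytorch_style_preprocess` is just an illustrative name, and the mean/std values are the usual ImageNet statistics passed to `transforms.Normalize`; this numpy sketch mimics what `transforms.ToTensor()` + `transforms.Normalize(...)` do:

```python
import numpy as np

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)  # RGB
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)   # RGB

def pytorch_style_preprocess(img_rgb_uint8):
    """Numpy equivalent of ToTensor() + Normalize() for an RGB HWC uint8 image."""
    img = img_rgb_uint8.astype(np.float32) / 255.0     # [0, 255] -> [0, 1]
    img = (img - IMAGENET_MEAN) / IMAGENET_STD         # per-channel RGB normalization
    return np.transpose(img, (2, 0, 1))                # HWC -> CHW for pytorch
```

With this, a white pixel ends up around 2.2 in the red channel rather than around 150 as in the Caffe-style code above.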
Thank you in advance!