Normalization of 3D medical images

Hi! I have an image dataset consisting of 71 MRI volumes with dimensions 180x192x192, containing FLAIR and T1 scans of the brain. The goal is to segment multiple sclerosis lesions. I have 11 GB of GPU RAM, and the approach I am going for is a 2D encoder-decoder architecture.

Currently I am sending in a batch of 64 image slices. Each slice has the dimension 3x192x192, where the channels are [left neighbour, slice of interest, right neighbour].
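
For reference, a minimal sketch of how I build these 2.5D inputs (assuming the volume is a NumPy array with the slice axis first; the helper name and edge handling are just my own choices):

```python
import numpy as np

def make_25d_input(volume: np.ndarray, idx: int) -> np.ndarray:
    """Stack [left neighbour, slice of interest, right neighbour] into a 3xHxW input.

    volume: (D, H, W) array, e.g. (180, 192, 192). Edge slices are clamped,
    so the first/last slice simply reuses itself as its missing neighbour.
    """
    left = volume[max(idx - 1, 0)]
    mid = volume[idx]
    right = volume[min(idx + 1, volume.shape[0] - 1)]
    return np.stack([left, mid, right], axis=0)  # shape (3, H, W)
```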

My question is: should I normalize per volume, or per slice (i.e. normalizing the 3 channels of each input together)?
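
To make the two options concrete, this is roughly what I mean by each (z-score normalization assumed; the function names are just placeholders):

```python
import numpy as np

def normalize_per_volume(volume: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Z-score the whole 3D volume once, so every slice shares the same statistics."""
    return (volume - volume.mean()) / (volume.std() + eps)

def normalize_per_slice(stack: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Z-score a single 3x192x192 input independently of the rest of the volume."""
    return (stack - stack.mean()) / (stack.std() + eps)
```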

Another question: since I have both T1 and FLAIR images, can I first train on the FLAIR images and then use the saved weights as a starting point for training on T1?
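
In case it matters, this is how I was planning to reuse the weights (PyTorch assumed; the model here is just a stand-in for my actual encoder-decoder, and the file name is arbitrary):

```python
import torch
import torch.nn as nn

# Stand-in for my real 2D encoder-decoder; any nn.Module works the same way.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 1))

# After training on FLAIR, save the weights...
torch.save(model.state_dict(), "flair_pretrained.pt")

# ...and later load them into the same architecture as the starting point for T1 training.
model.load_state_dict(torch.load("flair_pretrained.pt"))
```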