warnings.warn(...) prints after every epoch

Hi, when I train my network (self.network) with the following scheme, PyTorch prints a warning, warnings.warn(... The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 ...), at the beginning of the training loop, which is totally fine with me.
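For context, the warning seems to come from torch.nn.functional.interpolate being called somewhere in the network with a non-integer scale_factor. As far as I understand, spelling out recompute_scale_factor makes it go away; the shapes and scale factor below are just placeholders, not my real values:

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)

# A float scale_factor without recompute_scale_factor triggers the 1.6.0 warning.
y = F.interpolate(x, scale_factor=2.0, mode='bilinear', align_corners=False)

# Passing recompute_scale_factor explicitly silences it
# (True restores the pre-1.6 behavior, False keeps the new one).
y = F.interpolate(x, scale_factor=2.0, mode='bilinear', align_corners=False,
                  recompute_scale_factor=False)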

I have two versions of self.network.
For one version of self.network, the warning prints only at the very beginning of the training loop.

But strangely, for the other version of self.network, the warning is additionally printed at the beginning of every validation loop during training.

The differences between the two versions are the network architecture and an additional input that I pass to the second version of self.network.

The main reason I’m asking this question is that the second version runs out of GPU memory during training. (If I only run validation, the two networks use almost the same amount of memory and there is no out-of-memory problem.) I strongly suspect this is somehow related to why the warning is printed at every validation loop.
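In case it helps, this is the kind of helper I call after each self.iteration(epoch, state) call to compare the peak GPU memory of the two versions (the name log_peak_memory is just for illustration, it is not part of my actual code):

import torch

def log_peak_memory(tag, device=0):
    # Print the peak allocation since the last reset, then reset the counter,
    # so the train and valid passes can be measured separately.
    peak_mb = torch.cuda.max_memory_allocated(device) / 1024 ** 2
    print(f'{tag}: peak allocated {peak_mb:.1f} MiB')
    torch.cuda.reset_peak_memory_stats(device)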

It would be very helpful if you could share any clues about this problem.
Thank you!

self.states = ['train', 'valid']

for epoch in self.epoch_range:
    for state in self.states:
        if state == 'train':
            self.network.train()
            self.iteration(epoch, state)
        elif state == 'valid':
            self.network.eval()
            with torch.no_grad():
                self.iteration(epoch, state)

def iteration(self, epoch, state):
    is_train = (state == 'train')
    # pick the data loader that matches the current state
    data_loader = self.data_loader_train if is_train else self.data_loader_eval

    for inputs in data_loader:
        result = self.network(inputs, is_train)