NaN in input tensors

In PyTorch 1.7.1, ReLU does not seem to behave like this: as the output below shows, the NaNs produced by the conv layer pass through the ReLU unchanged rather than being zeroed out.

output:

torch version: 1.7.1
tensor([[    nan,     nan, -0.2346],
        [    nan,     nan,  1.3086],
        [-0.0514, -0.6495, -0.5092]], grad_fn=<SliceBackward>)
tensor([[   nan,    nan, 0.0000],
        [   nan,    nan, 1.3086],
        [0.0000, 0.0000, 0.0000]], grad_fn=<SliceBackward>)
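
For reference, the same propagation reproduces without the conv layer. A minimal check (my own snippet, separate from the repro below), using nothing beyond torch.relu:

import torch

# ReLU in 1.7.1 propagates NaN: NaN in, NaN out.
t = torch.tensor([float('nan'), -1.0, 2.0])
print(torch.relu(t))  # prints tensor([nan, 0., 2.])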

full repro code:

import random

import torch
import numpy as np


# Seed everything for a reproducible run.
seed = 0
torch.manual_seed(seed)
np.random.seed(seed)
random.seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)

print('torch version: {}'.format(torch.__version__))

# Inject a NaN into the input: log(-1) evaluates to NaN.
x = torch.randn(1, 3, 10, 10)
x[0, 0, 0, 0] = torch.log(torch.tensor([-1.]))

# The conv spreads the NaN across its receptive field.
m = torch.nn.Conv2d(3, 6, 3, 1, 1)
output = m(x)
print(output[0, 0, 0:3, 0:3])


# The NaNs survive the ReLU; only ordinary negative values are zeroed.
r = torch.nn.ReLU()
output = r(output)
print(output[0, 0, 0:3, 0:3])
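
If it helps, torch.isnan confirms that the ReLU neither removes nor adds NaNs here; appending something like this to the script above (reusing m, x, and r from the repro):

# Count NaNs going into and coming out of the ReLU; the counts match.
pre = m(x)
post = r(pre)
print(torch.isnan(pre).sum().item(), torch.isnan(post).sum().item())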

This behavior seems to have been changed in an earlier version.
Thanks.