Difference between the objects created by torch.tensor() and torch.Tensor()

I tried using torch.tensor() and torch.Tensor() in the __getitem__ method of a torch.utils.data.Dataset subclass. I observed a severe memory leak when using torch.tensor(), and the program eventually crashes with an OOM error, but everything is fine when using torch.Tensor().
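Here is a minimal sketch of the pattern I mean (not my actual pipeline; the class and attribute names are just placeholders):

```python
import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, data):
        # data is assumed to be a list of Python lists / NumPy arrays
        self.data = data

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        # Variant 1: memory usage keeps growing in my setup and eventually OOMs
        sample = torch.tensor(self.data[idx])
        # Variant 2: memory usage stays flat
        # sample = torch.Tensor(self.data[idx])
        return sample
```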

I know that torch.tensor() infers the dtype from the data automatically, while torch.Tensor() only creates tensors of the default (float32) dtype. But do the two functions allocate the resulting tensors in exactly the same way? Could it be that the objects created by torch.tensor() are never freed?
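For reference, this is the dtype difference I am referring to:

```python
import torch

a = torch.tensor([1, 2, 3])  # dtype inferred from the data
b = torch.Tensor([1, 2, 3])  # default floating-point dtype

print(a.dtype)  # torch.int64
print(b.dtype)  # torch.float32
```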

Could you post some code so that we can have a look?
The issue seems quite strange, and in general you should use torch.tensor rather than torch.Tensor.