I tried using torch.tensor() and torch.Tensor() in the __getitem__ method of a torch.utils.data.Dataset subclass. I observed a severe memory leak when using torch.tensor(), and the program eventually crashes with an OOM error, but everything is fine when using torch.Tensor().
I know that torch.tensor() infers the dtype automatically, while torch.Tensor() can only create float32 tensors. But do the two functions allocate tensors in exactly the same way? Is it possible that the objects created by torch.tensor() are never freed?
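For reference, here is a minimal sketch of the kind of Dataset I mean (the ToyDataset class and its contents are made up for illustration; the real dataset is larger). It shows the two constructors side by side and the dtype difference between them:

```python
import torch
from torch.utils.data import Dataset

class ToyDataset(Dataset):
    """Toy dataset illustrating the two tensor constructors in __getitem__."""
    def __init__(self, data):
        self.data = data  # e.g. a list of Python lists of ints

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        # Variant 1: torch.tensor() infers dtype (int64 for Python ints)
        a = torch.tensor(self.data[idx])
        # Variant 2: torch.Tensor() always produces float32
        b = torch.Tensor(self.data[idx])
        return a, b

ds = ToyDataset([[1, 2, 3], [4, 5, 6]])
a, b = ds[0]
print(a.dtype)  # torch.int64
print(b.dtype)  # torch.float32
```

In my real code, only one of the two calls is used at a time; the leak appears with the torch.tensor() variant.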