How to prevent a leak when calling .cuda() on a temporary view and replacing the original variable?

import torch

n = 10
x = torch.zeros(n)
x = x[:n//2].cuda()

At this point we appear to have leaked the original 10-element buffer on the CPU.

I am profiling with this gist: A simple Pytorch memory usages profiler · GitHub
It traverses the objects reachable through gc and checks which device each one lives on.
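For context, the core idea of such a profiler can be sketched without torch at all. Here `FakeTensor` is a hypothetical stand-in I am adding for illustration; the real gist inspects actual torch.Tensor objects and their storages:

```python
import gc

class FakeTensor:
    """Hypothetical stand-in for torch.Tensor, used only for this sketch."""
    def __init__(self, shape, device):
        self.shape = shape
        self.device = device

    def numel(self):
        n = 1
        for d in self.shape:
            n *= d
        return n

def mem_report():
    """Walk every object the garbage collector tracks and tally
    'tensor' elements per device, mirroring what the gist does."""
    totals = {}
    for obj in gc.get_objects():
        if isinstance(obj, FakeTensor):
            totals[obj.device] = totals.get(obj.device, 0) + obj.numel()
    return totals

cpu_buf = FakeTensor((10,), "cpu")
gpu_buf = FakeTensor((5,), "cuda")
print(mem_report())  # both buffers show up while they are referenced

del cpu_buf          # drop the only reference to the CPU stand-in
gc.collect()
print(mem_report())  # the CPU entry is gone
```

The key point is that such a report only sees objects that are still referenced from somewhere, so a "leak" it shows always means some live reference is keeping the storage alive.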

x now points to a buffer on the GPU, yet the old CPU storage still shows up:


In [5]: mem_report()                                               
=================================================================
Element type    Size                    Used MEM(MBytes)
Storage on GPU
-----------------------------------------------------------------
Tensor          (5,)            0.00
-----------------------------------------------------------------
Total Tensors: 5        Used Memory Space: 0.00 MBytes
-----------------------------------------------------------------
Storage on CPU
-----------------------------------------------------------------
Tensor          (10,)           0.00
-----------------------------------------------------------------
Total Tensors: 10       Used Memory Space: 0.00 MBytes
-----------------------------------------------------------------
=================================================================

I cannot reproduce this behavior. I see:
Before running your code:

=================================================================
Element type	Size			Used MEM(MBytes)
Storage on GPU
-----------------------------------------------------------------
-----------------------------------------------------------------
Total Tensors: 0 	Used Memory Space: 0.00 MBytes
-----------------------------------------------------------------
Storage on CPU
-----------------------------------------------------------------
-----------------------------------------------------------------
Total Tensors: 0 	Used Memory Space: 0.00 MBytes
-----------------------------------------------------------------
=================================================================

After:

=================================================================
Element type	Size			Used MEM(MBytes)
Storage on GPU
-----------------------------------------------------------------
Tensor		(5,)		0.00
-----------------------------------------------------------------
Total Tensors: 5 	Used Memory Space: 0.00 MBytes
-----------------------------------------------------------------
Storage on CPU
-----------------------------------------------------------------
-----------------------------------------------------------------
Total Tensors: 0 	Used Memory Space: 0.00 MBytes
-----------------------------------------------------------------
=================================================================
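For what it's worth, this is the expected outcome: `x[:n//2]` is a view sharing the original CPU storage, and `.cuda()` produces a fresh copy on the GPU, so rebinding `x` drops the last reference to the CPU tensor and its storage is released. A rough torch-free sketch of the same view/copy semantics, using `memoryview` as a stand-in for storage sharing (an assumption for illustration, not torch internals):

```python
buf = bytearray(10)          # plays the role of the 10-element CPU storage
view = memoryview(buf)[:5]   # like x[:n//2]: a view, no copy is made
copy = bytes(view)           # like .cuda(): an independent copy elsewhere

view.release()               # drop the view of the original buffer
del buf                      # rebind/drop the original name
# Only 'copy' survives; nothing keeps the original 10-byte buffer alive.
```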

Sorry, I think I was in an interactive IPython session that still held another reference to the old variable (IPython caches cell outputs in Out and _, which can keep tensors alive).
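The effect of such a stray reference can be demonstrated with plain Python. `Buffer` below is a hypothetical stand-in for the CPU tensor, and the dict simulates IPython's output cache; a weakref lets us observe the object without keeping it alive:

```python
import gc
import weakref

class Buffer:
    """Hypothetical stand-in for a CPU tensor and its storage."""
    def __init__(self, n):
        self.n = n

x = Buffer(10)
probe = weakref.ref(x)   # observe the buffer without holding a reference

# Simulate IPython's output cache (Out / _) keeping an extra reference:
out_cache = {1: x}

x = Buffer(5)            # rebind the name, like x = x[:n//2].cuda()
gc.collect()
assert probe() is not None   # still alive: the cache references it

out_cache.clear()        # drop the stray reference (e.g. %reset in IPython)
gc.collect()
assert probe() is None   # now the old buffer is actually freed
```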

Thank you for double checking this. Sorry for the false alarm.