PyTorch doesn't free GPU memory if it gets aborted due to an out-of-memory error

I have the same problem.
First, I open a Python shell and type `import torch`.
At the same time, I open another SSH session and run `watch nvidia-smi`.
Second, I return to the first Python shell, create a tensor of shape (27, 3, 480, 270), and move it to CUDA:
`input = torch.rand(27, 3, 480, 270).cuda()`
The nvidia-smi output changes and the CUDA memory usage increases.
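For reference, here is the same reproduction as a single standalone script (a minimal sketch; I renamed the variable so it doesn't shadow Python's built-in `input`):

```python
import time

import torch

# Same allocation as in the interactive session: a float tensor of
# shape (27, 3, 480, 270) moved onto the GPU.
x = torch.rand(27, 3, 480, 270).cuda()

# Keep the process alive so the allocation stays visible in nvidia-smi;
# pressing Ctrl+Z during this sleep reproduces the issue.
time.sleep(120)
```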
Third, I press Ctrl+Z in the Python shell.
The CUDA memory is not freed automatically; the nvidia-smi output shows the memory is still in use.
Note that Ctrl+Z does not actually quit the shell: it sends SIGTSTP, which only suspends the process. The suspended process is still alive (you can see it with `jobs` and resume it with `fg`), so it keeps holding its CUDA allocations.

The workaround is to kill the suspended process by hand with `kill -9 <pid>` (the PID is listed in the nvidia-smi process table). Once the process is really dead, the driver frees its CUDA memory.
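If you are still inside a live interpreter (i.e. you haven't suspended it yet), you can also release the memory explicitly instead of killing the process. A minimal sketch using the public `torch.cuda` API:

```python
import torch

x = torch.rand(27, 3, 480, 270).cuda()

# Drop the last Python reference; the caching allocator can now reuse
# the block, but nvidia-smi will still show it as allocated.
del x

# Hand the cached blocks back to the driver so nvidia-smi reflects
# the release as well.
torch.cuda.empty_cache()
```

Exiting the shell normally (`exit()` or Ctrl+D) also frees everything, because the driver reclaims all of a process's allocations when the process terminates.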

I use Ubuntu 16.04, Python 3.5, and PyTorch 1.0.
Although the problem is solved, it's uncomfortable that the CUDA memory cannot be freed automatically.