1 MB of GPU memory can't be released!

While running my FGSM code, I find that about 1 MB of GPU memory isn't released in every loop iteration, so the available GPU memory keeps decreasing. My code is here:
import numpy as np
import torch.nn.functional as F

def gen_adv(model, device, data, target, epsilon):
    data, target = data.to(device), target.to(device)
    data.requires_grad = True

    # 1067 MB / 1068 MB
    output = model(data)
    # 1202 MB / 1203 MB
    init_pred = output.max(1, keepdim=True)[1]
    if init_pred.item() != target.item():
        return
    # 1202 MB / 1203 MB
    loss = F.nll_loss(output, target)
    # 1202 MB / 1203 MB
    model.zero_grad()
    # 1202 MB / 1203 MB
    loss.backward()
    # 1068 MB / 1069 MB
    data_grad = data.grad.data
    # 1068 MB / 1069 MB
    data = unnormalized_show(data)
    # 1068 MB / 1069 MB
    perturbed_data = fgsm_attack(data, epsilon, data_grad)
    perturbed_data = perturbed_data.cpu().detach().numpy()
    perturbed_data = perturbed_data.reshape(3, 224, 224)
    perturbed_data = np.transpose(perturbed_data, (1, 2, 0))
    return perturbed_data

The comments show the GPU memory in use at that point, on the first and second call respectively. After the first execution, the used GPU memory increased from 1067 MB to 1068 MB; after the second execution, it increased from 1068 MB to 1069 MB. This causes my program to be interrupted, since the GPU memory runs out quickly. QAQ
How can I resolve this problem? Help me QAQ

Are you storing some tensors, e.g. in a list, without detaching them?
Also, could you use:
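For reference, this is the pattern that typically causes such a leak: appending a loss or output tensor to a Python list keeps the entire computation graph (and any GPU activations it references) alive. A minimal sketch with a toy model, just for illustration:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
losses_leaky, losses_ok = [], []

for _ in range(3):
    out = model(torch.randn(1, 4))
    loss = out.sum()
    loss.backward()

    # Leaky: `loss` still holds a reference to its computation
    # graph, so everything the graph touches stays allocated.
    losses_leaky.append(loss)

    # Safe: detach() (or .item() for a scalar) drops the graph
    # reference, so the graph memory can be freed.
    losses_ok.append(loss.detach())

print(losses_leaky[0].grad_fn is None)  # False - graph still referenced
print(losses_ok[0].grad_fn is None)     # True - graph released
```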

with torch.no_grad():
    data_grad = data.grad

instead of accessing the .data attribute, as .data might yield unwanted side effects.
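Applied to the snippet above, the gradient read-out and perturbation step could look like this. This is only a sketch: the `fgsm_attack` here is assumed to be the usual sign-of-gradient update, and the input tensors are toy stand-ins for the ones in the question.

```python
import torch

def fgsm_attack(image, epsilon, data_grad):
    # Standard FGSM step: move the input in the direction of the
    # gradient's sign, then clamp back into a valid range.
    return torch.clamp(image + epsilon * data_grad.sign(), 0, 1)

# Toy stand-ins for the tensors in the question.
data = torch.rand(1, 3, 4, 4, requires_grad=True)
loss = (data * 2).sum()
loss.backward()

with torch.no_grad():
    data_grad = data.grad            # read the gradient without tracking it
    perturbed = fgsm_attack(data, 0.1, data_grad)

data.grad = None  # free the gradient buffer so it isn't retained across calls

print(perturbed.requires_grad)  # False - created inside no_grad
```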