The impact of 'del' during training

Due to limited GPU memory, I want to use 'del' to release the memory storing a variable and avoid the 'CUDA out of memory' error. I wonder whether this operation could cause some undesired consequence, like stopping the backpropagation of the corresponding variable?

I would claim it depends on what exactly you are deleting, as e.g. deleting a layer that the forward pass still needs will of course raise an error:

import torch
from torchvision import models

model = models.resnet152()
x = torch.randn(1, 3, 224, 224)

# the intact model runs forward and backward as expected
out = model(x)
out.mean().backward()

# delete a submodule that the forward pass still uses
del model.layer4[0].bn1

out = model(x)
> AttributeError: 'Bottleneck' object has no attribute 'bn1'
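By contrast, deleting a local variable that merely references an intermediate tensor does not stop backpropagation, since autograd holds its own references to the tensors it needs inside the computation graph. A minimal sketch illustrating this (the small model and variable names are just for illustration):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 1))
x = torch.randn(4, 10)

hidden = model[0](x)                  # intermediate activation
out = model[2](model[1](hidden))

del hidden                            # removes only the Python name
out.mean().backward()                 # still works; the graph kept its references

Note that 'del' only drops the Python reference; the memory is actually released once no reference at all (including the one held by the autograd graph) points to the tensor.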

Generally, I would recommend utilizing Python's function scoping, i.e. using separate methods to train() and evaluate() the model, since all intermediate objects will be freed once the method is exited.
Additional memory is often used if the validation loop lives in the same method (e.g. main()) as the training loop, which e.g. keeps the training input data as well as the validation data alive at the same time.
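A minimal sketch of this pattern (the function and variable names are placeholders, assuming the model, loaders, optimizer, and criterion are created elsewhere):

import torch

def train(model, loader, optimizer, criterion, device):
    model.train()
    for data, target in loader:
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        loss = criterion(model(data), target)
        loss.backward()
        optimizer.step()
    # data, target, and loss go out of scope here and can be freed

def evaluate(model, loader, criterion, device):
    model.eval()
    total_loss = 0.0
    with torch.no_grad():  # no graph is stored, so activations are not kept alive
        for data, target in loader:
            data, target = data.to(device), target.to(device)
            total_loss += criterion(model(data), target).item()
    return total_loss / len(loader)

Calling these from main() instead of inlining both loops there avoids keeping the last training batch alive while the validation data is being processed.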