CUDA out of Memory if GPU is not "warmed up"?

I have a simple script:

import torch

LOSS_WEIGHTS = [1, 2, 3]
LOSS_WEIGHTS = torch.Tensor(LOSS_WEIGHTS)
LOSS_WEIGHTS = LOSS_WEIGHTS.to(0)  # move to CUDA device 0; this is the line that raises the error

If I start the script after the computer has been idle, I often get “CUDA error: out of memory”.
The error somehow always goes away after I relaunch the script a few times. Does anyone know what I can do to prevent this error? Am I supposed to initialise my CUDA device before starting the script?
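For what it's worth, the only workaround I can think of is to force CUDA initialisation and retry the transfer, roughly like the sketch below. The retry count, the delay, and the idea that torch.cuda.init() helps here are just guesses on my part:

import time
import torch

# Hypothetical warm-up/retry wrapper (not something I've confirmed fixes the issue).
def move_to_gpu_with_retry(tensor, device=0, retries=5, delay=1.0):
    torch.cuda.init()  # eagerly create the CUDA context before the first transfer
    for attempt in range(retries):
        try:
            return tensor.to(device)
        except RuntimeError as err:  # CUDA OOM surfaces as a RuntimeError
            if "out of memory" not in str(err) or attempt == retries - 1:
                raise
            time.sleep(delay)  # wait a bit in case the GPU is still "waking up"

LOSS_WEIGHTS = move_to_gpu_with_retry(torch.Tensor([1, 2, 3]))

Is something like this the intended approach, or is there a proper way to warm up the device?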

PyTorch 1.2.0 (tried several versions)
CUDA 10.1 (also tried CUDA 9)
NVIDIA driver 430
Hardware: 1 x GTX 1070
Ubuntu 18.04