How to free a PyTorch transformers model from GPU memory

    from transformers import CTRLTokenizer, CTRLLMHeadModel, pipeline

    # Load the CTRL tokenizer and PyTorch model from the local cache
    tokenizer_ctrl = CTRLTokenizer.from_pretrained('ctrl', cache_dir='./cache', local_files_only=True)
    model_ctrl = CTRLLMHeadModel.from_pretrained('ctrl', cache_dir='./cache', local_files_only=True)
    gen_nlp = pipeline("text-generation", model=model_ctrl, tokenizer=tokenizer_ctrl, device=args.gpu, return_full_text=False)

Hello, I want to remove the model `model_ctrl` from GPU memory after using it, to free the memory it occupies.

What is the best way to do this?
Thanks.

`del` all objects related to the model, i.e. the model itself and potentially any optimizers, which could hold references to the parameters. If you then want to clear the cached memory so that other applications can use it, call `torch.cuda.empty_cache()`.
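
As a minimal sketch of that approach, assuming the `model_ctrl` and `gen_nlp` objects from the question (the pipeline also holds a reference to the model, so it has to be deleted too):

    import gc
    import torch

    # Drop every reference to the model; the pipeline wraps the model,
    # so deleting the model alone would not release its parameters.
    del gen_nlp
    del model_ctrl
    # If you created an optimizer over the model's parameters, delete it as well:
    # del optimizer

    gc.collect()              # ensure Python has actually released the objects
    torch.cuda.empty_cache()  # return cached GPU memory to the driver

Note that `torch.cuda.empty_cache()` only releases memory that is no longer referenced by any tensor; it will not free anything while `model_ctrl`, the pipeline, or an optimizer over its parameters is still alive.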