Training while running inference on GPU

Hello.

I have an RL agent which I would like to train on my GPU. I'd like to split training and acting into separate processes, so that the agent can train with experience replay while it continues to act in its environment. However, torch.multiprocessing only seems to let me do this on the CPU. I have tried the 'spawn' start method as the PyTorch documentation suggests, but this causes my script to try to pickle the lambda functions inside my nn.Modules, which are unpicklable. Is there any way around this?
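For reference, here is a stripped-down sketch of the pattern that fails for me. `PolicyNet` and `learner` are simplified stand-ins for my actual code, and the lambda is just an example of the kind my modules contain:

```python
import torch
import torch.nn as nn
import torch.multiprocessing as mp

class PolicyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
        # A lambda attribute like this is what breaks pickling under 'spawn'
        self.scale = lambda x: x * 0.5

    def forward(self, x):
        return self.scale(self.body(x))

def learner(model):
    # Stand-in for my training loop (experience-replay updates)
    opt = torch.optim.Adam(model.parameters())
    ...

if __name__ == "__main__":
    mp.set_start_method("spawn")  # required for CUDA, per the docs
    model = PolicyNet().to("cuda")
    model.share_memory()
    p = mp.Process(target=learner, args=(model,))
    p.start()  # fails here: the lambda inside PolicyNet can't be pickled
    # the main process would keep acting in the environment here
    p.join()
```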

Thanks.