Parallelizing style transfer code across multiple GPUs

The style transfer demo code at Neural Transfer Using PyTorch — PyTorch Tutorials 2.0.1+cu117 documentation works fine for me, but I’m wondering what happens if I have a high-resolution image that cannot fit on a single GPU. I don’t think DataParallel works for this situation, since it replicates a model across devices rather than splitting a single tensor. Is there any way to store the image tensor across multiple GPUs and still have the gradients come back together?
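Roughly, what I have in mind is something like the sketch below (names and shapes are made up, and it falls back to CPU when two GPUs aren’t available): store the image as two halves on two devices, each a leaf tensor being optimized, and rely on autograd flowing through cross-device `.to()` moves so that one `backward()` populates gradients on both halves.

```python
import torch

# Hypothetical two-device setup; falls back to CPU when fewer than 2 GPUs exist.
two_gpus = torch.cuda.device_count() >= 2
dev0 = torch.device("cuda:0") if two_gpus else torch.device("cpu")
dev1 = torch.device("cuda:1") if two_gpus else torch.device("cpu")

# Store the image as two half-tensors, one per device, each a leaf we optimize
# (in real style transfer these would be halves of the input image).
top = torch.rand(1, 3, 128, 256, device=dev0, requires_grad=True)
bottom = torch.rand(1, 3, 128, 256, device=dev1, requires_grad=True)

# Per-half computation (e.g. content/style losses) stays on its own device;
# moving the partial loss to one device with .to() keeps the autograd graph
# connected, so a single backward() sends gradients to both halves.
loss = top.pow(2).mean() + bottom.pow(2).mean().to(dev0)
loss.backward()

print(top.grad is not None, bottom.grad is not None)
```

An optimizer can then take both halves as parameters, e.g. `torch.optim.LBFGS([top, bottom])`, the same way the tutorial optimizes the single input image. What I’m unsure about is whether this works cleanly with a VGG feature extractor whose activations also don’t fit on one device.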