NN-generated tensor to image bytes

Hello all.

I have some questions about my approach to generating images from a neural network in real time. It works quite well (on a portable MacBook, at least), but 2 or 3 seconds after starting, the fan speeds up a lot and I start getting frame drops. The process is:

Step 1) I have a tensor with pixel values in the “result” variable, with shape torch.Size([1, 1, 310, 310]) (310x310 being the image size) and with grad_fn=<PermuteBackward>
Step 2) Then I build a grid with x = torchvision.utils.make_grid(result)
Step 3) Then I create an image with img = F.to_pil_image(x)
Step 4) Then I convert everything to bytes with data = img.tobytes("raw", "RGBX", 0, -1) to be displayed (a consolidated snippet of these four steps is just below)
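
For reference, here is the whole pipeline as a minimal self-contained snippet (a random tensor stands in for my real network output, and the model/GUI code is omitted). The .detach() is my assumption of a cheap win: the grad_fn above means the tensor is still attached to the autograd graph, so detaching it (or running inference under torch.no_grad()) should keep gradient bookkeeping out of the display path:

```python
import torch
import torchvision
import torchvision.transforms.functional as F

# Stand-in for the real network output: floats in [0, 1],
# shape (1, 1, 310, 310); the no-op permute mimics grad_fn=<PermuteBackward>.
result = torch.rand(1, 1, 310, 310, requires_grad=True).permute(0, 1, 2, 3)

x = torchvision.utils.make_grid(result.detach())  # detach first; make_grid also repeats 1 channel to 3
img = F.to_pil_image(x)                           # RGB PIL image
data = img.tobytes("raw", "RGBX", 0, -1)          # packed RGBX bytes, bottom row first
```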

And I do this in real time to display the output in a rendering system (for the moment, a GUI). I confess this seems like a rather intensive way to produce frames one by one while keeping a decent framerate without the computer starting to complain, so I sketched a possible alternative below.
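
One idea I am considering (untested, just a sketch with a hypothetical tensor_to_rgbx_bytes helper) is to skip make_grid and PIL entirely, since there is only one image and no grid is really needed, and build the RGBX buffer straight from the tensor with NumPy. The row flip at the end reproduces the bottom-up orientation of tobytes(..., 0, -1):

```python
import numpy as np
import torch

def tensor_to_rgbx_bytes(result: torch.Tensor) -> bytes:
    """(1, 1, H, W) float tensor in [0, 1] -> packed bottom-up RGBX bytes."""
    frame = result.detach().squeeze()                  # (H, W), off the autograd graph
    frame = (frame.clamp(0.0, 1.0) * 255).to(torch.uint8)
    gray = frame.cpu().numpy()                         # (H, W) uint8
    rgbx = np.empty((*gray.shape, 4), dtype=np.uint8)  # (H, W, 4) output buffer
    rgbx[..., :3] = gray[..., None]                    # replicate the single channel to R, G, B
    rgbx[..., 3] = 255                                 # the "X" padding byte
    return rgbx[::-1].tobytes()                        # flip rows to match orientation -1

# usage
result = torch.rand(1, 1, 310, 310)
data = tensor_to_rgbx_bytes(result)
```

That would avoid two intermediate copies per frame (the grid tensor and the PIL image), though I am not sure how much it actually buys in practice.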

Any help would be appreciated.
Thanks!