To create polygonal masks I'm currently drawing them with Pillow's ImageDraw, then getting the corresponding numpy arrays and uploading them to the GPU. But I'm thinking about creating them directly on the GPU using OpenGL, via, say, pyglet or glumpy. I found elsewhere how to pass PyTorch tensors to CuPy using data_ptr() and the current CUDA stream, and I wonder whether something along those lines could be used to "draw" into a PyTorch tensor using OpenGL. Does anyone know how to do that?
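For context, the CPU-side pipeline I'm describing is essentially this (the mask size and polygon here are just illustrative):

```python
import numpy as np
from PIL import Image, ImageDraw

# Draw a filled polygon into an 8-bit grayscale mask image (CPU side).
w, h = 64, 64
mask = Image.new("L", (w, h), 0)
# Example triangle; the real polygons come from elsewhere.
ImageDraw.Draw(mask).polygon([(10, 10), (50, 12), (30, 55)], fill=255)

# Convert to a numpy array, then upload to the GPU.
mask_np = np.array(mask)  # shape (h, w), dtype uint8
# mask_t = torch.from_numpy(mask_np).cuda()  # the host->device copy I'd like to avoid
```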
You can, but it's going to require a bunch of work, including C/C++ code.

- Render to a texture using pyglet or glumpy, and get the texture ID (an int).
- Use the CUDA OpenGL interop API to register the texture: https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__OPENGL.html
- Use the CUDA Graphics API to map the resource and bind it to a pointer: https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__INTEROP.html#group__CUDART__INTEROP
- Either cudaMemcpy from that pointer into a Tensor's data_ptr(), or use torch::from_blob to wrap the pointer as a Tensor.

You'll also need to call the appropriate unregister functions on destruction, and make sure the texture's format matches the Tensor's (e.g., float32 vs. int8).
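For orientation, the sequence above can be sketched from Python with ctypes against the CUDA runtime. The function names are real CUDA runtime entry points, but treat this as an untested sketch: it needs a live OpenGL context and a CUDA-capable setup, and all error checking is omitted.

```python
import ctypes

# Constants from the OpenGL / CUDA runtime headers.
GL_TEXTURE_2D = 0x0DE1
cudaGraphicsRegisterFlagsReadOnly = 0x01
cudaMemcpyDeviceToDevice = 3

def copy_gl_texture_to_ptr(texture_id, dst_ptr, height, width_bytes):
    """Copy a GL texture into device memory at dst_ptr (e.g. tensor.data_ptr()).

    Sketch only: requires a live GL context; error codes are ignored.
    """
    cudart = ctypes.CDLL("libcudart.so")

    # Register the GL texture with CUDA.
    resource = ctypes.c_void_p()
    cudart.cudaGraphicsGLRegisterImage(
        ctypes.byref(resource), ctypes.c_uint(texture_id),
        ctypes.c_uint(GL_TEXTURE_2D),
        ctypes.c_uint(cudaGraphicsRegisterFlagsReadOnly))

    # Map it and get the underlying cudaArray.
    cudart.cudaGraphicsMapResources(1, ctypes.byref(resource), None)
    cuda_array = ctypes.c_void_p()
    cudart.cudaGraphicsSubResourceGetMappedArray(
        ctypes.byref(cuda_array), resource, 0, 0)

    # Device-to-device copy into the tensor's memory.
    cudart.cudaMemcpy2DFromArray(
        ctypes.c_void_p(dst_ptr), ctypes.c_size_t(width_bytes),
        cuda_array, ctypes.c_size_t(0), ctypes.c_size_t(0),
        ctypes.c_size_t(width_bytes), ctypes.c_size_t(height),
        ctypes.c_uint(cudaMemcpyDeviceToDevice))

    # Cleanup: unmap and unregister.
    cudart.cudaGraphicsUnmapResources(1, ctypes.byref(resource), None)
    cudart.cudaGraphicsUnregisterResource(resource)
```

In C++ you could skip the final copy and wrap the mapped pointer directly with torch::from_blob instead.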
Could one avoid C/C++ coding for this with PyCUDA?
I don’t think PyCUDA exposes the necessary functions. You might be able to use the Python ctypes
library to call the functions, but it’s probably no easier than writing the actual C/C++ code.
You might be interested in this gist, which uses a CUDA memcpy via PyCUDA to go the other direction. Presumably you could use the same technique to go from a texture to PyTorch.
I've been following this thread and similar ones asking about device-to-device memory copies and interoperability between OpenGL and CUDA. I've also seen the great example @darknoon shared and was able to get it to run.
What I'm struggling to understand is: given an OpenGL texture or an existing CUDA array, at which point do you take into account the data format the model expects and how the tensor needs to be constructed?
For example, YOLOv3, which I'm trying to get working in a GPU pipeline, expects an NCHW data format, but I don't know how to find out the data format of my current CUDA surface or OpenGL texture, how to copy it over into the correct format, or whether that even matters.
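To make the format question concrete, here is what I believe the conversion looks like on the CPU with numpy, starting from an RGBA HWC uint8 frame (the layout you typically read back from a GL texture) and ending with the NCHW float batch the model wants. The 416×416 size is just illustrative:

```python
import numpy as np

# Stand-in for a frame read back from a texture: H x W x 4, uint8 RGBA.
frame = np.zeros((416, 416, 4), dtype=np.uint8)

rgb = frame[..., :3]                  # drop the alpha channel -> H x W x 3
chw = rgb.transpose(2, 0, 1)          # HWC -> CHW
nchw = chw[np.newaxis].astype(np.float32) / 255.0  # add batch dim, scale to [0, 1]
# nchw.shape == (1, 3, 416, 416)
```

Presumably the same permute/cast would need to happen on the GPU side for a fully device-resident pipeline.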
Is there any update on this?
I'm working on deep reinforcement learning with the PettingZoo environments, specifically the MPE particle system. I noticed that the system's rendering function uses pyglet and OpenGL: it creates the buffer image with OpenGL and then copies it to a numpy array, which is pretty slow. I was wondering if you could grab the array directly from the GPU into a torch tensor. Is that possible without touching C++ code?
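For reference, the slow readback path I mean looks something like this (pyglet's buffer-manager API; an untested sketch that needs a live pyglet window / GL context):

```python
import numpy as np

def framebuffer_to_numpy():
    """Read the current pyglet color buffer back to the CPU (the slow step).

    Sketch only: requires an active pyglet window / GL context.
    """
    import pyglet  # imported here so the sketch is self-contained

    buf = pyglet.image.get_buffer_manager().get_color_buffer()
    img = buf.get_image_data()
    data = img.get_data("RGBA", img.width * 4)  # GPU -> CPU copy
    frame = np.frombuffer(data, dtype=np.uint8)
    return frame.reshape(img.height, img.width, 4)
```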