Bytearray decode on the GPU

Hello, it’s my first time here. I’m fairly new to the library, and I wonder whether I can run an operation on the GPU.

I have a matrix of paths, stored as raw bytes, and I want to convert them to strings on the GPU.

Is there a simple way to run the following operation in PyTorch, without having to copy the data to the CPU first?

```python
import numpy as np

# Current workaround: copy the tensor to the CPU and decode each row there.
strings = np.apply_along_axis(lambda x: x.tobytes().decode(), 1, input.cpu().numpy())
```

where input is a 2-D torch.Tensor with dtype uint8.
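
For reference, the input looks something like this (the paths here are made up, and I pad rows with zero bytes so they all have the same length):

```python
import torch

# Example data: the UTF-8 bytes of each path, zero-padded to equal length.
paths = [b"/data/img_001.png", b"/data/img_2.png"]
width = max(len(p) for p in paths)
input = torch.tensor(
    [list(p.ljust(width, b"\x00")) for p in paths],
    dtype=torch.uint8,
    device="cuda",  # assumes a CUDA device is available
)
```

With zero padding, the decoded strings keep trailing "\x00" characters, so the lambda above probably also needs an .rstrip("\x00").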

I tried adapting this snippet to CuPy, but it raised an error saying that string CuPy arrays are not yet implemented.
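
The CuPy version I tried looked roughly like this:

```python
import cupy as cp

# Zero-copy view of the CUDA tensor (CuPy accepts it via __cuda_array_interface__).
paths_cp = cp.asarray(input)

# Same row-wise decode. This is what raises the error: the lambda returns
# Python str objects, and CuPy has no string dtype to hold the result.
# (Note that .tobytes() copies each row back to the host anyway.)
strings = cp.apply_along_axis(lambda x: x.tobytes().decode(), 1, paths_cp)
```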