I compared different methods of loading an image from a URL, and using IPython.display.Image(url) is the fastest. So is it possible to work with IPython.core.display.Image in a Dataset?
import time

import IPython.display
import torch
from torchvision.transforms import Resize

start = time.time()
a = IPython.display.Image(url)
a = Resize(512)(a)
stop = time.time()
print("duration: {:.0f} min {:.2f} sec".format((stop - start) // 60, (stop - start) % 60))
I got this error: img should be PIL Image. Got <class 'IPython.core.display.Image'>
TypeError Traceback (most recent call last)
<ipython-input-79-f4dfad644a8d> in <module>()
3 import torch
4 start=time.time()
----> 5 a = torch.Tensor(IPython.display.Image(url))
6
7 a = Resize(512)(a)
TypeError: new(): data must be a sequence (got Image)
I think it is preferred to use torch.tensor instead of calling torch.Tensor directly. Though I'm not familiar with the IPython image class, there must be some way to convert it into a 2- or 3-D array that can then be used to create tensors. PS: have a look at the docs before asking.
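A minimal sketch of that path, assuming the image has already been decoded into a PIL.Image (the placeholder image below stands in for the real one): a PIL image converts to a 3-D (H, W, C) NumPy array, which torch.tensor accepts.

```python
import numpy as np
import torch
from PIL import Image as PILImage

# Placeholder for the decoded image; in practice this would come from the URL.
pil_img = PILImage.new("RGB", (8, 8))

arr = np.asarray(pil_img)   # shape (8, 8, 3), dtype uint8
t = torch.tensor(arr)       # torch.tensor as recommended, not torch.Tensor
print(t.shape)              # torch.Size([8, 8, 3])
```

torch.tensor infers the dtype (uint8 here) from the array, whereas torch.Tensor always produces float32 and rejects non-sequence inputs, which is exactly the TypeError in the traceback above.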
torchvision.transforms currently works with PIL.Images or tensors, so you would have to convert the IPython.display.Image to a PIL.Image somehow to use the transformations.
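One possible conversion, under the assumption that the IPython.display.Image actually holds the raw encoded bytes in its .data attribute (for a URL this requires constructing it with embed=True; otherwise the object only stores the URL and .data is empty). The bytes can then be decoded with PIL via an in-memory buffer:

```python
import io

from PIL import Image as PILImage

def ipython_image_to_pil(ipy_img):
    """Decode the raw bytes held by an IPython.display.Image-like object
    (anything exposing encoded image bytes as .data) into a PIL.Image."""
    return PILImage.open(io.BytesIO(ipy_img.data))
```

After this, the usual pipeline applies, e.g. `Resize(512)(ipython_image_to_pil(a))`. Note that the download cost moves into the embed=True fetch, so the speed advantage measured above may not survive the conversion.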