Speed considerations for operations on numpy arrays vs torch tensors

tl;dr: Are operations (multiplication, addition, slicing, cropping, clamping, etc.) faster on numpy arrays or torch tensors, or should I expect roughly the same performance?

I just started using PyTorch (previously used Torch) and I’m wondering how numpy arrays and torch tensors compare performance-wise. I’m used to the many in-place operations that Torch supports, whereas numpy seems to copy data around a lot more. Any insights on this?
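For reference, this is the kind of in-place usage I mean in both libraries (a rough sketch, array sizes are arbitrary):

```python
import numpy as np
import torch

a = np.random.rand(512, 512).astype(np.float32)
t = torch.rand(512, 512)

# numpy: in-place operators and the out= argument avoid allocating a new array
np.multiply(a, 2.0, out=a)   # in-place multiply
a += 1.0                     # in-place add
np.clip(a, 0.0, 1.0, out=a)  # in-place clamp

# PyTorch: trailing-underscore methods modify the tensor in place
t.mul_(2.0)
t.add_(1.0)
t.clamp_(0.0, 1.0)
```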

Example:
I have an image dataset from which I will load images and perform multiple operations on them (cropping, resizing, slicing, multiplication/addition, etc.). I only care about the end result of these operations, so in-place operations would be preferred (at least I think).
These operations will be different for each iteration, so loading a preprocessed version of the set won’t work.
I will be using OpenCV to load the images, so they will be numpy arrays once loaded.
Should I convert them to tensors first and do the operations on the tensors, or leave the conversion for the end? Something like the sketch below is what I have in mind.
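(A rough sketch of the conversion step; the file path is hypothetical and the code falls back to a dummy array if the file is missing:)

```python
import cv2
import numpy as np
import torch

img = cv2.imread("example.jpg")            # hypothetical path; HWC, BGR, uint8
if img is None:                            # imread returns None on failure
    img = np.zeros((480, 640, 3), dtype=np.uint8)

# torch.from_numpy shares memory with the numpy array, so the conversion
# itself is essentially free (no copy). The .float() below does copy.
t = torch.from_numpy(img)                  # still uint8, shares memory
t = t.permute(2, 0, 1).float().div_(255)   # CHW float in [0, 1]

# Cropping/slicing returns views in both libraries, so it is cheap either way.
crop = t[:, 100:200, 100:200]
```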

Honestly, it’s a mixed result: some ops are faster in numpy and others are faster in PyTorch.
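If you want to check for your own workload, here’s a quick timing sketch. Results depend heavily on array size, dtype, the BLAS/ATen builds, and thread settings, so treat any numbers as indicative only:

```python
import timeit
import numpy as np
import torch

a = np.random.rand(1000, 1000).astype(np.float32)
t = torch.rand(1000, 1000)

# Time the same elementwise expression in both libraries (ms per call).
numpy_ms = timeit.timeit(lambda: a * 2.0 + 1.0, number=100) * 10
torch_ms = timeit.timeit(lambda: t * 2.0 + 1.0, number=100) * 10
print(f"numpy: {numpy_ms:.3f} ms/op, torch: {torch_ms:.3f} ms/op")
```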
