Fast Tensor access in Python?

In Lua Torch, we can access a Tensor through a LuaJIT FFI pointer as fast as in C. Is there something similar in PyTorch?


No. Python doesn’t have JIT compilation the way LuaJIT does.

However, you can use Cython: Cython code is nearly as concise as Python, and typed indexing compiles down to C-speed loops.
For now, in your Cython code you can convert the Tensor to a NumPy array with .numpy() and modify that array; since the memory is shared, this modifies the Tensor itself.
Here’s an example of using Cython to modify NumPy arrays:
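A minimal sketch of the idea (the module and function names here, `increment.pyx` and `add_one`, are illustrative, not from an official example):

```cython
# increment.pyx -- illustrative name; build with cythonize("increment.pyx")
import numpy as np
cimport numpy as cnp

def add_one(cnp.ndarray[cnp.float32_t, ndim=1] arr):
    # Typed loop: Cython compiles this into a plain C loop over the buffer.
    cdef Py_ssize_t i
    for i in range(arr.shape[0]):
        arr[i] += 1.0
```

And then from PyTorch:

```python
import torch
from increment import add_one  # the compiled Cython module from above

t = torch.zeros(5)       # float32 by default
add_one(t.numpy())       # .numpy() shares memory, so this mutates t itself
print(t)                 # every element is now 1.0
```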


Thanks a lot for the reply and example. :slight_smile: I noticed the documentation mentions Cython and Numba; this is really sweet.

Another, maybe more noob, question: since we are talking about access speed here, what is the overhead of the .numpy() call?

The overhead of .numpy() is zero. We don’t do any memory copies whatsoever; we just wrap the Tensor’s C struct as a NumPy array (the underlying memory is shared between the Tensor and the NumPy array), so it’s free.
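You can verify the sharing in a couple of lines (a minimal check, using only standard torch/NumPy behavior):

```python
import torch

t = torch.zeros(3)
a = t.numpy()    # no copy: a is a view over t's memory
a[0] = 42.0
print(t)         # the write through the NumPy array is visible in the Tensor
```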


And if you feel comfortable working with torch Tensors directly, you can check out our FFI examples.
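For a rough idea of what raw-pointer access looks like from plain Python (just a ctypes sketch assuming a contiguous float32 Tensor, not the FFI examples themselves):

```python
import ctypes
import torch

t = torch.zeros(4)  # contiguous float32 Tensor
# Wrap the Tensor's raw data pointer as a C float array -- no copy is made.
buf = (ctypes.c_float * t.numel()).from_address(t.data_ptr())
buf[0] = 1.0
print(t)  # the write through the raw pointer shows up in the Tensor
```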
