Connecting XND to PyTorch

Hey all,

As the original author of NumPy, I have been thinking for a long time about how to improve interoperable array computing in Python and other high-level languages. Two years ago, an approach to refactoring NumPy occurred to me that benefits multiple languages and frameworks, and we have been making progress on this low-level refactoring for a while now.

This work is nearing its first viable release (developer releases are available now) under the name XND. XND is the typed-container framework we have built to let a generalized array concept be used by other libraries and languages. It consists of a generalized type system (ndtypes), a generalized typed container (xnd) over any memory (CPU, GPU, disk), and a function system (gumath) that allows registration of specialized kernels and provides general broadcasting.
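To make those three pieces concrete, here is a minimal sketch of what using them from Python looks like, based on the current developer releases; the exact spelling of the API may still shift before the first viable release, and `sin` is assumed to be one of the example kernels bundled with gumath.

```python
from ndtypes import ndt
from xnd import xnd
import gumath.functions as fn

# ndtypes: a standalone, datashape-style type description.
t = ndt("2 * 3 * float64")
print(t)                        # 2 * 3 * float64

# xnd: a typed container; the type is inferred from the Python data.
x = xnd([[1.0, 2.0, 3.0],
         [4.0, 5.0, 6.0]])
print(x.type)                   # 2 * 3 * float64
print(x.value)                  # nested-list view of the data

# gumath: kernels registered per type signature, applied with broadcasting
# over the container; `sin` is one of the bundled example kernels.
y = fn.sin(x)
print(y.type)                   # 2 * 3 * float64
```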

The documentation is still nascent, but all of the code is on GitHub (under the Plures organization): https://github.com/plures

Development is being sponsored by Quansight (and, right now, by Quansight customers), but our conversations about its development are public on Gitter: https://gitter.im/Plures/xnd-ml

We will be having a sprint at PyCon if anyone will be there and wants to meet with us.

Is there anyone in this community interested in working with us on connecting XND containers with PyTorch?
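As a starting point for discussion, the most naive bridge is a copy through plain Python objects, sketched below; the interesting work is doing this zero-copy by sharing the underlying buffer and mapping ndtypes to and from torch dtypes. The torch calls here are standard; on the xnd side I only use the plain constructor and the `.value` attribute, and anything beyond that is an open question for the integration.

```python
import torch
from xnd import xnd

# Naive, copy-based round trip via nested Python lists. This is only a
# placeholder to frame the discussion: a real bridge would share memory
# (and type information) between the two container models instead.
t = torch.tensor([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])

x = xnd(t.tolist())             # torch.Tensor -> xnd (copies)
print(x.type)                   # 2 * 3 * float64

t2 = torch.tensor(x.value)      # xnd -> torch.Tensor (copies)
print(t2.shape)                 # torch.Size([2, 3])
```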

Thanks,

-Travis