Roadmap for Torch and PyTorch

Hi,
First, I did not see it coming… after all those slides stating that Lua is more efficient and easier to learn than Python, this is an impressive move!
As a “native” Torch user I find it both interesting and worrying. I have read the nice tutorial “Introduction to PyTorch for Torchies” and I see a lot of improvements, which is great. I have a few questions regarding PyTorch and Torch:

  • Why did you choose to create this Python interface now? It did not seem like a priority for the community (although I understand it will probably increase the adoption rate).
  • As a Torch user, I invested some time in learning how Lua and Torch operate. I assume they will keep operating the same way in the near future; can you give us some insight into this?
  • Again as a Torch user, do you see big advantages to moving from Torch to PyTorch, without sacrificing performance for example? Maybe you could share your vision of how PyTorch will be used at Facebook or Twitter?

I want to be sure that continuing to work with Torch, by which I mean not switching to PyTorch, will not be a bad decision.
This is absolutely not a criticism of Torch/PyTorch; it is surely an opportunity for the community.
Thanks!


hi pierre,

Torch is not going anywhere. PyTorch and Torch use the same C libraries that contain all of the performance-critical code: TH, THC, THNN, THCUNN, and they will continue to be shared.
We still have, and will continue to have, engineering effort on Torch itself, and we have no immediate plans to change that.

We have also mentioned this to the folks developing the OpenNMT project: GitHub - OpenNMT/OpenNMT: Open Source Neural Machine Translation in Torch (deprecated)

Continue to develop things in whichever frontend you want (Torch or PyTorch) and you can be assured that the Lua side of things will be maintained.

Coming to “all those slides stating that Lua is more efficient and easier to learn than Python”:

We have done some thorough benchmarks and found this not to be the case, especially for the Torch ecosystem, where we cross the C boundary all the time (LuaJIT traces stop every time you cross the C boundary).

Again as a Torch user, do you see big advantages to moving from Torch to PyTorch, without sacrificing performance for example?

Recurrent nets, weight sharing, and memory usage will be big positives with PyTorch compared to Torch, while PyTorch retains the flexibility of interfacing with C and the current speed of Torch.
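To make the weight-sharing point concrete, here is a minimal sketch (the module name and sizes are illustrative, not from the thread): the same `nn.Linear` cell is reused at every time step, so the parameters exist only once no matter how long the sequence is, and the graph is built step by step by ordinary Python code.

```python
# Minimal sketch: one shared nn.Linear cell reused at every time step,
# so a long sequence does not create extra copies of the parameters.
import torch
import torch.nn as nn

class TinyRNN(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(TinyRNN, self).__init__()
        self.i2h = nn.Linear(input_size + hidden_size, hidden_size)  # shared weights

    def forward(self, inputs, hidden):
        # the graph is built step by step as this Python loop runs,
        # so variable-length sequences need no padding or fixed unrolling
        for x in inputs:
            hidden = torch.tanh(self.i2h(torch.cat([x, hidden], dim=1)))
        return hidden

rnn = TinyRNN(input_size=8, hidden_size=16)
sequence = [torch.randn(1, 8) for _ in range(5)]  # a 5-step sequence
h = rnn(sequence, torch.zeros(1, 16))
print(h.shape)  # torch.Size([1, 16])
```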


Hi all,

I’ve taken a brief look at PyTorch and I’m still unclear on the “major” differences between Torch7 and PyTorch. If you could briefly describe them it would be awesome (it could be used in an F.A.Q. as well 🙂).

Also, I’ve tried to find a roadmap for the future (2017+) of either of these implementations on top of the torch computing framework and found very little. I do see some of the ideas proposed in the previous Torch roadmap coming to life (like tutorials, standardized datasets, and a nice forum to hang out in), so I must point out the big elephant in the room: which one will the focus be on now, PyTorch or Torch7?

Don’t get me wrong, I much prefer Python’s libraries because they are more standardized and mature than Lua’s, and Lua lacks much of the key functionality for some of these tasks (scipy, matplotlib, etc.). This wasn’t that big a deal thanks to libraries like “fb.python”, which let you call some Python functionality from Lua, but I realise that Python is a better choice than Lua for a research/development platform.

Another thing I would like the devs to clarify is this: what is the big advantage of PyTorch over TensorFlow? Or shall I say, what makes PyTorch stand out from the rest of the “competition”?

Also, why not a Gitter chat like Torch7 has? WTH man! 😭


Hi, could you elaborate further on “memory usage will be big positives with PyTorch compared to Torch”? Any benchmark or example scenario?

Hi,

You say PyTorch retains the current speed of Torch. My takeaway from MinPy was that defining the graph dynamically in Python hurts speed; is that also true of PyTorch?

And when will you start working on benchmarks?

There is a slight impact on performance (within 5% in most cases), but we know of a couple of things that can speed it up further. Remember, it’s still in beta.
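For readers wondering what “dynamic definition of the graph” means in practice: PyTorch records the autograd graph as ordinary Python code executes (define-by-run), which is where the small per-iteration overhead comes from. A minimal sketch, using the current tensor API and illustrative names and sizes:

```python
import torch
import torch.nn as nn

layer = nn.Linear(10, 10)

def forward(x):
    # define-by-run: the graph is recorded while this code executes,
    # so loops and branches can depend on the data itself
    h = x
    for _ in range(5):
        h = torch.relu(layer(h))
        if h.norm() > 3.0:  # data-dependent early exit changes the graph
            break
    return h.sum()

x = torch.randn(1, 10, requires_grad=True)
loss = forward(x)
loss.backward()        # backprop through whichever path was actually taken
print(x.grad.shape)    # torch.Size([1, 10])
```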

We’ll start adding benchmarks soon, but if you want to try things out right now, you can check the imagenet example and compare it with fb.resnet.torch.
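If you do run such a comparison yourself, a rough way to time the PyTorch side is sketched below. This is illustrative only: it times a torchvision ResNet-18 on random data rather than the actual imagenet example, and it assumes torchvision is installed alongside PyTorch.

```python
import time
import torch
import torchvision.models as models

model = models.resnet18()
x = torch.randn(32, 3, 224, 224)
if torch.cuda.is_available():
    model, x = model.cuda(), x.cuda()

model.train()
for _ in range(3):                 # warm-up iterations
    model(x).sum().backward()
if torch.cuda.is_available():
    torch.cuda.synchronize()       # let queued GPU work finish before timing

start, iters = time.time(), 10
for _ in range(iters):
    model(x).sum().backward()
if torch.cuda.is_available():
    torch.cuda.synchronize()
print((time.time() - start) / iters, "seconds per iteration")
```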

You can try running the examples. We’re usually seeing 30-50% memory usage improvements. Benchmarks coming soon.
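One way to check a claim like this on your own models is sketched below. It is illustrative only (not how the numbers above were measured), assumes a CUDA-enabled build of PyTorch, and uses torch.cuda.memory_allocated, which reports the GPU memory currently held by tensors.

```python
import torch
import torch.nn as nn

# Illustrative only: how much GPU memory a forward/backward pass holds on to.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).cuda()
x = torch.randn(256, 1024, device="cuda")

loss = model(x).sum()
loss.backward()
torch.cuda.synchronize()
print(torch.cuda.memory_allocated() / 1024 ** 2, "MiB currently allocated")
```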

For those of you interested, there is a detailed roadmap here: https://github.com/apaszke/pytorch-dist#timeline

That’s an old README; it has changed a bit since then.