PyTorch vs. TensorFlow

Hello Moderators,

I have loved PyTorch from using it for the past 2 months, but suddenly my organization wants to move to TensorFlow because the new leadership suggests so. Can anyone who has used both recently suggest a few pointers in favor of PyTorch and a few cons of TensorFlow, so that I may defend my love?


I’m not the most qualified person to answer this, but IMO: PyTorch’s dynamic computational graph. Being able to print, adjust, and debug the code without this session BS makes debugging much easier. However, TensorFlow introduced “Eager Execution” this summer to be more similar to PyTorch. An argument is that PyTorch has always been like this: imperative and define-by-run.
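A minimal sketch of what define-by-run means in practice (assuming a standard PyTorch install): the graph is built as ordinary Python executes, so you can print intermediate tensors and use plain `if` statements, with no session or placeholder setup.

```python
import torch

# Define-by-run: each line executes immediately and the autograd
# graph is recorded as a side effect of running ordinary Python.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x * 2
print(y)  # inspect intermediate values directly, no session.run() needed

# Plain Python control flow shapes the graph on this particular run.
if y.sum() > 5:
    z = y ** 2
else:
    z = y + 1

z.sum().backward()
print(x.grad)  # gradients are available right away
```

Since z = (2x)^2 here, the gradient is 8x, i.e. `[8., 16., 24.]`. You can drop a debugger breakpoint anywhere in this code and inspect live tensors, which is the ease-of-debugging point above.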


I had this point in mind, but after the introduction of Eager Execution I am finding it difficult to convince people with anything other than ease of use, at least for me.


For me, personally, these very forums are invaluable and one of the main reasons I recommend PyTorch.

Also, I like how PyTorch cares about its ecosystem. Depending on what you plan to do, the projects listed there will help you quite a bit. For example, some of the people there do many great things (and switched to PyTorch after doing Keras before), and the other listed projects do really awesome things, mostly in more specific areas.

On the technical side, PyTorch 1.0 will be released in a few days. My impression is that the upcoming TensorFlow 1.x to 2.0 transition, which brings many good features such as the first-class eager execution support that you mention, may require updating your code.

That said, I’m sure that TensorFlow isn’t a bad choice either, and many bright people build great things with it.

Best regards



Is there really no downside to TensorFlow that PyTorch remedies?

I would not think there is a “you can do X in A but it’s 100% impossible in B”.

Of course, there are plenty of people with all sorts of opinions on PyTorch vs. TensorFlow, or fastai vs. Keras, but I think most people are just expressing their style preference.

How would you measure “better”? One way could be to look at benchmarks (I think there will be something during/after NeurIPS, so in a few days) and see what kind of models people use for competing there. Another popular measure is “what do research papers use”; I’m sure we’ll see something about that for NeurIPS, too.

Best regards



PyTorch being relatively new, most research papers so far have had to be in TensorFlow. Maybe we shouldn’t look at that as a metric right now. I agree with the rest.


I obviously can’t take sides in this debate without coming off as biased :slight_smile:

Pytorch being relatively new, most research papers have to be in Tensorflow.

Just wanted to point out that


There’s no doubt about its fan following in the research community, and I have a personal preference for PyTorch myself. But people in the industry still express concerns about differences in results when the same code runs on a GPU vs. a TPU vs. a CPU, plus its support on various cloud/infra platforms. I’m yet to witness these for myself, though, and am still unable to fathom or validate their concerns.
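One hedged way to probe the cross-device concern yourself (assuming a standard PyTorch install; the model and sizes here are arbitrary): run the identical model and input on CPU and, if available, on GPU, and compare the outputs. Floating-point results typically differ in the last few bits across devices, not wildly, so the comparison should use a tolerance rather than bitwise equality.

```python
import torch

torch.manual_seed(0)  # fix initialization so both runs see the same weights
model = torch.nn.Linear(4, 2)
x = torch.randn(8, 4)

cpu_out = model(x)

# If a CUDA device is present, run the same computation there and compare.
if torch.cuda.is_available():
    gpu_out = model.cuda()(x.cuda()).cpu()
    # Expect tiny float discrepancies from different kernel implementations,
    # not bitwise-identical results.
    print("max abs diff:", (cpu_out - gpu_out).abs().max().item())
```

This doesn’t settle the GPU/TPU debate, but it turns a vague worry into a number you can measure for your own models.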

But at the end of the day, our leadership just made us pick Keras to write the code for production, and I really wasn’t aware enough of the advantages PyTorch has on offer to defend my interests and nullify their concerns :frowning:

I have just started PyTorch and found it harder than TensorFlow.
I found TensorFlow’s objects and classes more cleanly designed than PyTorch’s, but maybe I am wrong.

Actually, I was wrong. TensorFlow is more oriented toward production environments, while PyTorch is more oriented toward research environments. I like it.