PyTorch's handling of tensor shape vs TensorFlow

I have been having a hard time with TensorFlow lately: it obliterates shape information when I do something like batch_flatten. I find this very annoying, as it is a lot of work to verify that my calculations are functioning as they should, instead of merely appearing to work by luck through broadcasting.
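
For concreteness, here is a minimal sketch of the kind of shape erasure I mean (assuming TF 1.x graph mode with the Keras backend; the versions and shapes are just my setup, not a claim about every configuration):

```python
import tensorflow as tf
from keras import backend as K

# The batch dim is unknown, but 8 * 8 * 16 = 1024 is fully determined:
x = tf.placeholder(tf.float32, shape=(None, 8, 8, 16))
y = K.batch_flatten(x)

# batch_flatten reshapes via a dynamic shape tensor, so the static
# shape typically comes back as (?, ?) -- the known 1024 is dropped,
# and later shape mistakes surface only at run time:
print(y.get_shape())
```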

Has anyone experienced this in TF and knows what I am talking about? Does PyTorch improve upon this? If the answer is yes, that alone is reason enough for me to switch.

Thank you,
Isaac

Hi,

I am not very familiar with TensorFlow, but I personally don't like automatic broadcasting, as it tends (in my mind) to hide bugs or make ops do something different from what you meant.
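
As a toy illustration of the kind of bug I mean (a hypothetical example, not code from a real project):

```python
import torch

preds   = torch.randn(4, 1)   # column vector, e.g. model outputs
targets = torch.randn(4)      # flat vector of labels

# You intended an elementwise difference of 4 numbers; broadcasting
# silently expands (4, 1) against (4,) into a 4x4 matrix instead,
# and everything downstream still "runs" without any error:
diff = preds - targets
print(diff.shape)             # torch.Size([4, 4])
```
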
Originally PyTorch had no broadcasting whatsoever. Some of it has been added recently to make things easier for NumPy users, but it is not as broadly used as in TensorFlow. Advanced indexing was also added after the original release, so you can do everything without ever using it if you want. See the sketch below.
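
Here is a small sketch of that explicit, no-broadcasting style, written with current torch names (view / expand_as), which is roughly what pre-broadcasting PyTorch code looked like:

```python
import torch

preds   = torch.randn(4, 1)
targets = torch.randn(4)

# Spell out the shapes instead of relying on broadcasting:
diff = preds.view(4) - targets        # shapes match exactly: (4,) - (4,)
print(diff.shape)                     # torch.Size([4])

# And make any intended expansion explicit in the code:
row = torch.randn(1, 4)
mat = torch.randn(3, 4)
out = mat - row.expand_as(mat)        # the (1, 4) -> (3, 4) expansion is visible
```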

Thanks for the response. This has been a helpful insight.