Thank you for the reply.
Before I came to PyTorch 1.0, I was using TensorFlow 1.x. Recently, TF gained an "eager execution" mode, which is define-by-run. In TF 2.0, eager execution is the default, so I think TF code will become Pythonic like PyTorch, and the difference in usage feel will disappear.
TF 2.0 also has a great JIT feature, tf.function, which internally translates eager code into a function that runs as a TF graph. This JIT turns eager code, which is much slower than PyTorch, into something faster than PyTorch. So I tried to make PyTorch faster with torch.jit. (I think Pyro uses torch.jit for speed in its variational inference API.)
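As a minimal sketch of what I mean by "make PyTorch faster with torch.jit" (this example is my own, not from Pyro): scripting a pointwise function compiles it into a TorchScript graph, where the JIT can fuse elementwise kernels, while producing the same results as eager mode.

```python
import torch

# Hypothetical example: a tanh-based GELU approximation, chosen only
# because long pointwise chains are where the JIT fuser can help.
@torch.jit.script
def gelu_approx(x: torch.Tensor) -> torch.Tensor:
    # Runs as a TorchScript graph instead of op-by-op eager execution.
    return 0.5 * x * (1.0 + torch.tanh(0.7978845608 * (x + 0.044715 * x * x * x)))

x = torch.randn(8, 8)
# Same math in plain eager mode, for comparison.
eager = 0.5 * x * (1.0 + torch.tanh(0.7978845608 * (x + 0.044715 * x * x * x)))
scripted = gelu_approx(x)
print(torch.allclose(eager, scripted))
```

Whether this actually beats eager mode depends on the model and hardware, which is exactly what I am unsure about.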
If torch.jit is not for speed, do we even need torch.jit for prototyping in research? And if it is mainly for production, what is the difference between using Caffe2 and using torch.jit?