A few questions for the core team about plans for Caffe2 + PyTorch 1.0 co-existence:
Are there plans to make Caffe2 kernels available in PyTorch? Caffe2 has many vision-specific kernels (e.g. the Detectron ops) that are not currently available in PyTorch.
Are there plans to make Caffe2 and PyTorch kernels the same thing, sharing the same ATen base ops? Or are they going to stay independent and share only the Tensor format?
Do Caffe2 and PyTorch have different optimizing compilers?
Are there benchmarks comparing the performance of Caffe2 and PyTorch (with and without the JIT)?
Thanks a lot! Maybe some of these were already addressed at the PyTorch conference.
@smth @apaszke @fmassa
Thanks @smth for the clarifications!
- Really hoping for more unification, since many Detectron and other vision ops land faster in the Caffe2 world.
- How tied is TorchScript / torch.jit to Python? I mean, what happens if the Julia community writes bindings for PyTorch — will they be able to make use of torch.jit? If you load a model trained somewhere else, will PyTorch be able to optimize its execution as well?
- Are there bigger plans for torch.contrib? Right now it seems to be kept semi-secret.
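On the TorchScript question above, part of the answer is that a scripted model serializes to a self-contained archive that no longer needs the defining Python code, which is what makes non-Python runtimes (e.g. the C++ `torch::jit::load`) possible. A minimal sketch:

```python
import torch

# A trivial module, scripted to TorchScript.
class Scale(torch.nn.Module):
    def forward(self, x, k: float):
        return x * k

scripted = torch.jit.script(Scale())
scripted.save("scale.pt")            # self-contained archive, no Python source needed
loaded = torch.jit.load("scale.pt")  # also loadable from C++ via torch::jit::load
print(loaded(torch.ones(2), 3.0))
```

Whether third-party language bindings (Julia etc.) can drive this path is exactly the open question here; the serialized format itself is Python-free.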
Thanks for the details, @smth! More PyTorch goodness for everyone.
The original PyTorch 1.0 blog post was not very clear on these details, and I think it is important for users to understand the general PyTorch direction, especially as TensorFlow develops its eager mode and heterogeneous compilation technologies take off. By the way, any plans for tighter TensorComprehensions / Halide integration? (https://people.csail.mit.edu/tzumao/gradient_halide/gradient_halide.pdf)
Hi @soumith, is there any news on the big-picture view of the Caffe2 and PyTorch codebases? https://github.com/pytorch/pytorch/tree/master/caffe2/operators still seems to have a lot of ops, and many of them duplicate what PyTorch does, e.g. sin(): https://github.com/pytorch/pytorch/blob/master/caffe2/operators/sin_op.cu