I understand that both Caffe2 and PyTorch have support from Facebook. I have a few questions about them:
- What are the main differences between both the libraries?
- Is one better than the other in certain aspects, i.e., would we choose one over the other based on the problem domain?
- Would pytorch continue to be actively developed or is there a direction where it would be “merged” within caffe2?
Answers to most of your questions can be found on Reddit.
I’ve seen the phrase “for research vs. for industry” (NLTK vs. spaCy) thrown around a lot. What is the difference between the two paradigms? If I work in industry, why wouldn’t I want to use PyTorch, and vice versa? I think this was mentioned by the author in the comments, where he notes that the lines often get blurred:
Yangqing here. Caffe2 and PyTorch teams collaborate very closely to deliver the fastest deep learning applications as well as flexible research, as well as creating common building blocks for the deep learning community. We see Caffe2 as primarily a production option and Torch as a research option, but of course the line gets blurred sometimes and we bridge them very often.
We also adopt the idea of “unframework” - in the sense that we focus on building key blocks for AI. Gloo, NNPACK, and FAISS are great examples of these and they can be used by ANY deep learning frameworks. On top of these, we use lightweight frameworks such as Caffe2 and PyTorch for extremely agile development in both research and products.
but I’m still not clear on why and when I should use which one, especially since Python bindings are available for Caffe2 as well.
Yeah, I also read an article on Caffe2 by NVIDIA and Facebook. I was wondering which one would be better, Caffe2 or PyTorch, and if anybody is a beginner like me, which one should be preferred.
I am by no means an expert, but I think PyTorch is a bit ahead of Caffe2 and would be a good starting point. My question is that I (and, judging from the comments, many others) can’t find a clear line of distinction between the two libraries other than “Caffe2 is for industry and PyTorch is for research”, and I don’t really know what that means. I hope the developers of either (or both?) can pitch in.
Here is my personal opinion; I’m not an expert either.
- Caffe2 is superior for deployment because it can “code once, run anywhere”: it can be deployed on mobile, which is really appealing, and it is said to be much faster than other implementations.
- PyTorch is super elegant and flexible. It can be used like TensorFlow (low level) or like Keras (which borrows a lot from Torch), and because it is dynamic, it can do things they can’t.
- In research, we need to experiment a lot, debug a lot, tune parameters, try the latest weird model architectures, and build our own special networks. PyTorch is highly qualified and flexible for these tasks.
- When deploying, we care more about a robust, generalizable, scalable system. Amazon, Intel, Qualcomm, and NVIDIA all claim to support Caffe2.
- The line gets blurred sometimes: Caffe2 can be used for research, and PyTorch can also be used for deployment.
- Caffe2 is planning to share a lot of backends with Torch and PyTorch; Caffe2 integration is a medium-priority work item in PyTorch, and we will be able to export a PyTorch nn.Module to a Caffe2 model in the future.
- If you are a beginner who wants to learn deep learning and a framework, use PyTorch. You’ll enjoy it.
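To make the “dynamic” point above concrete, here is a minimal toy sketch (my own example, not from the thread): a module whose forward pass depends on runtime data, which PyTorch handles naturally because the graph is rebuilt on every call.

```python
import torch

# Hypothetical toy module: the number of layer applications depends on the
# input values, so the computation graph differs from call to call.
class DynamicNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        # Data-dependent control flow: loop count is computed at runtime.
        repeats = int(x.abs().sum().item()) % 3 + 1
        for _ in range(repeats):
            x = torch.relu(self.linear(x))
        return x

net = DynamicNet()
out = net(torch.randn(2, 4))
print(out.shape)  # torch.Size([2, 4])
```

In a static-graph framework, this kind of control flow would need special graph-level constructs; in PyTorch it is just ordinary Python.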
Is there any Docker image which contains both PyTorch and Caffe2? I am a little bit lazy to install Caffe2 on my machine.
1 GB libTHC! What architectures are you compiling for?
root@b4764712536d:~/programs/pytorch# find / -name libTHC"*" | xargs ls -lrt
-rw-r--r-- 1 root root 273196496 Sep 12 02:37 /opt/conda/lib/python2.7/site-packages/torch/lib/libTHCUNN.so.1
-rw-r--r-- 1 root root 6178128 Sep 12 02:37 /opt/conda/lib/python2.7/site-packages/torch/lib/libTHCS.so.1
-rw-r--r-- 1 root root 1006545712 Sep 12 02:37 /opt/conda/lib/python2.7/site-packages/torch/lib/libTHC.so.1
I think @houseroad didn’t add the relevant binary flags and the Xcompress stuff. I’ll let him know.
With some compress flags, libTHC got reduced to around 260MB. The docker images have been updated.
It seems that Caffe2 was merged into PyTorch (at least some commits on GitHub show so).
What does it mean?
Why did you do it?
From this statement nothing will change for PyTorch users.
The merge seems to be mainly beneficial for the development and engineering efforts in Caffe2 and PyTorch.
There is also a Caffe2 statement.
I’m excited by ONNX, as I’ve shifted my development to PyTorch and production performance is a concern. I haven’t seen any benchmarking that compares TF Serving and Caffe2 in terms of throughput on fixed hardware. I’d also love to see examples of Caffe2 deployed in production using Flask or some other serving mechanism, particularly in a digestible format like a blog post. Has anyone seen that sort of thing before? I’ve seen an example targeting AWS Lambda, but the performance benchmarks there weren’t anywhere close to what we’re getting with a dedicated TF Serving server.
Also wondering… is there a Caffe2 discussion forum equivalent to the PyTorch one? I did a quick Google search and didn’t see anything that seemed as solid as this forum.