Docker vs. source for real-time inference at the edge

Hello!

I have a model that I need to run in real-time scenarios on a Jetson. On my computer it runs very fast with JIT, and now it's time to move to a Jetson for production.

I wonder whether running this model from a Docker image will be as fast as running it from source, installing all the libraries from scratch.
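(For context, the model is an exported TorchScript module; a minimal sketch of how it gets loaded and run on the Jetson, where the file name and input shape are placeholders, not my actual model:)

```python
import torch

# Hypothetical file name; substitute your own exported TorchScript module.
model = torch.jit.load("model_scripted.pt", map_location="cuda")
model.eval()

# Example input; the 1x3x224x224 shape is an assumption, match it to your model.
x = torch.randn(1, 3, 224, 224, device="cuda")

with torch.inference_mode():
    out = model(x)
```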

Thank you!


I think the speed difference is small enough that the main question is which option gets you better support / a better workflow over your product's life cycle, so you likely want to go with what NVIDIA recommends unless you have some other support arrangement.
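If you'd rather measure than take that on faith, you can time the same scripted model in both environments and compare. A minimal benchmark sketch, reusing the placeholder file name and input shape from the question above:

```python
import time
import torch

model = torch.jit.load("model_scripted.pt", map_location="cuda")
model.eval()
x = torch.randn(1, 3, 224, 224, device="cuda")

with torch.inference_mode():
    # Warm up so JIT optimization passes and CUDA context setup
    # don't skew the measurement.
    for _ in range(10):
        model(x)
    torch.cuda.synchronize()

    t0 = time.perf_counter()
    for _ in range(100):
        model(x)
    torch.cuda.synchronize()  # wait for queued GPU work before reading the clock
    print(f"mean latency: {(time.perf_counter() - t0) / 100 * 1000:.2f} ms")
```

If the numbers inside the container and on a source build agree within noise, the decision really does come down to workflow and support.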

I'm saying this as someone who always builds his own PyTorch from source and does offer support for bespoke installations on a commercial basis.

Best regards

Thomas