Local Inference of saved PyTorch models

I am interested in performing local (no server or cloud) inference of saved PyTorch models that I can “deploy” (for example, using PyInstaller) to machines that do not have any dependencies. Can someone point me in the correct direction for this?

My goal is to be able to create a command-line executable that I can share with collaborators who do not have any idea about Python/programming/Docker/containers and would simply like to infer on the models I send (with the correct data, of course).
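For concreteness, the kind of "saved model" I have in mind would be a TorchScript export along these lines (just a rough sketch; the module, class, and file names are placeholders), since that way the receiving machine would not need my original Python class definitions:

```python
# Hypothetical export step on my machine; MyNet and the paths are placeholders.
import torch

from my_project.models import MyNet  # assumed module; replace with the real one

model = MyNet()
model.load_state_dict(torch.load("weights.pth", map_location="cpu"))
model.eval()

# TorchScript serializes the weights together with the computation graph,
# so the receiving side can load the model without the MyNet class.
scripted = torch.jit.script(model)
scripted.save("model_scripted.pt")
```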

Requirements from end product:

  • No “cloud” or remote processing - data privacy is critical, so processing needs to happen locally on Windows and/or Linux machines using the CPU (GPU support would be a huge plus but is not required)
  • No need for the user to install anything (no Python/Docker engine/server/etc.), as this needs to cater to a non-technical audience

Please comment if further clarifications are needed.

Thanks in advance.

Sounds like you just need to create a CLI tool, which you can do with argparse. I like typer, but that has a few more dependencies.
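As a rough sketch of how that could look, assuming the model was saved as TorchScript and the input arrives as a pre-processed tensor saved with `torch.save` (the script name, arguments, and default paths below are placeholders):

```python
# infer.py - minimal sketch of a CLI inference entry point.
import argparse

import torch


def main():
    parser = argparse.ArgumentParser(description="Run local inference on a saved model.")
    parser.add_argument("--model", required=True, help="Path to a TorchScript (.pt) model file")
    parser.add_argument("--input", required=True, help="Path to an input tensor saved with torch.save")
    parser.add_argument("--output", default="prediction.pt", help="Where to write the output tensor")
    args = parser.parse_args()

    # Load the scripted model on CPU; the original Python class definition is not needed.
    model = torch.jit.load(args.model, map_location="cpu")
    model.eval()

    # Load a pre-processed input tensor; a real project would parse raw data here instead.
    data = torch.load(args.input, map_location="cpu")

    with torch.no_grad():
        prediction = model(data)

    torch.save(prediction, args.output)
    print(f"Saved prediction to {args.output}")


if __name__ == "__main__":
    main()
```

A script like this can then be frozen into a single executable with something like `pyinstaller --onefile infer.py`, though bundling PyTorch typically makes the resulting binary quite large.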

Hi @sarthakpati,
I have a similar situation. Did you solve the issue? Could you please share the solution?
Thank you.

Apologies for the late response. I unfortunately do not have a solution for this, yet.