Minimum requirements for TorchServe

Morning, guys.

What are the minimum requirements for deploying TorchServe?
My current system isn't low-spec, but it seems to hang every time I increase the number of workers (see the sketch after the spec list for how I'm scaling them). Specs as follows:

Ubuntu 18.04
8 GB RAM
Intel i5 (6th gen)
SSD
2x RTX 3060 Ti
Model: Mask R-CNN
Compiled with CUDA 11.6
No Docker.
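For context, this is roughly how I'm scaling the workers, using the management API on its default port 8081. The model name maskrcnn is just what I happened to register it as, so treat it as a placeholder:

```python
import requests

# Scale the registered model to 2 workers via the TorchServe management API.
# "maskrcnn" is the name I registered the model under -- substitute your own.
resp = requests.put(
    "http://localhost:8081/models/maskrcnn",
    params={"min_worker": 2, "max_worker": 2},
)
print(resp.status_code, resp.text)

# Check how many workers actually came up before things start hanging.
status = requests.get("http://localhost:8081/models/maskrcnn")
print(status.json())
```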

The Mask R-CNN model has some known issues where it hangs; otherwise your requirements seem fine. I use a less fancy machine than yours.

If you're still seeing the issue with other models, please open an issue here and tag me, I'd be happy to help: GitHub - pytorch/serve: Serve, optimize and scale PyTorch models in production

Thanks for the reply… will give it a go and try another model. Any idea how to speed up inference when using TorchServe? I'm trying to use it on a single machine at the moment, but performance seems to be severely limited by the number of workers. Not sure if it's maybe because I'm using CUDA 11.6?

Hard to say without more information, but I'd start by checking how slow your model is outside of TorchServe. This performance guide may help: serve/performance_guide.md at master · pytorch/serve · GitHub, and so might our benchmark tool: serve/benchmarks at master · pytorch/serve · GitHub
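For example, a rough timing loop like the one below gives you the raw per-image latency before any TorchServe overhead. It uses the torchvision Mask R-CNN as a stand-in for your model, so adjust the weights and input size to match your actual setup:

```python
import time

import torch
import torchvision

# Quick standalone latency check for Mask R-CNN, outside TorchServe.
# Untrained weights are fine here -- we only care about compute time.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torchvision.models.detection.maskrcnn_resnet50_fpn().eval().to(device)

# One dummy 800x800 RGB image in [0, 1]; match your real input size.
dummy = [torch.rand(3, 800, 800, device=device)]

def sync():
    if device.type == "cuda":
        torch.cuda.synchronize()

with torch.no_grad():
    for _ in range(3):  # warm-up so cuDNN autotuning doesn't skew the numbers
        model(dummy)
    sync()
    n = 10
    start = time.time()
    for _ in range(n):
        model(dummy)
    sync()
    print(f"avg latency: {(time.time() - start) / n * 1000:.1f} ms per image")
```

If that standalone number is already close to what you see through TorchServe, the bottleneck is the model itself rather than the serving layer or the worker count.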