Hi, I'd like to improve my inference latency by running PyTorch with GPU acceleration on AWS, if possible. Are there any recommendations on which instance types are available/supported for this? Cost effectiveness is also a priority if there are several options to choose from.
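For context, here is roughly what my inference path looks like today (a simplified sketch; the ResNet-50 model and dummy batch are just placeholders for my actual workload):

```python
import torch
import torchvision.models as models

# Use a GPU if one is available on the instance, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder model standing in for my real one
model = models.resnet50(weights=None).eval().to(device)

# Dummy input batch standing in for my real requests
batch = torch.randn(8, 3, 224, 224, device=device)

with torch.inference_mode():  # skip autograd bookkeeping for lower latency
    output = model(batch)
```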
Thanks in advance.