Tesla K40c vs NVIDIA Titan X for PyTorch

I’m trying to train my model in PyTorch. With the same settings, a Tesla K40c takes 0.46 sec per iteration whereas an NVIDIA Titan X takes 0.18 sec. Both are 12 GB GPUs. Is this just down to the performance gap between the two GPUs?
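
In case it matters, here is a minimal sketch of the kind of timing loop I mean (the model and data are toy placeholders, not my actual model); torch.cuda.synchronize() is needed before reading the clock because CUDA kernels run asynchronously:

```python
import time

import torch
import torch.nn as nn

# Toy stand-in for the real model and batch, just to show the timing pattern.
model = nn.Linear(1024, 1024).cuda()
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
inputs = torch.randn(64, 1024).cuda()
targets = torch.randn(64, 1024).cuda()

torch.cuda.synchronize()              # finish any pending GPU work first
start = time.time()

optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()

torch.cuda.synchronize()              # wait for this iteration's kernels to complete
print('one iteration: %.3f sec' % (time.time() - start))
```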


Yes, the Titan X (if it’s the Pascal version) is much faster than the K40. That performance difference doesn’t surprise me at all, and you can get even bigger gaps between Kepler and Pascal (I’ve seen up to 6x) if you’re using FP16 or your task is very heavy on memory bandwidth.
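
If you want to try FP16, a minimal sketch of what the cast looks like in PyTorch is below (the toy linear layer is just an illustration, and full mixed-precision training normally keeps FP32 master weights, so this only shows the forward pass):

```python
import torch
import torch.nn as nn

# Toy example: cast a model and its inputs to half precision (FP16).
model = nn.Linear(1024, 1024).cuda().half()
inputs = torch.randn(64, 1024).cuda().half()

with torch.no_grad():
    outputs = model(inputs)

print(outputs.dtype)  # torch.float16
```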


Agreed with James Bradbury’s comments. If you are building/buying your own system, look at the Titan. If you are using a cloud provider, AWS uses the K40, and you can use multiple GPUs depending on the P2 instance size you choose. It is my understanding that AWS will be supporting the NVIDIA V100 this fall. Its performance should be at least as good as the Titan X, if not better (https://devblogs.nvidia.com/parallelforall/inside-volta/). Disclosure: I work for AWS.
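
As a rough sketch of the multi-GPU point, PyTorch’s nn.DataParallel splits each batch across whatever GPUs the instance exposes (the linear layer here is only a placeholder):

```python
import torch
import torch.nn as nn

model = nn.Linear(1024, 1024)
if torch.cuda.device_count() > 1:
    # scatter each batch across all visible GPUs and gather the outputs
    model = nn.DataParallel(model)
model = model.cuda()

inputs = torch.randn(256, 1024).cuda()
outputs = model(inputs)
```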

Nick


Oops. My mistake. The P2 utilizes the K80 today.

Yeah, depending on your application, Volta might be as much faster than the Titan X as the Titan X is over the K40/K80.