CUDA Best Practices for PyTorch

I'm curious about best practices:

  1. When should I use cuda for matrix operations and when should I not use it?
  2. Are cuda operations only suggested for large tensor multiplications?
  3. What is a reasonable size after which it is advantageous to convert to cuda tensors?
  4. Are there situations when one should not use cuda?
  5. What’s the best way to convert between cuda and standard tensors?
  6. Does sparsity affect the performance of cuda tensors?

  1. If your project only involves small computations, use the CPU; otherwise use CUDA.
  2. Unless you are writing advanced code, keep the whole project on one device: either all CPU or all CUDA.
  3. It depends on the operation; you have to benchmark and see.
  4. If the model is very small, there is no point in using CUDA.
  5. `.cuda()` and `.cpu()` (or the more general `.to(device)`).
  6. Yes; sparse operations are often faster on the CPU.
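As a minimal sketch of point 5: `.cuda()` and `.cpu()` work, but `.to(device)` is the device-agnostic spelling, since the same code then runs on machines with or without a GPU (the tensor shape here is arbitrary):

```python
import torch

# Pick the GPU when available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(1000, 1000)

# Device-agnostic style: .to(device) moves in either direction.
x_dev = x.to(device)

# The older shortcuts still work: .cuda() moves to GPU, .cpu() back to host.
x_back = x_dev.cpu()

print(x_dev.device, x_back.device)
```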

You can run this and provide the output:
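The original snippet was not included in this thread; a minimal benchmark along those lines might look like the sketch below (`time_matmul` is a hypothetical helper, and the sizes are arbitrary). Note the `torch.cuda.synchronize()` calls: GPU kernel launches are asynchronous, so without them the timings would be meaningless.

```python
import time
import torch

def time_matmul(n, device, repeats=10):
    """Average the wall-clock time of an n x n matmul on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    a @ b  # warm-up, so one-time CUDA initialization is not counted
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for the asynchronous GPU work to finish
    return (time.perf_counter() - start) / repeats

devices = [torch.device("cpu")]
if torch.cuda.is_available():
    devices.append(torch.device("cuda"))

for n in (64, 256, 1024):
    for device in devices:
        print(f"n={n:5d}  {device.type:4s}  {time_matmul(n, device):.6f} s")
```

On most hardware the CPU wins at the small sizes and the GPU pulls ahead as `n` grows, which is why point 3 above says you have to benchmark on your own setup.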