No, you don’t need to manually synchronize your code unless, for example, you want to profile it, or you are using custom CUDA streams and want to synchronize the entire device. PyTorch uses the default stream, so no explicit synchronization is needed for correctness.
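To illustrate the profiling case: since CUDA kernel launches are asynchronous, a host-side timer can stop before the GPU work actually finishes, so you synchronize before reading the clock. A minimal sketch (the helper name `time_gpu_op` is just illustrative; it falls back to CPU if no GPU is available):

```python
import time

import torch

def time_gpu_op(fn, n_iters=100):
    """Time an operation, synchronizing so GPU timings are accurate."""
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # wait for any pending GPU work before timing
    start = time.perf_counter()
    for _ in range(n_iters):
        fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # wait for all queued kernels to finish
    return time.perf_counter() - start

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(256, 256, device=device)
elapsed = time_gpu_op(lambda: x @ x)
print(f"100 matmuls took {elapsed:.4f}s on {device}")
```

Without the second `torch.cuda.synchronize()`, the measured time would mostly reflect the cost of launching the kernels, not of running them.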