How to check if DistributedDataParallel is working

I’m not sure how to check whether DistributedDataParallel is actually working. Some sources I saw said you have to split the input data manually (e.g. with a `DistributedSampler`), but I didn’t do that, and according to nvidia-smi both of my GPUs are being used.

How can I make sure DistributedDataParallel is averaging gradients into one model, and not training two separate models?
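One way to check this is a minimal sketch like the following (assuming PyTorch is installed; it runs on CPU with the `gloo` backend, and the helper names `_worker` / `check_ddp_sync` are made up for this example). Each rank gets *different* data, so the local gradients differ; if DDP is really all-reducing gradients, every rank still ends the step with identical weights:

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def _worker(rank, world_size, results):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29501"       # arbitrary free port
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    torch.manual_seed(0)                      # identical init on every rank
    model = DDP(torch.nn.Linear(4, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    # deliberately DIFFERENT data per rank, so local gradients differ
    x = torch.full((8, 4), float(rank + 1))
    model(x).pow(2).mean().backward()         # DDP all-reduces grads here
    opt.step()

    # record this rank's flattened weights for comparison
    results[rank] = torch.cat([p.detach().flatten()
                               for p in model.parameters()])
    dist.destroy_process_group()

def check_ddp_sync(world_size=2):
    results = mp.Manager().dict()
    mp.spawn(_worker, args=(world_size, results),
             nprocs=world_size, join=True, start_method="fork")
    # if gradients were averaged, all ranks hold the same weights
    return torch.allclose(results[0], results[1])

if __name__ == "__main__":
    print("weights identical across ranks:", check_ddp_sync())
```

If this prints `True`, the gradients are being synchronized; two independently trained models would drift apart after the very first step on different data. The same idea works inside a real GPU training job: `dist.all_gather` the parameters (or a checksum of them) and compare across ranks after an optimizer step.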