How can I see which device is computing which layer after wrapping a model with DataParallel?


So this is an updated question:

When using DataParallel, how can I find out which device is computing which gradient? For the same layer, how would I print something like

"Device 1: [ …]"
"Device 2: [ …]"


Bump. Updated question.

Would appreciate any help.

I’m not sure I fully understand the question, but if you would like to print the device of the current input and parameters, you could add a print statement to your forward method:

def forward(self, x):
    print(x.device, self.fc.weight.device, ...)
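To expand on that idea, here is a minimal sketch (the `Net` module, layer sizes, and print messages are made up for illustration). Since `DataParallel` runs one replica of the module per device, a `print` inside `forward` reports which device handles which slice of the batch, and a `register_hook` on the output tensor fires during backward on the same replica, so it can report the gradient and its device:

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        # Each replica runs this on its own device, so the print
        # shows which device computes which chunk of the batch.
        print(f"forward on {x.device}, fc.weight on {self.fc.weight.device}")
        out = self.fc(x)
        # The hook fires during backward on the same replica and
        # reports the incoming gradient and its device.
        out.register_hook(
            lambda grad: print(f"backward on {grad.device}: grad norm {grad.norm():.4f}")
        )
        return out

device = "cuda" if torch.cuda.is_available() else "cpu"
model = Net().to(device)
if torch.cuda.device_count() > 1:
    # With multiple GPUs, DataParallel scatters the batch across them,
    # so you should see one forward/backward print per device.
    model = nn.DataParallel(model)

x = torch.randn(8, 4, device=device)
out = model(x)
out.sum().backward()
```

On a single device you will see just one pair of prints; with multiple visible GPUs you should see one forward print and one backward print per device, each tagged with its `cuda:N` index.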
