I am training a ViT using nn.DataParallel, and I want to remove its effect when testing. Since I am calculating a few metrics based on the predictions, I want the output to be a single tensor on one GPU. Is there any way for me to turn off the effect of nn.DataParallel, or does model.eval() handle it without any extra help?
model.eval() won't help, but you could use the internal .module attribute to skip the data parallel wrapping via out = model.module(input).
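A minimal sketch of the unwrapping, using a small nn.Linear as a hypothetical stand-in for the ViT (DataParallel also works without GPUs here, since .module bypasses the wrapper entirely):

```python
import torch
import torch.nn as nn

# Hypothetical placeholder model standing in for the ViT
model = nn.DataParallel(nn.Linear(16, 4))

# .module is the original, unwrapped model; running it directly
# keeps the whole forward pass (and its output) on one device
inner = model.module
inner.eval()

x = torch.randn(8, 16)
with torch.no_grad():
    out = inner(x)  # single output tensor, no scatter/gather

print(tuple(out.shape))  # (8, 4)
```

Note that model.module shares parameters with the wrapped model, so you can keep training through the DataParallel wrapper and only unwrap at evaluation time.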
worked. thank you ✺◟(^∇^)◞✺