Should I add @autocast() in a sub-module with multiple GPUs?

Hi,
I am using automatic mixed precision with DataParallel in a single process.
I read the example at https://pytorch.org/docs/stable/notes/amp_examples.html#dataparallel-in-a-single-process, and it says @autocast() should be added to MyModel just before forward.
My question is: should I also add @autocast() in subModel?
For example:
```python
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = subModel(...)

    @autocast()
    def forward(self, x):
        ...
```

cc AMP author @mcarilli

Anything that runs under autocast in a particular thread will have autocast enabled.

MyModel.forward is what DP runs in a side thread. If MyModel.forward is decorated with @autocast(), that takes care of enabling autocast for the side thread.

If subModel.forward runs within MyModel’s forward, you don’t need to additionally decorate subModel.forward.
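To make that concrete, here is a minimal sketch of the pattern (the SubModel layer, the shapes, and the dummy input are made up for illustration):

```python
import torch
import torch.nn as nn
from torch.cuda.amp import autocast

class SubModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    # No @autocast() needed here: this forward runs inside
    # MyModel.forward, where autocast is already enabled.
    def forward(self, x):
        return self.conv(x)

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = SubModel()

    @autocast()  # enables autocast in whichever thread DP runs forward in
    def forward(self, x):
        return self.conv1(x)

model = nn.DataParallel(MyModel().cuda())
out = model(torch.randn(4, 3, 16, 16, device="cuda"))
print(out.dtype)  # torch.float16: the conv ran under autocast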