The accuracy declined after using PyTorch apex

I found that the accuracy declined by about 3%-5%. What's more, the loss value often becomes NaN after using PyTorch apex. Do you know what causes this? Thanks in advance.
Python Version: 3.6.5
PyTorch Version: 1.6

We recommend using the native mixed-precision training utility via torch.cuda.amp, as future development will be focused on it. The docs can be found here.
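For reference, the general pattern looks like this (a minimal, self-contained sketch with a toy model and random data, assuming a CUDA device is available; it is not the code from this thread):

```python
import torch
import torch.nn as nn

device = "cuda"
model = nn.Linear(10, 2).to(device)          # toy placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()

for step in range(10):
    data = torch.randn(8, 10, device=device)            # random placeholder batch
    target = torch.randint(0, 2, (8,), device=device)
    optimizer.zero_grad()

    # Forward pass runs in mixed precision.
    with torch.cuda.amp.autocast():
        output = model(data)
        loss = criterion(output, target)

    # GradScaler scales the loss, unscales the gradients before the optimizer
    # step, and skips the step if inf/NaN gradients are found.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```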

I referred to the docs. Is there any difference?

Better compatibility with PyTorch, future improvements, etc., as described here and here.

I am using torch.cuda.amp in PyTorch. I just added the automatic mixed precision training code described here to a normal training script without any data parallelism. However, I got a very different result, and the loss even becomes NaN. I think the accuracy should only change a little, rather than by this much.

If the loss is getting a NaN value, I assume your model output already contains invalid values as well.
Could you post the model you are using, as well as the stats of your input data (min, max, mean), so that we can try to reproduce this issue?
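E.g., something along these lines would already help (a sketch; `model` and `data` are placeholders for your own model, assumed to be on the GPU, and one input batch):

```python
import torch

def check_inputs_and_output(model, data):
    """Print input stats and check the autocast output for invalid values.

    `model` and `data` are placeholders for your model (already on the GPU)
    and one input batch.
    """
    data = data.cuda()

    # Stats of the input data.
    print("min: ", data.min().item())
    print("max: ", data.max().item())
    print("mean:", data.mean().item())

    # Forward pass under autocast, as in the mixed-precision training run.
    with torch.cuda.amp.autocast():
        output = model(data)

    # Check whether the output already contains NaN/Inf values.
    print("output finite:", torch.isfinite(output).all().item())
```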

OK, I will if I have some free time. Thanks.