Question Regarding Opacus Warning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions

I have a question about the warning thrown by Opacus (it has previously been discussed in other issues, such as this one: deprecation message for non-full backward hook · Issue #328 · pytorch/opacus · GitHub).

I’ve been testing an unmodified GPT-2 Medium model with Opacus. I wanted to confirm that the warning doesn’t necessarily imply that the DP guarantee is affected, particularly since, to my knowledge, Opacus now provides grad sample functions for all the layer types used in GPT-2, and I’ve validated the model’s modules with the ModuleValidator. Any help or additional information would be great, thanks!
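
For reference, here is roughly how I set things up (a minimal sketch; the dummy data loader and optimizer below are placeholders rather than my actual training configuration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import GPT2LMHeadModel
from opacus import PrivacyEngine
from opacus.validators import ModuleValidator

model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

# ModuleValidator.validate returns a list of incompatibilities;
# an empty list means Opacus recognizes all of the model's modules.
errors = ModuleValidator.validate(model, strict=False)
print(errors)  # [] in my case

# Dummy batch of token ids, just to make the example self-contained.
input_ids = torch.randint(0, model.config.vocab_size, (8, 32))
data_loader = DataLoader(TensorDataset(input_ids), batch_size=4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

privacy_engine = PrivacyEngine()
model, optimizer, data_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
)

for (batch,) in data_loader:
    outputs = model(batch, labels=batch)
    outputs.loss.backward()  # the non-full backward hook warning is emitted here
    optimizer.step()
    optimizer.zero_grad()
    break
```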