Undefined symbol: _ZNK2at6Tensor6deviceEv when using C++ CUDA extensions

I’m building some custom ops with CUDA and C++.
When I build my extension with PyTorch 1.8.1+cu102, everything works properly.
Here is the code from my setup.py:

from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

if __name__ == '__main__':
    setup(
        name='my_extension',
        ext_modules=[
            CUDAExtension(
                'my_extension',
                sources=['src/my_extension.cpp', 'src/my_extension.cu'],
            )
        ],
        cmdclass={
            'build_ext': BuildExtension
        }
    )

When I build with PyTorch 1.10.0+cu102, the extension compiles, but when I try to import it I get this error:

Traceback (most recent call last):
  File "/workplace/test_custom_op.py", line 1, in <module>
    import my_extension
ImportError: /home/appuser/.local/lib/python3.6/site-packages/my_extension-0.0.0-py3.6-linux-x86_64.egg/mod_dcn_op_v2.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZNK2at6Tensor6deviceEv
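The mangled symbol itself is informative: running it through c++filt decodes _ZNK2at6Tensor6deviceEv to at::Tensor::device() const, i.e. the dynamic loader cannot resolve a core libtorch symbol at import time. As a rough illustration of how that decoding works, here is a minimal sketch of the Itanium-ABI name scheme; it handles only this simple nested-name, const-member, no-argument case and is not a general demangler:

```python
def demangle_simple(mangled):
    """Decode a simple Itanium-ABI mangled name like _ZNK2at6Tensor6deviceEv.

    Only handles a nested name (N...E) with an optional leading 'K'
    (const member function) and a 'v' (void) parameter list -- just
    enough to read symbols like the one in the ImportError above.
    """
    assert mangled.startswith("_Z")
    i = 2
    const = False
    parts = []
    if mangled[i] == "N":              # nested name follows
        i += 1
        if mangled[i] == "K":          # const-qualified member function
            const = True
            i += 1
        while mangled[i] != "E":       # length-prefixed name components
            j = i
            while mangled[j].isdigit():
                j += 1
            n = int(mangled[i:j])
            parts.append(mangled[j:j + n])
            i = j + n
        i += 1                         # skip the terminating 'E'
    params = "()" if mangled[i:] == "v" else "(...)"
    return "::".join(parts) + params + (" const" if const else "")

print(demangle_simple("_ZNK2at6Tensor6deviceEv"))
# at::Tensor::device() const
```

In practice c++filt (or nm -C on the .so) gives the same answer without any code; the point is that the missing symbol is a plain Tensor accessor, which hints that the extension was linked against a different libtorch than the one loaded at runtime.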

I tried to find similar issues, but they were unresolved.

I’ve read the tutorial that describes how to build extensions:
https://pytorch.org/tutorials/advanced/cpp_extension.html
Here is the relevant quote:

A small note on compilers: Due to ABI versioning issues, the compiler you use to build your C++ extension must be ABI-compatible with the compiler PyTorch was built with. In practice, this means that you must use GCC version 4.9 and above on Linux.

So as I understand it, the error could be caused by a compiler version mismatch. I checked my GCC version and it is 7.5.0.
Am I right about this guess? If so, which version should I use, and if I’m wrong, what could be the source of the error?
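For reference, the compiler and ABI check can be scripted. A small sketch, with the caveats that torch._C._GLIBCXX_USE_CXX11_ABI is an internal PyTorch attribute reporting which C++ ABI the installed wheel was built with, and the try/except keeps the script runnable even where torch or gcc is absent:

```python
import shutil
import subprocess

report = []

# First line of `gcc --version`, if gcc is on PATH.
gcc = shutil.which("gcc")
if gcc:
    out = subprocess.run([gcc, "--version"], capture_output=True, text=True)
    report.append(out.stdout.splitlines()[0])
else:
    report.append("gcc not found on PATH")

# Which C++ ABI the installed PyTorch wheel was built with
# (internal attribute; may change between PyTorch releases).
try:
    import torch
    report.append(f"torch {torch.__version__}, "
                  f"built with CXX11 ABI: {torch._C._GLIBCXX_USE_CXX11_ABI}")
except ImportError:
    report.append("torch not importable in this environment")

print("\n".join(report))
```

If the ABI flag reported by the installed torch differs from what the extension was compiled with (or if the build/ directory still contains objects compiled against 1.8.1), a clean rebuild against the new PyTorch is usually the first thing to try.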