Error while using Conv2d

Hello.
I’m learning PyTorch and I ran into an error while trying to use Conv2d on the GPU.

import torch as trc
import os
# Tell the ROCm runtime to treat the card as gfx1030 (common workaround for consumer
# RDNA2 GPUs); it has to be set before the GPU is used for the first time.
os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"
gpu = trc.device("cuda" if trc.cuda.is_available() else "cpu")

# 1x1x5x5 input with a vertical line of ones in column 2
ex1 = trc.zeros(1, 1, 5, 5)
ex1[0, 0, :, 2] = 1

conv1 = trc.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3)
res1 = conv1(ex1)
print(res1)  # shows the result on the CPU

# move both the input and the layer to the GPU
ex2 = ex1.to(gpu)
conv2 = conv1.to(gpu)

res2 = conv2(ex2)  # error here
print(res2)

When I run the operation on the CPU there is no problem, but trying to do the same on the GPU gives the following error:

MIOpen(HIP): Error [Compile] 'hiprtcCompileProgram(prog.get(), c_options.size(), c_options.data())' naive_conv.cpp: HIPRTC_ERROR_COMPILATION (6)
MIOpen(HIP): Error [BuildHip] HIPRTC status = HIPRTC_ERROR_COMPILATION (6), source file: naive_conv.cpp
MIOpen(HIP): Warning [BuildHip] /tmp/comgr-8c2e12/input/naive_conv.cpp:39:10: fatal error: 'limits' file not found
#include <limits> // std::numeric_limits
         ^~~~~~~~
1 error generated when compiling for gfx1030.
terminate called after throwing an instance of 'miopen::Exception'
what(): /long_pathname_so_that_rpms_can_package_the_debug_info/data/driver/MLOpen/src/hipoc/hipoc_program.cpp:304: Code object build failed. Source: naive_conv.cpp
Aborted (core dumped)

I should clarify that I’m running an AMD GPU with the ROCm build of PyTorch on Linux. So far everything else I’ve tried has worked, and I’ve managed to train deep neural networks on the GPU.
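In case it’s useful, this is roughly the check I run to confirm which build and which GPU PyTorch actually sees (torch.version.hip is only set on ROCm builds, so it prints None on CUDA/CPU ones):

import torch as trc

print(trc.__version__)          # PyTorch version, e.g. a +rocm build string
print(trc.version.hip)          # HIP/ROCm version the wheel was built against
print(trc.cuda.is_available())  # True if the GPU is visible through HIP
if trc.cuda.is_available():
    print(trc.cuda.get_device_name(0))  # name of the detected card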

Honestly, I have no idea what that error means. I don’t know if there’s something wrong with the code or if this is an issue related to PyTorch/ROCm/HIP/AMD.
If anyone could help me with this or at least point me in the right direction, I would appreciate it.

This GitHub issue seems to describe a similar error: AMD Radeon RX 6800 - HIPRTC_ERROR_COMPILATION · Issue #1889 · RadeonOpenCompute/ROCm · GitHub. It appears the solution is to install the missing libstdc++-12-dev package.
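On Ubuntu/Debian that would be something along the lines of sudo apt install libstdc++-12-dev; the exact package name may differ on other distributions.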

It fixed the problem.
Thank you very much!