Trouble using CUDA inside of Docker

Hello, I am trying to get PyTorch running inside Docker, but I'm not having much luck. Here is some terminal output, first from the host machine and then immediately afterward from inside a base nvidia/cuda Docker image:

$ nvidia-smi
| NVIDIA-SMI 440.44       Driver Version: 440.44       CUDA Version: 10.2     |
$ nvidia-docker run --rm -it nvidia/cuda:10.2-base-ubuntu18.04 /bin/bash
# nvidia-smi
| NVIDIA-SMI 440.44       Driver Version: 440.44       CUDA Version: 10.2     |
# apt-get update && apt-get install -y python3 python3-pip && pip3 install torch torchvision
# python3 -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.6.0
Is debug build: No
CUDA used to build PyTorch: 10.2

OS: Ubuntu 18.04.5 LTS
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
CMake version: Could not collect

Python version: 3.6
Is CUDA available: No
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce RTX 2080 SUPER
Nvidia driver version: 440.44
cuDNN version: Could not collect

Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] torch==1.6.0
[pip3] torchvision==0.7.0
[conda] Could not collect

I am using the prepackaged nvidia/cuda Docker images and a minimal Python install, so I'm not sure what could be going wrong. All of the similar problems I have found so far caused nvidia-smi itself to fail, but in my case nvidia-smi works fine, yet torch still reports that CUDA is not available.
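One stdlib-only check that can help narrow this down (a diagnostic sketch of my own, not something from the thread): nvidia-smi talks to the driver through its own path, but torch initializes CUDA by dlopen-ing libcuda.so, so the two can disagree if the driver library isn't on the container's linker path. The helper name below is hypothetical:

```python
import ctypes.util

def cuda_driver_visible():
    """Ask the dynamic linker whether it can locate the CUDA driver
    library (libcuda). Returns the library name if found, else None.
    This is the library torch dlopens when it initializes CUDA, so a
    None here inside the container would explain 'Is CUDA available: No'
    even while nvidia-smi works."""
    return ctypes.util.find_library("cuda")

lib = cuda_driver_visible()
if lib:
    print("linker can see the CUDA driver library:", lib)
else:
    print("libcuda is NOT on the linker path in this environment")
```

If this prints that libcuda is missing inside the container but present on the host, the problem is in the container runtime's library injection rather than in torch itself.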

Update: I wiped the OS, reinstalled everything from scratch, and it is now working.