How to manage SparseConvTensor randomness?

Hi, I’m having some trouble controlling the randomness of the VirConv model (GitHub - hailanyi/VirConv: Virtual Sparse Convolution for Multimodal 3D Object Detection).

I traced it and found that the randomness starts at the SparseConvTensor of the spconv module.

Is there a reason the randomness can’t be controlled in my environment, which is as follows?

ccimport 0.3.7
certifi 2023.7.22
charset-normalizer 3.3.2
click 8.1.7
cloudpickle 2.2.1
cumm-cu111 0.2.9
cytoolz 0.12.0
dask 2023.6.0
easydict 1.11
fire 0.5.0
fsspec 2023.9.2
idna 3.4
imagecodecs 2023.1.23
imageio 2.31.4
importlib-metadata 6.0.0
lark 1.1.8
llvmlite 0.36.0
locket 1.0.0
networkx 3.1
numba 0.53.1
numpy 1.23.1
packaging 23.1
partd 1.4.1
pccm 0.3.4
pcdet 0.3.0+ca5ad85 /home/workspace/jisoocv/KITTI/VirConv
Pillow 10.0.1
pip 23.3
portalocker 2.8.2
prefetch-generator 1.0.3
protobuf 4.25.0
pybind11 2.11.1
PyWavelets 1.4.1
PyYAML 6.0.1
requests 2.31.0
scikit-image 0.19.3
scipy 1.11.3
setuptools 68.0.0
six 1.16.0
spconv-cu111 2.1.20
termcolor 2.3.0
tifffile 2023.4.12
toolz 0.12.0
torch 1.8.1+cu111
torch-scatter 2.0.8
torchaudio 0.8.1
torchvision 0.9.1+cu111
tqdm 4.66.1
typing_extensions 4.8.0
urllib3 2.1.0
wheel 0.41.2
zipp 3.11.0

pytorch 1.8.1
Ubuntu 18.04

with two A6000 GPUs (I’ve been training in multi-GPU mode with distributed learning).
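For reference, the standard PyTorch reproducibility settings I have been applying look roughly like this (a minimal sketch; `seed_everything` is my own helper name, not part of VirConv or spconv):

```python
import os
import random

import numpy as np
import torch


def seed_everything(seed: int) -> None:
    """Seed every RNG that a typical PyTorch training run touches."""
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy RNG (data augmentation, sampling)
    torch.manual_seed(seed)           # CPU RNG
    torch.cuda.manual_seed_all(seed)  # RNG on all CUDA devices
    # Force cuDNN to pick deterministic kernels instead of benchmarking.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    os.environ["PYTHONHASHSEED"] = str(seed)
    # Note: with DDP, DataLoader workers also need fixed seeds,
    # e.g. via a worker_init_fn (not shown here).


# Re-seeding should reproduce the same random draws exactly.
seed_everything(0)
a = torch.randn(4)
seed_everything(0)
b = torch.randn(4)
```

These flags only constrain PyTorch and cuDNN themselves; custom CUDA kernels such as spconv’s are not covered by them, which may be exactly where the remaining randomness lives.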

Are you seeing the same effect using pure PyTorch?
If not, you might want to follow up with the authors of this custom tensor class.

Pure PyTorch seems OK in this environment.
I have looked through the custom tensor class in spconv (GitHub - traveller59/spconv: Spatial Sparse Convolution Library), but there seems to be no problem with how it uses torch. I don’t know where the randomness comes from…
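One way to localize it is to run the same layer twice on the same input, with everything else fixed, and check whether the outputs match bit-for-bit. This is a hypothetical diagnostic sketch, not code from either repository; if a plain dense layer passes the check but an spconv layer on a SparseConvTensor does not, the nondeterminism is coming from the custom kernels rather than from PyTorch:

```python
import torch


def is_deterministic(fn, x: torch.Tensor, trials: int = 2) -> bool:
    """Run fn on the same input several times; True if all outputs match exactly."""
    with torch.no_grad():
        outputs = [fn(x) for _ in range(trials)]
    return all(torch.equal(outputs[0], o) for o in outputs[1:])


# A plain dense layer's forward pass is deterministic given fixed weights/input:
conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.randn(1, 3, 16, 16)
result = is_deterministic(conv, x)

# To probe spconv, wrap one of its sparse convolution layers the same way
# (it needs a SparseConvTensor input, so that part is spconv-specific and
# omitted here).
```

Applying this check layer by layer through the VirConv backbone should narrow the divergence down to the first op whose repeated outputs differ.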

This repository doesn’t use only PyTorch; it also implements custom backend operations, as seen here, which is why I suggested checking these.