NoiseTunnel: CUDA out of memory when trying out

I was trying out Captum and I am getting a CUDA out-of-memory error when I use the NoiseTunnel option.

Can someone help me figure out where I am going wrong?

Your device might not have enough memory, which would explain the OOM error.
Could you explain what noise_tunnel does, and could you try lowering the batch size or the number of samples you are passing to this method?

I am using a GeForce GTX 1070 Ti, which has 8 GB of memory. I was just trying out the example here, and I am using just one image.

Could you check the memory usage via nvidia-smi, make sure the device is free, and stop other processes if necessary?
I haven’t tried the code, so I’m not sure how large the expected memory footprint is.
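
You can also check the memory held by your own Python process from inside the script. A minimal sketch using PyTorch's built-in CUDA memory counters (these give a per-process view, complementing nvidia-smi's whole-device view):

```python
import torch

# Per-process GPU memory counters; values are in bytes, shown here in MB.
print(f"allocated: {torch.cuda.memory_allocated() / 1024**2:.1f} MB")
print(f"reserved:  {torch.cuda.memory_reserved() / 1024**2:.1f} MB")
print(f"peak:      {torch.cuda.max_memory_allocated() / 1024**2:.1f} MB")
```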

I saw that it was using just around 300-400 MB before running this line, and on execution it went out of memory.

@nareshr8, you can reduce n_samples to limit the number of perturbed examples and n_steps to reduce the number of integral approximation steps.
With our recent improvements (the PR was merged a couple of days ago) you should be able to adjust internal_batch_size and run IG for a very large number of steps.
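
For example, the call could look roughly like this. This is a sketch, not the exact tutorial code: the model, input, and target index are placeholders, and note that older Captum versions name the sample-count argument n_samples rather than nt_samples:

```python
import torch
import torchvision.models as models
from captum.attr import IntegratedGradients, NoiseTunnel

# Placeholder model and a single input image; substitute your own.
model = models.resnet18().eval().cuda()
input_img = torch.rand(1, 3, 224, 224, device="cuda")

ig = IntegratedGradients(model)
nt = NoiseTunnel(ig)

attributions = nt.attribute(
    input_img,
    target=0,                # placeholder target class index
    nt_type="smoothgrad_sq",
    nt_samples=5,            # fewer perturbed examples -> smaller expanded batch
    n_steps=25,              # fewer integral approximation steps
    internal_batch_size=5,   # chunk the expanded batch to bound peak memory
)
```

NoiseTunnel forwards the extra keyword arguments (target, n_steps, internal_batch_size) to the wrapped IntegratedGradients, so internal_batch_size caps how many of the expanded inputs are pushed through the model at once, which is what keeps peak GPU memory bounded.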


Thanks a lot @Narine, it worked for me.
