UNet training is not executed on the GPU

Thanks for answering!
First of all, I ran “nvidia-smi” before and after starting my code, and I noticed that “Volatile GPU-Util” increased from 3% to 96% and stayed there throughout the training process. I also noticed that most of the GPU RAM was in use (around 5.8 GB out of 6 GB), so maybe it is running on the GPU after all? Training took about 5-6 hours on a dataset of 5,000 images.
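
If it helps, this is a minimal sketch of how I understand one can check from inside PyTorch that the model and inputs actually live on the GPU (the `model` and `batch` below are just stand-ins, not my real UNet or data):

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# stand-in for the UNet; the real model would be moved the same way
model = nn.Conv2d(3, 16, kernel_size=3).to(device)
batch = torch.randn(1, 3, 256, 256).to(device)   # batch size 1, like in my setup

print(torch.cuda.is_available())          # True means CUDA is visible to PyTorch
print(next(model.parameters()).device)    # should say cuda:0 if the weights are on the GPU
print(batch.device)                       # the inputs have to be on the GPU as well
```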
My batch size is 1, and I'm not sure how to check whether my GPU can actually fit these numbers.
How do I do it?
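
From what I've read, something like this should report how much GPU memory PyTorch is actually using (assuming a single GPU at device index 0), but I'm not sure if this is the right way to answer that question:

```python
import torch

# assumes a single GPU at device index 0
props = torch.cuda.get_device_properties(0)
total_gb = props.total_memory / 1024**3
allocated_gb = torch.cuda.memory_allocated(0) / 1024**3   # memory held by live tensors
reserved_gb = torch.cuda.memory_reserved(0) / 1024**3     # memory kept by PyTorch's caching allocator

print(f"total:     {total_gb:.2f} GB")
print(f"allocated: {allocated_gb:.2f} GB")
print(f"reserved:  {reserved_gb:.2f} GB")
```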