No GPUs allocated when submitting a job script through Slurm with sbatch

Here is my Slurm job script. I requested 4 GPUs and 1 compute node:

#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gres=gpu:4
#SBATCH --ntasks-per-gpu=12
#SBATCH --mem-per-gpu=40G
#SBATCH --time=0:15:00

module use /ifs/opt_cuda/modulefiles
module load python/gcc/3.10
module load cuda11.1/toolkit cuda11.1/blas cuda11.1/fft cudnn8.0-cuda11.1 tensorrt-cuda11.1/

# activate TF venv
source /ifs/groups/rweberGrp/venvs/py310-tf210/bin/activate

python -c "import torch;print(torch.cuda.device_count())"

so torch.cuda.device_count() should print 4, but the actual output is 0.


I have no idea why this is happening. Does anyone have any ideas? Thanks.

Solved: I was running the script with bash instead of submitting it with sbatch. When bash runs the script directly, the #SBATCH lines are treated as ordinary shell comments, so no resources are requested from Slurm and no GPUs are allocated. Submitting with sbatch fixed it.
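A minimal sketch of the failure mode, using a hypothetical demo script (file name and echo line are illustrative, not from the original post): bash treats #SBATCH directives as comments and runs the script on the login node, so no GPU environment is set up.

#!/bin/bash
# Write a tiny job script whose #SBATCH line requests 4 GPUs.
cat > /tmp/demo_job.sh <<'EOF'
#!/bin/bash
#SBATCH --gres=gpu:4
echo "GPUs visible: ${CUDA_VISIBLE_DEVICES:-none}"
EOF

# Wrong: bash ignores the #SBATCH comment and runs locally,
# so Slurm never allocates GPUs and CUDA_VISIBLE_DEVICES is unset.
bash /tmp/demo_job.sh

# Right: sbatch parses the #SBATCH directives and the scheduler
# allocates the GPUs before the script runs on a compute node:
#   sbatch /tmp/demo_job.sh

Inside a properly submitted job, Slurm sets variables such as SLURM_JOB_ID and (for GPU jobs) CUDA_VISIBLE_DEVICES, which is what torch.cuda.device_count() relies on.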