```python
for self.batch_idx, batch in enumerate(self.train_loader_x):
```
I want to use Dassl to train my model, but when execution reached the line above, training hung without raising any error.
I tried setting num_workers=0, as many answers suggest, but the result was the same.
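For reference, this is roughly how I disabled the workers. A minimal sketch; I am assuming the `DATALOADER.NUM_WORKERS` key from Dassl's default config, which may be named differently in other versions:

```python
# Minimal sketch of how I set the worker count. Assumes Dassl's
# yacs-style defaults expose DATALOADER.NUM_WORKERS; adjust the key
# if your version names it differently.
from dassl.config import get_cfg_default

cfg = get_cfg_default()
cfg.DATALOADER.NUM_WORKERS = 0  # no worker subprocesses: data loads in the main process
```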
I submitted my job to Slurm with the following resource requests:
```bash
#SBATCH -G 2   ### two GPUs
#SBATCH -N 1   ### one node
#SBATCH -n 1   ### one task
```
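In case it helps with diagnosis, here is a minimal, standard-library-only sketch of how I can capture where the process is stuck the next time it hangs (the 120-second timeout is an arbitrary choice of mine):

```python
# Sketch: dump every thread's traceback to stderr if the process is
# still running after the timeout, and repeat every 120 s, so the
# hang site shows up in the Slurm log. Placed at the top of the
# training script, before the train loop starts.
import faulthandler
import sys

faulthandler.dump_traceback_later(timeout=120, repeat=True, file=sys.stderr)
```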
Could anyone with deep knowledge of this area help me?