Running a script on a GPU

Hi, I am trying to run my script on the GPU, but I am getting an error.
The data-loading part looks like this:

class_sample_count = np.array([len(np.where(y_train==t)[0]) for t in np.unique(y_train)])
weight = 1. / class_sample_count
samples_weight = np.array([weight[t] for t in y_train])
samples_weight = torch.from_numpy(samples_weight)
sampler = WeightedRandomSampler(samples_weight.type('torch.cuda.DoubleTensor'), len(samples_weight), replacement=True)
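
To sanity-check the weighting math, here is what those lines compute on a toy label array (the label values are made up, just for illustration):

```python
import numpy as np

# toy stand-in for y_train (hypothetical labels)
y_train = np.array([0, 0, 0, 1, 1, 2])

# count how many samples each class has
class_sample_count = np.array([len(np.where(y_train == t)[0]) for t in np.unique(y_train)])
# inverse-frequency weight per class, then one weight per sample
weight = 1. / class_sample_count
samples_weight = np.array([weight[t] for t in y_train])
```

So the rarer the class, the larger the per-sample weight, which is what I want the sampler to use.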

trainDataset = torch.utils.data.TensorDataset(torch.cuda.FloatTensor(X_train), torch.cuda.FloatTensor(y_train.astype(int)))
trainLoader = torch.utils.data.DataLoader(dataset=trainDataset, batch_size=mb_size, shuffle=False, num_workers=1, sampler=sampler)

My model is defined like this:

    class AEE(nn.Module):
        def __init__(self):
            super(AEE, self).__init__()
            self.EnE = torch.nn.Sequential(
                nn.Linear(IE_dim, h_dim),
                nn.Linear(h_dim, Z_dim),
            )

        def forward(self, x):
            output = self.EnE(x)
            return output

    model = AEE()
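
The encoder itself seems fine when I test it standalone on the CPU with placeholder dimensions (the IE_dim / h_dim / Z_dim values here are made up):

```python
import torch
import torch.nn as nn

# hypothetical dimensions, just for a shape check
IE_dim, h_dim, Z_dim = 8, 4, 2

class AEE(nn.Module):
    def __init__(self):
        super(AEE, self).__init__()
        self.EnE = torch.nn.Sequential(
            nn.Linear(IE_dim, h_dim),
            nn.Linear(h_dim, Z_dim),
        )

    def forward(self, x):
        output = self.EnE(x)
        return output

model = AEE()
out = model(torch.randn(5, IE_dim))  # batch of 5 dummy inputs
```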

I am getting this error in the for loop over my trainLoader:

RuntimeError: CUDA error (3): initialization error

Any ideas?