RuntimeError trying to run PyTorch on a GeForce GTX 1650 Ti GPU

Some struggle and hopefully a good answer:

If I put print(x.shape) in the training loop, it prints 20 of these shapes every epoch:
torch.Size([X, 10, 695])
with X being different in all 20 instances and also changing every epoch.

If I put print(mask.shape) in the training loop, it prints 20 of these shapes every epoch:
torch.Size([X, 10])

The 10 is the batch size.
The 695 is the inputDimSize (which I was told is the number of unique classes in the dataset).
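
For reference, a minimal sketch of where shapes like that could come from, assuming each batch is padded to its own longest sequence (the concrete values of X below are made up for illustration):

import torch

# Illustrative only: if every batch is padded to its own longest sequence,
# the first dimension (the X above) differs between batches and between
# epochs, while batch size (10) and inputDimSize (695) stay fixed.
batchSize, inputDimSize = 10, 695
for X in (23, 17, 31):  # example per-batch sequence lengths
    x = torch.randn(X, batchSize, inputDimSize)
    mask = torch.randint(0, 2, (X, batchSize))
    print(x.shape)     # e.g. torch.Size([23, 10, 695])
    print(mask.shape)  # e.g. torch.Size([23, 10])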

Thanks, but unfortunately these shapes are not working:

numClass = 3087
inputDimSize = 3087
embSize = 200
hiddenDimSize = 200
batchSize = 100
numLayers = 2
model = build_EHR_GRU(EHR_GRU, inputDimSize, hiddenDimSize, embSize, numClass, numLayers)

model.cuda()
x = torch.randn((1, 10, 695)).cuda()
mask = torch.randint(0, 2, (1, 10)).cuda()
out = model(x, mask)
> RuntimeError: mat1 dim 1 must match mat2 dim 0
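
As far as I understand the error, it comes from the first matrix multiplication inside the model: the last dimension of x has to match the inputDimSize the model was built with (695 vs. 3087 above). A minimal sketch that reproduces the same message, with nn.Linear only standing in for whatever the model's first layer actually is:

import torch
import torch.nn as nn

# Assumption: the first layer multiplies the input by an
# (inputDimSize x embSize) weight, so x's last dim must be inputDimSize.
inputDimSize, embSize = 3087, 200
first_layer = nn.Linear(inputDimSize, embSize)  # stand-in for the embedding step

x_step = torch.randn(10, 695)  # one time step: (batchSize, 695), but 695 != 3087
out = first_layer(x_step)
# RuntimeError: mat1 dim 1 must match mat2 dim 0
# (newer PyTorch versions phrase it as "mat1 and mat2 shapes cannot be multiplied")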

Dear ptrblck,

with the combination of:
torch.set_default_tensor_type('torch.cuda.FloatTensor')
model.to(device)

and your suggestion of:

    def init_hidden(self, batchSize):
        device = next(self.parameters()).device
        return torch.zeros(1, batchSize, hiddenDimSize, device=device)

It only needed one numpy array to be converted to a torch tensor and it works.
I am sure it is not efficient this way (as it is phenomenally slow) BUT it works.
So thank you very much!!
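
For anyone landing here with the same problem, a rough sketch of how the pieces fit together on my side (the dummy arrays only stand in for what my preprocessing produces; the model call is the one from the snippets above):

import numpy as np
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# dummy stand-ins for the batches coming out of preprocessing
# (same layout as the shapes printed above: (X, batchSize, inputDimSize))
x_np = np.random.randn(23, 10, 695).astype(np.float32)
mask_np = np.random.randint(0, 2, (23, 10)).astype(np.float32)

# the one change that was still needed: the numpy arrays have to become
# torch tensors on the same device as the model before the forward pass
x = torch.from_numpy(x_np).to(device)
mask = torch.from_numpy(mask_np).to(device)

# model built and moved as above, e.g.:
# model = build_EHR_GRU(EHR_GRU, inputDimSize, hiddenDimSize, embSize, numClass, numLayers)
# model.to(device)
# out = model(x, mask)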