Is it possible to send entire input and target to GPU instead of at individual step?

When I try to put the entire TainingData/Labels tensors on the GPU, the run abruptly ends, without any error message, at the line op = net(x). But when I move individual inputs and labels to the GPU, the run works perfectly. Can anyone explain why?

Also, the PyTorch tutorial mentions: “Remember that you will have to send the inputs and targets at every step to the GPU too:”. Why not move the entire input/target to the GPU once at the beginning, instead of at every step?

# Moving the entire dataset to the GPU up front is what crashes:
#TainingData = torch.Tensor(TainingData).to("cuda:0")
#Labels = torch.Tensor(Labels).to("cuda:0")
for epoch in range(1000):
    for i in range(TainingData.shape[0]):
        # Move one input and one label to the GPU per step -- this works
        x = torch.Tensor(TainingData[i].reshape(1, 1, 28, 28)).to("cuda:0")
        y = torch.tensor([Labels[i]], dtype=torch.long).to("cuda:0")
        op = net(x)
        loss = criterion(op, y)

Hello rkp!

My guess would be that you’re getting a gpu out-of-memory error,
but, for some reason, not getting a clean error message.

I have done exactly this (with a smallish problem that fits in
the gpu).

I moved my entire training (and test) set, both inputs and labels,
to the gpu at the beginning, and then indexed into them with
randomly shuffled indices to “create” my batches. This worked
fine for me.
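A minimal sketch of that approach, with dummy random data standing in for the question’s TainingData/Labels (and net/criterion omitted, since they aren’t shown in full here). The whole dataset is transferred once, and batches are formed by indexing with shuffled indices on the same device:

```python
import torch

# Fall back to CPU so the sketch runs anywhere; on a real setup use "cuda:0"
device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Dummy stand-ins for the question's TainingData / Labels
TainingData = torch.randn(512, 1, 28, 28)
Labels = torch.randint(0, 10, (512,))

# One transfer for the entire dataset -- it must fit in GPU memory
TainingData = TainingData.to(device)
Labels = Labels.to(device)

batch_size = 64
for epoch in range(2):
    # Randomly shuffled indices, created on the same device as the data
    perm = torch.randperm(TainingData.shape[0], device=device)
    for start in range(0, TainingData.shape[0], batch_size):
        idx = perm[start:start + batch_size]
        x, y = TainingData[idx], Labels[idx]  # already on the GPU, no copy per step
        # op = net(x); loss = criterion(op, y); ...
```

The per-step .to("cuda:0") calls disappear entirely; the only cost is that the full dataset occupies GPU memory for the whole run, which is why this only works for smallish problems.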

Good luck.

K. Frank

Thanks Frank!! Will try as suggested.