RNN Training Loop Quits During Loss Calculation

Even when EPOCHS is greater than 1, the loop only runs once. Some debugging showed that Python quits at the line loss = torch.tensor(0, dtype=torch.float, device=device) on the second iteration. Even more strangely, when I added del loss at the end of the loop, Python exited immediately after that line ran. Another person tried running my code and it worked fine, but his computer runs Linux and does not have a GPU. Does anyone know how to make the loop run for the full EPOCHS iterations?

# main training loop
for i in range(EPOCHS):
    xs, ts, vs, ws = generateTrajectory()

    # move tensors in list to GPU
    # change dtype and add batch dimension
    xs = list(map(lambda x: x.unsqueeze(0).to(device, torch.float), xs))
    vs = list(map(lambda v: v.unsqueeze(0).to(device, torch.float), vs))
    # convert to 1x1 torch tensors
    ws = list(map(
        lambda w: torch.tensor([[w]], dtype=torch.float, device=device), ws))
    ts = list(map(
        lambda t: torch.tensor([[t]], dtype=torch.float, device=device), ts))

    # pair up the per-timestep tensors for iteration
    superlist = zip(xs, ts, vs, ws)
    loss = torch.tensor(0, dtype=torch.float, device=device)

    # initialize hidden, cell vectors
    hidden, cell = gridnet.init_hidden_cell(vs[0], ws[0])

    # run the network over the trajectory and accumulate the loss per step
    for x, t, v, w in superlist:
        guess, hidden, cell = gridnet(v, w, hidden, cell)
        ground_truth = torch.cat([x, t], dim=1)
        loss += loss_fn(guess, ground_truth)

    print(loss)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
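
If it helps, here is a minimal sketch that isolates just the line where the process seems to die. The device setup and the iteration count are placeholders, not taken from my actual script:

import torch

# same kind of scalar-tensor creation that the training loop does;
# in the real script, the process dies the second time this line runs on the GPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

for i in range(5):
    loss = torch.tensor(0, dtype=torch.float, device=device)
    print(i, loss)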

Can you show us the error you are getting? I tend to just set loss = 0 and that works fine for me.
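
Roughly this pattern, reusing your names (gridnet, loss_fn, optimizer and the per-timestep lists are yours; this is only a sketch, I have not run it against your model):

hidden, cell = gridnet.init_hidden_cell(vs[0], ws[0])
loss = 0  # plain Python int; the first addition turns it into a tensor

for x, t, v, w in zip(xs, ts, vs, ws):
    guess, hidden, cell = gridnet(v, w, hidden, cell)
    ground_truth = torch.cat([x, t], dim=1)
    # out-of-place add keeps the autograd graph intact
    loss = loss + loss_fn(guess, ground_truth)

optimizer.zero_grad()
loss.backward()
optimizer.step()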

I don’t get an error. It just finishes before it’s supposed to.
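
In case it is a native crash rather than a Python exception, one generic way to surface a silent exit is the standard-library faulthandler. This is just a debugging sketch, not something that is in my script:

import faulthandler

# print a traceback if the interpreter dies from a native fault
# (segfault, abort, etc.) instead of exiting silently
faulthandler.enable()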

Hi, I have the same problem.
So have you solved it?

I’m not sure how I solved it. After a while my code just started working correctly. It’s possible the PyTorch 0.4.1 release fixed something for me.
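
For anyone else hitting this and wanting to check which release they are on, a quick check (nothing more than a version printout):

import torch

# print the installed PyTorch version and whether CUDA is visible
print(torch.__version__)
print(torch.cuda.is_available())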