RNN module in PyTorch

Hello all,

I am pretty new to PyTorch, and I am trying to implement prediction code, starting with just an Elman network (an nn.RNN module with a single layer).

With the code I have written (given below), my loss (MSE) does not decrease; it keeps returning seemingly random values for all epochs.

x is the input, a pattern given as 2-D coordinates, and y is the target output I would like to predict: the same sequence shifted by one time step.
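To illustrate how I build the target, here is a small standalone example of the shift (the points are made up, just for illustration):

import numpy as np

pts = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])  # four made-up 2-D points
tgt = np.roll(pts, 1, axis=0)  # tgt[i] == pts[i - 1]; tgt[0] wraps to pts[-1]
tgt[0] = tgt[1]  # overwrite the wrapped-around first row, as in my code below
print(tgt)  # [[0. 0.] [0. 0.] [1. 0.] [1. 1.]]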

import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim

# Just parsing the input file: each row holds one 2-D coordinate
xa = pd.read_csv('two_circ.csv')
xb = np.zeros((len(xa), 2))
for i in range(len(xa)):
    xb[i] = xa.values[i]  # Input layer
x = torch.from_numpy(xb)[:, None, :].float()  # Shape: (seq_len, 1, 2)

# Build the target matrix for prediction: the input shifted by one step
ya = np.roll(xb, 1, axis=0)
ya[0] = ya[1]  # The first row wraps around after the roll, so overwrite it
y = torch.from_numpy(ya)[:, None, :].float()

n_inputs, n_neurons, n_outputs, batch_size, n_epochs = 2, 5, 2, len(xb), 300

rnn = nn.RNN(n_inputs, n_neurons)  # Single-layer Elman RNN (tanh nonlinearity by default)

loss = nn.MSELoss()  # Selected loss function
optimizer = optim.Adam(rnn.parameters(), lr=1e-1)

for i in range(n_epochs):
    h0 = torch.randn(1, 1, n_neurons)  # Fresh random hidden state every epoch

    xnew = x[i][:, None, :]  # Feed the i-th point as a length-1 sequence
    hidden, h0 = rnn(xnew, h0)  # nn.RNN returns (output, h_n)

    hid = torch.squeeze(hidden, 1)  # Remove the (second) batch dimension
    o = nn.Linear(n_neurons, n_outputs)  # Apply a linear layer to the hidden output
    out = o(hid)  # Gives a tensor with 2 outputs
    a = nn.Tanh()
    output = a(out)

    error = loss(output, y[i])
    optimizer.zero_grad()
    error.backward(retain_graph=True)
    optimizer.step()

    print('Loss for epoch ' + str(i) + ' is ' + str(error.item()))

I am specifically confused about the whole deal with Variable and autograd (which, if I understand correctly, is not needed with the RNN module).
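For example, older tutorials wrap tensors like this (a minimal sketch of the old style, just to show what I mean; as far as I know, Variable has been deprecated since PyTorch 0.4 and plain tensors track gradients themselves):

import torch
from torch.autograd import Variable

# Old pre-0.4 style I keep seeing in tutorials:
v = Variable(torch.randn(3, 2), requires_grad=True)

# Modern style: a plain tensor tracks gradients directly
t = torch.randn(3, 2, requires_grad=True)
(t * 2).sum().backward()
print(t.grad)  # Gradients are accumulated on the tensor itself

Do I need to wrap anything in Variable for nn.RNN, or is passing plain tensors, as in my code above, fine?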

Thank you in advance