Hello,

I am a beginner at PyTorch programming, and I am trying to write RNN training code. I want to reach a training loss below 3.5, but my loss stays near 9. Could you help me solve this problem? I attach my code below. I appreciate any help.

```
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
    """
    Forward and backward propagation on the neural network

    :param rnn: The PyTorch Module that holds the neural network
    :param optimizer: The PyTorch optimizer for the neural network
    :param criterion: The PyTorch loss function
    :param inp: A batch of input to the neural network
    :param target: The target output for the batch of input
    :param hidden: The previous hidden state Tensor(s)
    :return: The loss and the latest hidden state Tensor
    """
    train_loss = 0.0
    step = 0
    print_every = 40

    for i in range(print_every):
        step += 1
        if train_on_gpu:
            inp, target = inp.cuda(), target.cuda()
        # detach the hidden state from its history so we don't
        # backpropagate through the entire training history
        hidden = tuple([each.data for each in hidden])
        rnn.zero_grad()
        output, hidden = rnn(inp, hidden)
        loss = criterion(output, target)
        loss.backward()
        # clip gradients to avoid the exploding-gradient problem
        nn.utils.clip_grad_norm_(rnn.parameters(), 5)
        optimizer.step()
        train_loss += loss.item()
        if step == print_every:
            return train_loss / print_every, hidden
```
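For context on the `hidden = tuple([each.data for each in hidden])` line: it detaches the LSTM state from the computation graph between batches. A minimal standalone sketch of that behavior (the toy `nn.LSTM` and all sizes below are illustrative, not from my project):

```python
import torch
import torch.nn as nn

# toy LSTM standing in for rnn (sizes are illustrative)
lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
x = torch.randn(4, 5, 8)  # (batch, seq_len, features)

# initial hidden state, analogous to rnn.init_hidden(batch_size)
hidden = (torch.zeros(1, 4, 16), torch.zeros(1, 4, 16))

out, hidden = lstm(x, hidden)

# detach the state between batches so backprop doesn't run
# through the whole training history
hidden = tuple(each.data for each in hidden)
assert all(not h.requires_grad for h in hidden)
```

Using `each.detach()` instead of `each.data` does the same thing and is the more commonly recommended form.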

```
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=1):
    batch_losses = []

    rnn.train()
    print("Training for %d epoch(s)..." % n_epochs)
    for epoch_i in range(1, n_epochs + 1):
        # initialize hidden state
        hidden = rnn.init_hidden(batch_size)
        for batch_i, (inputs, labels) in enumerate(train_loader, 1):
            # make sure you iterate over completely full batches only
            n_batches = len(train_loader.dataset) // batch_size
            if batch_i > n_batches:
                break
            # forward, back prop
            loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
            # record loss
            batch_losses.append(loss)
            # print loss stats
            if batch_i % show_every_n_batches == 0:
                print('Epoch: {:>4}/{:<4}  Loss: {}\n'.format(
                    epoch_i, n_epochs, np.average(batch_losses)))
                batch_losses = []
    # return the trained rnn
    return rnn
```
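The "completely full batches" check in the inner loop can be sanity-checked in isolation. A minimal sketch with toy data (dataset size and batch size below are made up for illustration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# 10 samples with batch_size 4 -> batches of 4, 4, and a partial 2
data = TensorDataset(torch.arange(10), torch.arange(10))
loader = DataLoader(data, batch_size=4)

batch_size = 4
n_batches = len(loader.dataset) // batch_size  # 2 full batches

seen = []
for batch_i, (x, y) in enumerate(loader, 1):
    if batch_i > n_batches:
        break  # skip the final partial batch
    seen.append(x.shape[0])

# only the two full batches were processed
assert seen == [4, 4]
```

Equivalently, `DataLoader(..., drop_last=True)` drops the trailing partial batch without the manual check.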