How to implement dropout if I’m using LSTMCell instead of LSTM?
Let’s stick to the sine-wave example because my architecture is similar:
from __future__ import print_function
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt

class Sequence(nn.Module):
    def __init__(self):
        super(Sequence, self).__init__()
        self.lstm1 = nn.LSTMCell(1, 51)
        self.lstm2 = nn.LSTMCell(51, 51)
        self.linear = nn.Linear(51, 1)

    def forward(self, input, future=0):
        outputs = []
        h_t = torch.zeros(input.size(0), 51, dtype=torch.double)
        c_t = torch.zeros(input.size(0), 51, dtype=torch.double)
        # (rest of forward as in the sine-wave example)
If I try to add dropout by defining it next to the cell and applying it directly:
self.lstmCell_1 = nn.LSTMCell(self.input_features, self.hidden_features)
self.dropout = nn.Dropout(p=0.1, inplace=True)
it results in an error.
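A minimal sketch of what I am trying to achieve: dropout applied to the hidden state between the two cells. My understanding is that `inplace=True` is what breaks autograd here, so this version uses the default non-inplace dropout (hidden size 51 as above; the loop shape follows the sine-wave example):

```python
import torch
import torch.nn as nn

class Sequence(nn.Module):
    def __init__(self):
        super(Sequence, self).__init__()
        self.lstm1 = nn.LSTMCell(1, 51)
        self.lstm2 = nn.LSTMCell(51, 51)
        # non-inplace dropout: inplace=True would overwrite h_t, which
        # autograd may have saved for backward, hence the error
        self.dropout = nn.Dropout(p=0.1)
        self.linear = nn.Linear(51, 1)

    def forward(self, input, future=0):
        outputs = []
        h_t = torch.zeros(input.size(0), 51)
        c_t = torch.zeros(input.size(0), 51)
        h_t2 = torch.zeros(input.size(0), 51)
        c_t2 = torch.zeros(input.size(0), 51)
        # input: (batch, seq_len); split into (batch, 1) timesteps
        for input_t in input.split(1, dim=1):
            h_t, c_t = self.lstm1(input_t, (h_t, c_t))
            # dropout on the hidden state passed between the two cells
            h_t2, c_t2 = self.lstm2(self.dropout(h_t), (h_t2, c_t2))
            outputs.append(self.linear(h_t2))
        return torch.cat(outputs, dim=1)
```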
I don’t want to implement my own LSTMCell, and I don’t want to use LSTM either, because I need predictions further into the future, not just the single next value. That means I need to control the flow of data between the LSTMCell units, as in the sine-wave example.
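Concretely, the part I want to keep is the sine-wave example’s forward loop, where the last prediction is fed back in for `future` extra steps. Roughly (hidden size 51 as above; the class name is mine):

```python
import torch
import torch.nn as nn

class Predictor(nn.Module):
    def __init__(self, hidden=51):
        super(Predictor, self).__init__()
        self.lstm1 = nn.LSTMCell(1, hidden)
        self.lstm2 = nn.LSTMCell(hidden, hidden)
        self.linear = nn.Linear(hidden, 1)
        self.hidden = hidden

    def forward(self, input, future=0):
        outputs = []
        n = input.size(0)
        h_t = torch.zeros(n, self.hidden)
        c_t = torch.zeros(n, self.hidden)
        h_t2 = torch.zeros(n, self.hidden)
        c_t2 = torch.zeros(n, self.hidden)
        # teacher-forced pass over the observed sequence
        for input_t in input.split(1, dim=1):
            h_t, c_t = self.lstm1(input_t, (h_t, c_t))
            h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))
            output = self.linear(h_t2)
            outputs.append(output)
        # free-running pass: feed the last prediction back as input
        for _ in range(future):
            h_t, c_t = self.lstm1(output, (h_t, c_t))
            h_t2, c_t2 = self.lstm2(h_t, (h_t2, c_t2))
            output = self.linear(h_t2)
            outputs.append(output)
        return torch.cat(outputs, dim=1)
```

The second loop is why LSTMCell is used: each predicted value becomes the next input, which a single `nn.LSTM(input)` call over a fixed sequence does not do by itself.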
Or is it possible to build the same sine-wave predictor with plain LSTM, without managing the data flow through LSTMCell manually?
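From what I can tell, `nn.LSTM` can also be driven one timestep at a time by passing length-1 sequences and carrying the `(h, c)` state yourself, which would give the same control and the built-in inter-layer `dropout` argument. A sketch (the specific sizes are my assumption; `dropout` in `nn.LSTM` applies between layers, so it needs `num_layers > 1`):

```python
import torch
import torch.nn as nn

# two-layer LSTM; dropout=0.1 acts between layer 1 and layer 2
lstm = nn.LSTM(input_size=1, hidden_size=51, num_layers=2,
               dropout=0.1, batch_first=True)
linear = nn.Linear(51, 1)

x = torch.randn(4, 10, 1)           # (batch, seq, feature)
state = None                        # h0/c0 default to zeros
outputs = []
for t in range(x.size(1)):
    step = x[:, t:t + 1, :]         # a length-1 "sequence"
    out, state = lstm(step, state)  # carry (h, c) across steps
    outputs.append(linear(out))

# future predictions: feed the last output back as the next input
for _ in range(3):
    out, state = lstm(outputs[-1], state)
    outputs.append(linear(out))

pred = torch.cat(outputs, dim=1)    # (batch, seq + 3, 1)
```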