Hello everyone.
I come to ask for help with a project I’m doing. I’m trying to train a neural network that needs to minimize energy consumption and I decided to start in small blocks, right now I want it to keep the temperature constant inside a room.
Since I did not find a dataset that was adequate for my needs (and also because my tutor ask me to do so), I decided to take meteorological data from several years and simulate the passage of time. That’s because the weather outside have an higher influence on the inside of my house.
So, I’m trying to give 10 features as input to my neural network:
- external temperature,
- temperature forecast in 15 minutes,
- month, day and time (all 3 under sin and cos transformations, so 2 features each),
- the power that the boiler is applying at the moment
- and the internal temperature.
This is where the catch falls: at each i-th step, the feature representing the power is predicted via the 10 features at step i-1 and the temperature is then calculated with the newly predicted power.
I’ll try to explain myself better:
Time 0: 8 feature from weather, power of heat system (0W), internal temperature (10°C)
Time 1: 8 feature from weather, power predicted from time 0, int temp computed based on predicted power from time 0
…
Time N: 8 feature from weather, power predicted from time N-1, int temp computed based on predicted power from time N-1
So, It’s been days but I can’t come up with a stable solution that doesn’t brake the back propagation. When I have to feed the NN, I load the data in a tensor of shape (16, 96, 10), where 16 can be seen as days, 96 as 15 minutes interval (24 hours * 4) and 10 the features. Since I don’t have power or internal temp, I just use 0 as a value for the features.
And here is the code that summary the procedure but it’s wrong. Someone could help me? I tryied to find everything I can but nothing suit.I’m starting to think I’m really incompetent.
class LSTMModel(nn.Module):
def __init__(self, input_size, hidden_size, num_layers=2, output_size=1):
super(LSTMModel, self).__init__()
self.hidden_size = hidden_size
self.num_layers = num_layers
# LSTM Layer
self.lstm = nn.LSTM(input_size=10, hidden_size=96, num_layers, batch_first=True)
self.fc = nn.Linear(hidden_size, output_size)
self.relu = nn.ReLU()
# Fully connected layer
def forward(self, x, room):
# Forward pass through LSTM
hn = torch.zeros(self.num_layers, batch_size, self.hidden_size).to(device)
cn = torch.zeros(self.num_layers, batch_size, self.hidden_size).to(device)
for i in range(1,window_size):
lstm_input = x[:, i, :].unsqueeze(1)
lstm_out, (hn, cn) = self.lstm(lstm_input, (hn, cn))
lstm_out = self.relu(lstm_out)
output = self.fc(lstm_out)
output = output.squeeze(1).squeeze(1)
x[:, i, 9] = output
x[:,i,8] = room.temp_update(x[:,i,1],x[:,i,9],x[:,i-1,8])
return x[:,:,9]
And room is a class defined for using tensor operations:
class Room(nn.Module):
def __init__(self, x, y, z):
super().__init__()
self.volume = torch.tensor(x * y * z)
self.AWall1 = torch.tensor(x * z)
self.AWall2 = torch.tensor(y * z)
self.ABase = torch.tensor(x * y)
self.ATot = torch.tensor(2*(self.AWall1 + self.AWall2) + 2*self.ABase)
def temp_update(self, ext_temp, heat_power, int_temp):
C = self.volume * 1.2 * 1005
U = (0.5*(self.AWall1*2 + self.AWall2*2) + 0.6*self.ABase*2) / self.ATot
Q_loss = U * self.ATot * (int_temp - ext_temp)
delta_Q = (heat_power - Q_loss) * 900
return int_temp + (delta_Q / C)
p.s. I know that each day will begin with a power at 0 and temperature at 10°C, but for now is ok.