Since you already had success with generating in the discrete domain, I’ve a question for you: How do you evaluate the quality of the generated signals?
That’s a good question, and one I don’t have a final solution for yet. Right now I remap the predicted class to the midpoint of its (continuous) bin boundaries and use that as the prediction, then simply measure the error against the test set. I suspect that using the actual average of the values in each bin might be more accurate, but I haven’t tried it yet. I’m curious to see how you do it too.
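As a rough sketch of what I mean (the bin edges, predicted classes, and test targets here are stand-ins, not my actual data):

import numpy as np

bin_edges = np.linspace(-1.0, 1.0, 65)            # 64 bins over [-1, 1]
midpoints = (bin_edges[:-1] + bin_edges[1:]) / 2  # one midpoint per bin

pred_class = np.random.randint(0, 64, size=1000)  # stand-in for the model's predicted classes
y_true = np.random.uniform(-1.0, 1.0, size=1000)  # stand-in for continuous test targets

y_pred = midpoints[pred_class]  # class index -> continuous value
mae = np.abs(y_pred - y_true).mean()
print('mean absolute error: %.4f' % mae)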
In “A Clockwork RNN” they use LSTMs for memorizing a time series. They do not discretize the output; they only scale it. But they mention something about “[…] initialize the bias of the forget gates to a high value (5 in this case) to encourage the long-term memory”.
Just as a pointer. I’m not doing anything with time series, but maybe it helps.
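If anyone wants to try that trick in PyTorch, here is a minimal sketch (the sizes are made up; the gate layout [input | forget | cell | output] within the bias vectors is PyTorch’s documented order for nn.LSTM):

import torch.nn as nn

hidden_size = 51
lstm = nn.LSTM(input_size=1, hidden_size=hidden_size)

# Each bias vector has length 4 * hidden_size, so the forget-gate
# slice is the second quarter. Note bias_ih and bias_hh are summed
# at the gate, so filling both with 5 gives an effective bias of 10;
# fill only one of them if you want exactly 5.
for name, param in lstm.named_parameters():
    if 'bias' in name:
        param.data[hidden_size:2 * hidden_size].fill_(5.0)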
Thanks for your great post @osm3000.
I’m running into a similar issue where I need to learn multiple random variables which are not independent.
Maybe you could first learn P(R1 | h), then sample R1 and learn P(R2 | R1, h)?
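Something like this, as a rough sketch (all names are made up; h is whatever hidden state your network produces, and I’ve assumed both variables are discretized into K classes):

import torch
import torch.nn as nn

K = 64            # classes per variable (made up)
hidden_dim = 128  # size of the shared hidden state (made up)

head_r1 = nn.Linear(hidden_dim, K)      # models P(R1 | h)
head_r2 = nn.Linear(hidden_dim + K, K)  # models P(R2 | R1, h)

h = torch.randn(1, hidden_dim)  # stand-in for the network's hidden state

# Sample R1 from its conditional, then feed it (one-hot) into the R2 head.
p_r1 = torch.softmax(head_r1(h), dim=-1)
r1 = torch.multinomial(p_r1, 1)
r1_onehot = torch.zeros(1, K).scatter_(1, r1, 1.0)
p_r2 = torch.softmax(head_r2(torch.cat([h, r1_onehot], dim=-1)), dim=-1)
r2 = torch.multinomial(p_r2, 1)

At training time you would feed the ground-truth R1 into the second head (teacher forcing) instead of a sample.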
Why are the input and target built on opposite indexes? I mean the -1 and 1 in the following lines:
input = Variable(torch.from_numpy(data[3:, :-1]), requires_grad=False)
target = Variable(torch.from_numpy(data[3:, 1:]), requires_grad=False)
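To make the question concrete, here is a toy version of what the two slices produce (made-up data, ignoring the row slicing):

import numpy as np

data = np.arange(8).reshape(1, 8)  # one toy sequence: 0 1 2 ... 7
print(data[:, :-1])  # input:  [[0 1 2 3 4 5 6]]
print(data[:, 1:])   # target: [[1 2 3 4 5 6 7]]
# the target is the input shifted one step ahead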
I’m trying to modify the example so the result is a prediction not of the future sine values but of the angle in radians that produced each sine value. I’m doing this as an exercise. The point is that sine values are ambiguous, since the same value can originate from different angles in different quadrants; hence the need to know the previous sine values in order to predict the future ones.
I thought it could be a good time series classification example. I save the original angle values in the generator script:
import numpy as np
import torch

np.random.seed(2)
T = 20
L = 1000
N = 100
x = np.empty((N, L), 'int64')
x[:] = np.array(range(L)) + np.random.randint(-4 * T, 4 * T, N).reshape(N, 1)
vals = x / 1.0 / T                      # the angles, in radians (unwrapped)
data = np.sin(vals).astype('float64')
# save the labels (the original angles) to be used as target values later on
torch.save(vals, open('labels.pt', 'wb'))
torch.save(data, open('traindata.pt', 'wb'))
…and then I use them as the target in the training script, after normalizing them to values in 0 - 2*pi:
# these are methods on my model class; the script also imports math and numpy as np
import math

def deg2norm(self, deg):
    """Wrap the input value into 0 - 360 deg, i.e. the range [0, 2*pi)."""
    # return ((deg % (2 * math.pi)) - math.pi) / math.pi
    return deg % (2 * math.pi)

def createTarget(self, data):
    """
    Create a vector of the expected results, whose values represent
    0-360 degrees. Due to ML constraints the values need to be in
    the range of -1..1.
    """
    # Push all the values into the positive spectrum of 0 - 360
    # (from an angle perspective)
    delta = (round(abs(np.min(data[:]))) + 1) * (2 * math.pi)
    val = data + delta
    return self.deg2norm(val)
labels = torch.load('labels.pt')         # the saved angle values
labelsTarget = seq.createTarget(labels)  # wrapped into [0, 2*pi)
target = Variable(torch.from_numpy(labelsTarget[3:]), requires_grad=False)
Alas, the net’s predictions fail, although the loss values look OK.
Any ideas?