Greetings,
My data consists of time-series samples with 100 steps, each containing 2 features. In other words, my data is shaped as (samples, steps, features).
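For concreteness, dummy data of this shape can be generated as below (the sample count of 500 is made up for illustration; 36 classes as in the model further down):

```python
import numpy as np

# Hypothetical sizes: 500 samples, 100 time steps, 2 features per step
data = np.random.randn(500, 100, 2).astype(np.float32)
labels = np.random.randint(0, 36, size=500)  # 36 classes

print(data.shape)    # (500, 100, 2)
print(labels.shape)  # (500,)
```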
The model I’m currently implementing works in TensorFlow, but I’m having trouble properly implementing it in PyTorch.
class KnownDetector(Model):
    def __init__(self):
        super(KnownDetector, self).__init__()
        self.TCP = tf.keras.Sequential([
            layers.Conv1D(filters=32, kernel_size=3, activation="relu", input_shape=(100, 2)),  # 100 packets, 2 features
            layers.MaxPool1D(pool_size=3, padding='same'),
            layers.Conv1D(filters=32, kernel_size=3, activation="relu"),
            layers.MaxPool1D(pool_size=3, padding='same'),
            layers.Conv1D(filters=32, kernel_size=3, activation="relu"),
            layers.Flatten(),
            layers.Dense(128),
            layers.Dense(num_classes, activation='softmax')  # num_classes = 36 in this example
        ])

    def call(self, x):
        return self.TCP(x)

fx = KnownDetector()
fx.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(), optimizer='adam', metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
fx.fit(train_data, train_labels, epochs=30, validation_data=(val_data, val_labels))
My understanding is that the input should be reshaped from (steps, input_dim) to (input_dim, steps).
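That is, PyTorch's Conv1d treats the feature dimension as channels and expects input shaped (batch, channels, steps), so the last two axes need to be swapped; a minimal sketch:

```python
import torch

x = torch.randn(4, 100, 2)  # (batch, steps, features), the TF layout
x = x.permute(0, 2, 1)      # -> (batch, features, steps) for Conv1d
print(x.shape)              # torch.Size([4, 2, 100])
```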
PyTorch equivalent:
class ModelKnown(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.TCP = torch.nn.Sequential(
            torch.nn.Conv1d(in_channels=2, out_channels=32, kernel_size=3),
            torch.nn.ReLU(),
            torch.nn.MaxPool1d(kernel_size=3),
            torch.nn.Conv1d(in_channels=32, out_channels=32, kernel_size=3),
            torch.nn.ReLU(),
            torch.nn.MaxPool1d(kernel_size=3),
            torch.nn.Conv1d(in_channels=32, out_channels=32, kernel_size=3),
            torch.nn.Flatten(),
            torch.nn.Linear(in_features=256, out_features=128),
            torch.nn.Linear(in_features=128, out_features=36),
            torch.nn.Softmax(dim=1),
        )

    def forward(self, x):
        return self.TCP(x)
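To sanity-check the in_features=256 of the first Linear layer, the conv/pool stack can be probed with a dummy batch (assuming the input has already been permuted to (batch, 2, 100)):

```python
import torch

# Same conv/pool stack as the model above, without the classifier head
stack = torch.nn.Sequential(
    torch.nn.Conv1d(2, 32, kernel_size=3),
    torch.nn.ReLU(),
    torch.nn.MaxPool1d(3),
    torch.nn.Conv1d(32, 32, kernel_size=3),
    torch.nn.ReLU(),
    torch.nn.MaxPool1d(3),
    torch.nn.Conv1d(32, 32, kernel_size=3),
    torch.nn.Flatten(),
)

out = stack(torch.randn(4, 2, 100))
# Steps shrink 100 -> 98 -> 32 -> 30 -> 10 -> 8; 8 steps * 32 channels = 256
print(out.shape)  # torch.Size([4, 256])
```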
PyTorch doesn’t have a compile/fit API the way TF does, so I’ve been doing my best to follow the PyTorch documentation:
x = torch.from_numpy(data)
y = torch.from_numpy(labels.to_numpy())
x.requires_grad = True

# Construct our model by instantiating the class defined above
model = ModelKnown()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-6)

for t in range(20):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = criterion(y_pred, y)
    print(t, loss.item())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
This outputs:
0 3.5832407474517822
1 3.5832395553588867
... # and so on
- What am I doing incorrectly here?
- In TensorFlow, I pass validation data to fit for evaluation; is there an equivalent way to do this in PyTorch?
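For reference, this is a sketch of what I imagine an evaluation pass might look like (the tiny model and the val_x/val_y tensors here are placeholders, not my real data):

```python
import torch

# Placeholder model and validation tensors, for illustration only
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(200, 36))
val_x = torch.randn(10, 100, 2)
val_y = torch.randint(0, 36, (10,))

model.eval()           # switch layers like dropout/batch-norm to eval mode
with torch.no_grad():  # no gradients needed during evaluation
    logits = model(val_x)
    preds = logits.argmax(dim=1)
    accuracy = (preds == val_y).float().mean().item()

print(accuracy)
```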
Thanks in advance for any help!