Hello everyone,
I’m trying to figure out a way to add an implementation of Time2Vec (GitHub - ojus1/Time2Vec-PyTorch: Reproducing the paper: "Time2Vec: Learning a Vector Representation of Time" - https://arxiv.org/pdf/1907.05321.pdf) as a layer in front of the Temporal Fusion Transformer (pytorch_forecasting.models.temporal_fusion_transformer — pytorch-forecasting documentation). My idea is to take the time information of my time series, pass it through the Time2Vec network, and feed its output as a feature into the Temporal Fusion Transformer (TFT). But I also want everything to be part of a single architecture.
I’m honestly very new to PyTorch. It seems that pytorch-forecasting’s implementation of the TFT uses a LightningModule, but I have no idea how to integrate Time2Vec into it. If anyone could help me with this, it’d be much appreciated.
You could create your own architecture that uses both classes as layers.
Here is a really dumb example. You would only need to define the `MyArchitecture` class, passing all of the relevant parameters to create both your T2V and TFT modules and making sure that they are compatible in the middle. I don't think there is a problem in defining `MyArchitecture` either as an `nn.Module` or as a `pl.LightningModule`.
```python
import torch
import torch.nn.functional as F  # needed for F.cross_entropy below
import pytorch_lightning as pl


# Dummy stand-in for the real Time2Vec network
class Time2Vec(torch.nn.Module):
    def __init__(self, in_t2v=10, out_t2v=10):
        super().__init__()
        self.fc = torch.nn.Linear(in_t2v, out_t2v)

    def forward(self, x):
        return self.fc(x)


# Dummy stand-in for the real Temporal Fusion Transformer
class TemporalFusionTransformer(pl.LightningModule):
    def __init__(self, in_tft=10, out_tft=2):
        super().__init__()
        self.fc = torch.nn.Linear(in_tft, out_tft)

    def forward(self, x):
        return self.fc(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = F.cross_entropy(y_hat, y)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.02)


# Combined architecture: the Time2Vec output feeds the TFT
class MyArchitecture(pl.LightningModule):
    def __init__(self, in_f=10, middle=10, out_f=10):
        super().__init__()
        self.t2v = Time2Vec(in_f, middle)
        self.tft = TemporalFusionTransformer(middle, out_f)

    def forward(self, x):
        return self.tft(self.t2v(x))

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = F.cross_entropy(y_hat, y)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.02)


dim = 10
my_imp = MyArchitecture(in_f=dim, middle=30, out_f=2)
x = torch.rand(20, dim)  # batch of 20 samples with `dim` features
print(my_imp(x))
```
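Note that the `Time2Vec` above is just a placeholder `Linear`. If you want that layer to actually follow the paper, a minimal sketch could look like the one below (the parameter names `w0`, `b0`, `W`, `B` are my own choice; the paper defines t2v(τ)[i] as ω_i·τ + φ_i for i = 0 and F(ω_i·τ + φ_i) otherwise, with F a periodic activation such as sin):

```python
import torch


class Time2Vec(torch.nn.Module):
    """Sketch of the Time2Vec layer from https://arxiv.org/pdf/1907.05321.pdf.

    Produces one linear (non-periodic) component and
    (out_features - 1) periodic components.
    """

    def __init__(self, in_features, out_features):
        super().__init__()
        # Linear component: w0 * tau + b0
        self.w0 = torch.nn.Parameter(torch.randn(in_features, 1))
        self.b0 = torch.nn.Parameter(torch.randn(1))
        # Periodic components: sin(W * tau + B)
        self.W = torch.nn.Parameter(torch.randn(in_features, out_features - 1))
        self.B = torch.nn.Parameter(torch.randn(out_features - 1))

    def forward(self, tau):
        # tau: (batch, in_features) time input
        linear = tau @ self.w0 + self.b0             # (batch, 1)
        periodic = torch.sin(tau @ self.W + self.B)  # (batch, out_features - 1)
        return torch.cat([linear, periodic], dim=-1)
```

You could then swap this in for the stand-in in `MyArchitecture`, making sure its `out_features` matches the input size expected by the TFT module.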