Custom augmentation for time series: significant increase in training time

Hello everyone,

I’m currently working on a project that involves training a neural network on time series data. To improve the network’s performance, I’m using custom data augmentation techniques that modify the time series in various ways.

While the augmentations have improved the network’s performance, they also cause a significant increase in training time. Is there a more efficient way to run this function on the GPU, or is there a better way to implement this augmentation altogether? Also, is there a way to use torchvision transforms on time series? I’d appreciate any advice or guidance on this.

I’ll share the simpler augmentation I implemented and how I’m applying it in the __getitem__ method of my custom dataset. With random_crop_resize below and a batch size of 256, my mean epoch time goes from 0.2 to 0.31 seconds.

Custom augmentation

import numpy as np
import torch
import torch.nn.functional as F

def random_crop_resize(o: torch.Tensor, size: int, scale: tuple = (0.1, 1.0), mode: str = 'linear'):
    seq_len = o.shape[-1]
    lambd = np.random.uniform(scale[0], scale[1])
    win_len = int(round(seq_len * lambd))
    if win_len == seq_len:
        if size == seq_len:
            return o
        _slice = slice(None)
    else:
        start = np.random.randint(0, seq_len - win_len)
        _slice = slice(start, start + win_len)
    return F.interpolate(o[..., _slice], size=size, mode=mode, align_corners=False)
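
Since F.interpolate already operates on batched (batch, channels, seq_len) tensors, the same function works unchanged on a whole batch, with one random window shared across it. A minimal self-contained sketch (random data and shapes are hypothetical, logic condensed from the function above):

```python
import numpy as np
import torch
import torch.nn.functional as F

def random_crop_resize(o, size, scale=(0.1, 1.0), mode='linear'):
    # Crop a random window along the last dim, then resize back to `size`.
    seq_len = o.shape[-1]
    win_len = int(round(seq_len * np.random.uniform(*scale)))
    if win_len >= seq_len:
        _slice = slice(None)
    else:
        start = np.random.randint(0, seq_len - win_len)
        _slice = slice(start, start + win_len)
    return F.interpolate(o[..., _slice], size=size, mode=mode, align_corners=False)

x = torch.randn(256, 3, 128)         # (batch, channels, seq_len)
y = random_crop_resize(x, size=128)  # one crop applied to the whole batch
print(tuple(y.shape))                # (256, 3, 128)
```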

Custom dataset __getitem__

def __getitem__(self, index):
    serie = self.raw[index]
    shape = serie.shape

    # Augmentation: add a batch dimension, crop-resize, then remove it
    aug = serie.view(1, shape[0], shape[1])
    aug = random_crop_resize(aug, size=shape[1], scale=(0.1, 1.0))
    aug = aug.view(shape[0], shape[1])

    return aug, self.bin_target[index], [index]
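
One alternative I’m considering, to avoid the per-sample overhead in __getitem__, is to drop the augmentation from the dataset and apply it once per batch after the data is on the GPU. A hedged, self-contained sketch (the batch shapes and the name batch_random_crop_resize are hypothetical; in a real loop the tensor would come from a DataLoader and be moved to CUDA first):

```python
import numpy as np
import torch
import torch.nn.functional as F

device = 'cuda' if torch.cuda.is_available() else 'cpu'

def batch_random_crop_resize(x, size, scale=(0.1, 1.0)):
    # Batch-level variant: one random window for the whole batch,
    # so only a single F.interpolate call runs per training step.
    seq_len = x.shape[-1]
    win_len = int(round(seq_len * np.random.uniform(*scale)))
    if win_len >= seq_len:
        window = x
    else:
        start = np.random.randint(0, seq_len - win_len)
        window = x[..., start:start + win_len]
    return F.interpolate(window, size=size, mode='linear', align_corners=False)

# Simulated batch standing in for one DataLoader iteration.
xb = torch.randn(256, 1, 100, device=device)
xb = batch_random_crop_resize(xb, size=100)
```

The trade-off is that all samples in a batch share the same crop window per step, but the interpolate runs on the GPU and only once per batch instead of 256 times on the CPU.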