Hi! When I use a pre_transform on my dataset, the resulting graphs have a different feature dimension than when the same transform is applied to a graph directly (i.e. outside the dataset's pre_transform). The same happens with batches: the pre-transformed batches have an extra feature dimension, while batches transformed on the fly by the DataLoader do not have this extra dimension.
import torch
from torch_geometric.data import Data
from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DataLoader

def some_transform(graph: Data) -> Data:
    # Attach a (1, num_node_features) tensor of ones to each graph.
    feature_dimension = graph.x.size(1)
    graph.extra_feature = torch.ones((1, feature_dimension))
    return graph

dataset = TUDataset('./datasets/TUDataset/PROTEINS', name='PROTEINS',
                    pre_transform=some_transform)
dataset = dataset.shuffle()
loader = DataLoader(dataset, batch_size=128, shuffle=True)

for batch in loader:
    print(batch.x.shape)
    print(batch.extra_feature.shape)
>>> torch.Size([4313, 3])
>>> torch.Size([128, 4])
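For context on what I expected: as far as I understand, the DataLoader collates a per-graph attribute of shape (1, d) by concatenating along dim 0, so a batch of 128 graphs should yield a (128, d) tensor with the same d as each graph's x. A minimal sketch of that expectation (plain torch, not the actual PyG collation code):

```python
import torch

# Each graph carries a (1, 3) extra_feature, matching x's 3 node features.
per_graph_features = [torch.ones((1, 3)) for _ in range(128)]

# Batching concatenates along dim 0: 128 graphs -> shape (128, 3).
batched = torch.cat(per_graph_features, dim=0)
print(batched.shape)  # torch.Size([128, 3])
```

So I expected extra_feature to come out as [128, 3], matching the 3 columns of batch.x, yet the pre-transformed dataset gives [128, 4].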
Why could this be the case? Thanks in advance!