Dataset Structure for Graph Convolutional Networks

I am attempting to construct a GCN for my graph data, using the code found here as a template.

This code constructs the network with two GCN layers as follows:

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class Net(torch.nn.Module):
    def __init__(self, dataset):
        super(Net, self).__init__()
        self.conv1 = GCNConv(dataset.num_node_features, 16)
        self.conv2 = GCNConv(16, dataset.num_classes)

    def forward(self, data):
        # HERE: all node features are read from the single attribute data.x
        x, edge_index = data.x, data.edge_index

        x = self.conv1(x, edge_index)
        x = F.relu(x)
        x = F.dropout(x, training=self.training)
        x = self.conv2(x, edge_index)

        return F.log_softmax(x, dim=1)
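For context, here is a minimal sketch of how such a model is typically driven in PyG tutorials, assuming the Planetoid Cora dataset (an assumption on my part; the guide's dataset may differ):

import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid

# Assumption: a Planetoid-style dataset whose single graph carries
# x, edge_index, y, and a train_mask attribute
dataset = Planetoid(root='/tmp/Cora', name='Cora')
data = dataset[0]

model = Net(dataset)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

model.train()
for epoch in range(200):
    optimizer.zero_grad()
    out = model(data)  # forward() unpacks data.x and data.edge_index
    loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()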

The dataset this particular guide uses has all node features stored in a single m x n tensor called x, which is an attribute of the dataset (dataset.x). I presume this corresponds to m nodes and n features per node.
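For instance, if the guide uses the Cora citation dataset (again, an assumption on my part), the shapes look like this:

from torch_geometric.datasets import Planetoid

dataset = Planetoid(root='/tmp/Cora', name='Cora')
data = dataset[0]
print(data.x.shape)  # torch.Size([2708, 1433]): 2708 nodes, 1433 features per node
print(data.y.shape)  # torch.Size([2708]): one class label per node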

However, for my dataset, I have stored my features as separate tensor attributes rather than aggregating them into a single attribute x. So, e.g., dataset.get_all_tensor_attrs() will return all my features f1, f2, f3, ... as well as my response y, each as its own separate attribute. Is this a valid way to store the features in my graph dataset? Or should I combine them all under one attribute (say x) and continue on with my journey?
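For concreteness, here is a sketch of the combined layout I am considering, assuming each of f1, f2, f3 is a 1-D tensor with one value per node (the names, shapes, and random values are purely illustrative):

import torch
from torch_geometric.data import Data

num_nodes = 100  # illustrative size

# Hypothetical per-node feature tensors, each of shape [num_nodes]
f1 = torch.rand(num_nodes)
f2 = torch.rand(num_nodes)
f3 = torch.rand(num_nodes)
y = torch.randint(0, 2, (num_nodes,))  # illustrative binary response

# Illustrative random edges in COO format, shape [2, num_edges]
edge_index = torch.randint(0, num_nodes, (2, 300))

# Column-stack the features into the conventional [num_nodes, num_features] x
x = torch.stack([f1, f2, f3], dim=1)
data = Data(x=x, edge_index=edge_index, y=y)
print(data)  # Data(x=[100, 3], edge_index=[2, 300], y=[100])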