Multiple (numeric) Inputs in Neural Network for Classification

Hi!

I have been struggling with this code for a couple of days. I couldn’t find many similar posts, but the ones I did find contributed to the code below.

I have 3 inputs that are three independent signals from a sensor. Each row represents a signal and the columns are the values of that signal. In another file I have the target, which is a column vector of 0s and 1s. All 4 files are CSVs.

I am having trouble passing three inputs to the network. From what I’ve gathered, the inputs should be passed to the forward method separately and then concatenated there. The code below, as is, gives this error:

mat1 and mat2 shapes cannot be multiplied (1080x20000 and 3x1)

import torch
import torch.nn as nn
import torch.optim as optim
import torch.autograd as autograd 
import torch.nn.functional as F
#from torch.autograd import Variable
import pandas as pd


# Import Data
Input1 = pd.read_csv(r'....')
Input2 = pd.read_csv(r'....')
Input3 = pd.read_csv(r'....')
Target = pd.read_csv(r'...')

# Convert to Tensor
Input1_tensor = torch.tensor(Input1.to_numpy())
Input2_tensor = torch.tensor(Input2.to_numpy())
Input3_tensor = torch.tensor(Input3.to_numpy())
Target_tensor = torch.tensor(Target.to_numpy())

# Transpose to have signal as columns instead of rows
input1 = torch.transpose(Input1_tensor, 0, 1)
input2 = torch.transpose(Input2_tensor, 0, 1)
input3 = torch.transpose(Input3_tensor, 0, 1)
y = torch.transpose(Target_tensor, 0, 1)

# Define the model
class Net(nn.Module):
    def __init__(self, num_inputs=3, num_outputs=1, hidden_dim=2):
        # Initialize super class
        super(Net, self).__init__()
        
        # Add hidden layer 
        self.layer1 = nn.Linear(num_inputs, hidden_dim)
        # Activation
        self.sigmoid = torch.nn.Sigmoid()
        # Add output layer
        self.layer2 = nn.Linear(hidden_dim, num_outputs)
        # Activation
        self.sigmoid = torch.nn.Sigmoid()
        

    def forward(self, x1, x2, x3):
        # implement the forward pass
        #x = F.relu(self.layer1(x))
        #x = F.sigmoid(self.layer2(x))
        
        
        in1 = self.layer1(x1)
        in2 = self.layer1(x2)
        in3 = self.layer1(x3)
                      
        xyz = torch.cat((in1,in2,in3),1)

        return xyz


# Network parameters
num_inputs = 3
num_hidden_layer_nodes = 100
num_outputs = 1

# Training parameters
num_epochs = 100 

# Construct our model by instantiating the class defined above
model = Net(num_inputs, num_hidden_layer_nodes, num_outputs)

# Define loss function
loss_function = nn.MSELoss(reduction='sum')

# Define optimizer
optimizer = optim.SGD(model.parameters(), lr=1e-4)

for t in range(num_epochs):

    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(input1, input2, input3)

    # Compute and print loss
    loss = loss_function(y_pred, y)
    print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()

    # Calculate gradient using backward pass
    loss.backward()

    # Update model parameters (weights)
    optimizer.step()

I would greatly appreciate feedback! (On this, but anything else related to the code is welcome too; I’m new!)

Based on the error message, it seems one of the input tensors has the shape [batch_size=1080, nb_features=20000], while self.layer1 expects 3 input features.
Assuming all inputs have the same number of features, you could set num_inputs to 20000 and it should work.
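
For illustration, here is a minimal sketch of the shape rule (the sizes are taken from your error message):

import torch
import torch.nn as nn

x = torch.randn(1080, 20000)  # [batch_size, nb_features], as in your error message

# a layer expecting 3 input features raises the same shape mismatch:
# nn.Linear(3, 1)(x)  # RuntimeError: mat1 and mat2 shapes cannot be multiplied (1080x20000 and 3x1)

# a layer expecting 20000 input features works:
layer = nn.Linear(20000, 100)
print(layer(x).shape)  # torch.Size([1080, 100])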

Thank you for the reply ptrblck.

I do want it to have only 3 inputs (each input being a signal that is 1080 x 1). Is the way I have formatted the forward pass wrong?

EDIT: After changing num_inputs to 20000, I get the following:
RuntimeError: The size of tensor a (3) must match the size of tensor b (20000) at non-singleton dimension 1

It happens at this line:

loss = loss_function(y_pred, y)

where y_pred is a tensor of shape [1080, 3] and y is a tensor of shape [1, 20000]. Instead of being [1080, 3], y_pred should be a scalar value of 0 or 1 that compares to the respective index of y’s columns…

What am I doing wrong :frowning: ?

I would need to know more about the shapes and what they represent.
E.g. y having the shape [1, 20000] would mean that you are using a single sample (dim0 of y) while the prediction has a batch size of 1080. So could you explain what the desired batch size is, what the input tensor shapes are, and why you are transposing them, please?

Sure, the data is metrology measurements: height values from a scanned surface. Each row (20,000 samples in total) has 1,080 height values. There are 3 files from 3 different sensors, and there are defects that occur in all three sensors at the same time (in the same row index).

For example, if row 450 of Data 1 has a defect, then Data 2 and Data 3 will likely have a defect in that same row. If only Data 1 shows a defect and Data 2 and 3 do not, then it is not counted as a defect.

The desired output for the 3 inputs should be a binary 0 or 1, or a probability that there is a defect, to compare against the target y for the respective row.

I transposed because I was under the impression that the input should be a column vector. Is that wrong?

I got it to work in MATLAB (hopefully that’s not too frowned upon); it looked as follows:

[Figure: MATLAB network diagram] 3 inputs, 2 hidden layers, 1024 neurons, cross-entropy as the performance function, tangent sigmoid for the hidden layers, and logsig for the output layer. This is my objective…

The expected input shape for linear layers is [batch_size, *, nb_features] where * denotes additional dimensions (which we will skip for now as I don’t think it would fit your use case).
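
A quick check of this rule (the numbers here are arbitrary):

import torch
import torch.nn as nn

fc = nn.Linear(10, 5)      # expects 10 features in the last dimension
x = torch.randn(4, 10)     # [batch_size=4, nb_features=10]
print(fc(x).shape)         # torch.Size([4, 5])

x = torch.randn(4, 7, 10)  # with an additional * dimension
print(fc(x).shape)         # torch.Size([4, 7, 5])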

No, don’t worry. :wink: Coming from the signal processing domain, I’ve spent some time with MATLAB.

I’m unsure how to interpret the first hidden layer, but I assume it concatenates the inputs in the feature dimension first and applies the weight matrix afterwards. Otherwise I don’t know how the 1024 output features would be generated, and I would need more information about your MATLAB model (I’ve hardly used the Neural Network Toolbox and switched to Python once I wanted to use it more :wink: ).
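
In case it helps with the intuition: a single linear layer applied to the concatenated inputs is mathematically the same as one smaller linear layer per sensor with the three outputs summed, so no information is lost by concatenating. A quick sanity check with arbitrary sizes:

import torch
import torch.nn as nn

x1, x2, x3 = torch.randn(2, 4), torch.randn(2, 4), torch.randn(2, 4)

fc = nn.Linear(3 * 4, 6)
out_cat = fc(torch.cat((x1, x2, x3), dim=1))

# split the weight matrix into its three per-input chunks
w1, w2, w3 = fc.weight.split(4, dim=1)
out_sum = x1 @ w1.t() + x2 @ w2.t() + x3 @ w3.t() + fc.bias

print(torch.allclose(out_cat, out_sum))  # True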

If I understand the figure correctly, then something like this should work:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(1080*3, 1024)
        self.fc2 = nn.Linear(1024, 125)
        self.classifier = nn.Linear(125, 1)

    def forward(self, x1, x2, x3):
        # concatenate in feature dimension
        x = torch.cat((x1, x2, x3), dim=1) # x will have the shape [batch_size, 1080*3]
        x = torch.sigmoid(self.fc1(x))
        x = torch.sigmoid(self.fc2(x))
        x = self.classifier(x) # return logits
        return x


device = 'cuda'
x1, x2, x3 = torch.randn(100, 1080), torch.randn(100, 1080), torch.randn(100, 1080)
targets = torch.randint(0, 2, (100, 1)).float()
dataset = torch.utils.data.TensorDataset(x1, x2, x3, targets)
loader = torch.utils.data.DataLoader(dataset, batch_size=10, shuffle=True)
    
model = Net().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

# train
for epoch in range(200):
    for data1, data2, data3, target in loader:
        data1 = data1.to(device)
        data2 = data2.to(device) 
        data3 = data3.to(device) 
        target = target.to(device)
        optimizer.zero_grad()
        output = model(data1, data2, data3)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
    print('epoch {}, loss {:.6f}'.format(epoch, loss.item()))
    
# epoch 199, loss 0.000182

# get predictions
model.eval() # not needed here, but good practice 
output = model(data1, data2, data3) # use your validation dataset in your code!

# apply threshold on logits 
preds = (output > 0.0).float() 
# or use a probability threshold
# preds = (torch.sigmoid(output) > 0.5).float()

# model overfits perfectly on the random data
print(preds == target)
# tensor([[True],
#         [True],
#         [True],
#         [True],
#         [True],
#         [True],
#         [True],
#         [True],
#         [True],
#         [True]], device='cuda:0')
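
To run this on your real data, something along these lines should work for building the dataset (the paths are the placeholders from your post, and I’m assuming each sensor CSV stores the signals as rows, i.e. [20000, 1080], so no transpose is needed; the cast to float32 matches the model’s parameters, and TensorDataset keeps the three sensors and the target aligned by row, so shuffling is safe):

import pandas as pd
import torch

# each sensor CSV: 20000 rows (samples) x 1080 columns (height values)
x1 = torch.tensor(pd.read_csv(r'....').to_numpy(), dtype=torch.float32)
x2 = torch.tensor(pd.read_csv(r'....').to_numpy(), dtype=torch.float32)
x3 = torch.tensor(pd.read_csv(r'....').to_numpy(), dtype=torch.float32)
targets = torch.tensor(pd.read_csv(r'...').to_numpy(), dtype=torch.float32)  # [20000, 1]

dataset = torch.utils.data.TensorDataset(x1, x2, x3, targets)
loader = torch.utils.data.DataLoader(dataset, batch_size=10, shuffle=True)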

ptrblck, you are incredible. Thank you very much. It runs! Now I just have to make sure it actually does the learning :smiley:

Thank you.