Seeking advice on building a model to analyse input vectors

Hi everyone,
I am new to all of this and have been using PyTorch for a couple weeks, and I have been set a task as a part of a project. This task is to create a deep learning model, or series of models, which when given a 3 x n input vector, can produce a output vector of 9 numbers which pertain to the initial input. For example, the first number of the output vector is the number of rows of the input vector, and the second number is the highest number in the input vector. From my research, I have struggled to find any problems similar to this and as a result I am not sure what to do. My approach thus far has been to create 9 models, each trained to output 1 of the 9 numbers. I have attempted this with simple fully-connected neural networks, however the models are performing very badly. If anybody could give me any advice as to how to classify and approach this problem or tell me what I am doing wrong it would be a massive help. I have attached code aimed at outputting the max number in the input vector. Thanks

import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import pandas as pd
import glob
from sklearn.model_selection import train_test_split
import os
from torch.utils.data import Dataset, DataLoader

print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))

class VectorDataset(Dataset):
    def __init__(self, file_pairs, max_len=500):
        self.file_pairs = file_pairs
        self.max_len = max_len

    def __len__(self):
        return len(self.file_pairs)

    def __getitem__(self, idx):
        input_file, output_file = self.file_pairs[idx]

        # Read both CSVs (assuming they have no header row)
        input_vector = pd.read_csv(input_file, header=None).values
        output_vector = pd.read_csv(output_file, header=None).values

        # The max of the input is the second entry of the output vector
        label = float(output_vector[0, 1])

        # Truncate, then zero-pad so every sample has shape (max_len, 3)
        input_vector = input_vector[:self.max_len, :]
        padded = np.zeros((self.max_len, 3), dtype=np.float32)
        padded[:input_vector.shape[0], :] = input_vector

        return torch.from_numpy(padded), torch.tensor(label, dtype=torch.float32)

directory_path = "c:/Project/data_short"

input_file_paths = glob.glob(directory_path + '/*data.csv')  # Assuming input files have 'data' in their names
output_file_paths = [f.replace('data.csv', 'output.csv') for f in input_file_paths]

file_pairs = [(in_f, out_f) for in_f, out_f in zip(input_file_paths, output_file_paths) if os.path.exists(out_f)]

def accuracy_fn(y_true, y_pred):
    correct = torch.eq(y_true, y_pred.round()).sum().item()
    acc = (correct / len(y_pred)) * 100
    return acc

train_file_pairs, test_file_pairs = train_test_split(file_pairs, test_size=0.3, random_state=42)
max_len = 500
train_dataset = VectorDataset(train_file_pairs, max_len)
test_dataset = VectorDataset(test_file_pairs, max_len)

train_dataloader = DataLoader(train_dataset, batch_size=32, shuffle=True)
test_dataloader = DataLoader(test_dataset, batch_size=32)

input_size = max_len * 3

class SimpleNN(nn.Module):
    def __init__(self, input_size):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(input_size, 128)
        self.fc2 = nn.Linear(128, 256)
        self.fc3 = nn.Linear(256, 1)

    def forward(self, x):
        x = x.view(x.size(0), -1)  # flatten (batch, max_len, 3) -> (batch, max_len * 3)
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))  # note: I originally had no activation here
        x = self.fc3(x)
        return x

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = SimpleNN(input_size).to(device)

criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)

num_epochs = 20

for epoch in range(num_epochs):
    model.train()
    running_loss = 0.0

    for batch, (vectors, labels) in enumerate(train_dataloader):
        vectors, labels = vectors.to(device), labels.to(device)

        optimizer.zero_grad()  # clear gradients from the previous step
        outputs = model(vectors)
        loss = criterion(outputs.squeeze(), labels)

        loss.backward()
        optimizer.step()

        running_loss += loss.item()

        if (batch + 1) % 100 == 0:
            print(f'Epoch [{epoch+1}/{num_epochs}], Batch [{batch+1}/{len(train_dataloader)}], Loss: {loss.item():.4f}')

    train_loss = running_loss / len(train_dataloader)
    print(f'Epoch [{epoch+1}/{num_epochs}], Average Loss: {train_loss:.4f}')

test_loss, test_acc = 0,0
model.eval()
test_labels_list = []
predictions_list = []
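
Edit: for reference, here is a minimal, self-contained sketch (synthetic data, made-up layer sizes) of the single-model alternative I am wondering about: one network that predicts all 9 numbers at once instead of 9 separate models. Only 2 of the 9 targets are known from my description (row count and max), so the other 7 are placeholders here.

```python
import torch
import torch.nn as nn

max_len = 500

def make_example(n_rows):
    """Random n_rows x 3 input, zero-padded to max_len rows, plus its 9 targets."""
    v = torch.rand(n_rows, 3) * 10
    padded = torch.zeros(max_len, 3)
    padded[:n_rows] = v
    target = torch.zeros(9)
    target[0] = float(n_rows)  # first target: number of rows
    target[1] = v.max()        # second target: max value; remaining 7 are placeholders
    return padded, target

class MultiOutputNN(nn.Module):
    def __init__(self, input_size, n_outputs=9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_size, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, n_outputs),  # 9 outputs from one model
        )

    def forward(self, x):
        return self.net(x.view(x.size(0), -1))  # flatten to (batch, max_len * 3)

model = MultiOutputNN(max_len * 3)
x, y = make_example(42)
out = model(x.unsqueeze(0))
print(out.shape)  # torch.Size([1, 9])
```

This would be trained with the same MSELoss against the full 9-number target, so the shared layers see all the supervision at once rather than being split across 9 separate networks.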