FCNN does not seem to optimize

I have two measured signals, ‘Cobs’ and ‘kernel’; ‘kernel’ is the system response, and I want to recover ‘Ctrue’ (the original signal). I built a FCNN to solve this problem.

I have a simple fully connected NN:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class DeconvolutionNN(nn.Module):
    def __init__(self):
        super(DeconvolutionNN, self).__init__()
        self.layer1 = nn.Linear(149, 256)
        self.layer2 = nn.Linear(256, 128)
        self.layer3 = nn.Linear(128, 149)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.layer1(x))
        x = self.relu(self.layer2(x))
        x = self.layer3(x)
        return x
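As a quick sanity check of the architecture, the network maps a batch of length-149 signals to a batch of the same shape (the batch size of 4 below is arbitrary):

model = DeconvolutionNN()
dummy = torch.randn(4, 149)   # 4 random signals, 149 samples each
print(model(dummy).shape)     # torch.Size([4, 149])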

From my understanding, the input will be ‘Cobs’ and the DeconvolutionNN will give me a guess of ‘Ctrue’. After that, I need to compute the loss between the predicted ‘Ctrue’ convolved with ‘kernel’ and the measured ‘Cobs’, so I defined my own L2 loss function. I call conv1d because the relationship between ‘Ctrue’, ‘Cobs’ and ‘kernel’ is: Cobs = Ctrue conv kernel.
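One side note on the forward model: as far as I can tell, F.conv1d actually computes cross-correlation rather than convolution, so reproducing Cobs = Ctrue conv kernel exactly requires flipping the kernel first (unless the kernel is symmetric). Here is a small check against np.convolve, using random placeholder signals and an odd-length kernel:

import numpy as np

x = np.random.randn(149).astype(np.float32)   # stand-in for Ctrue
k = np.random.randn(7).astype(np.float32)     # stand-in for the kernel (odd length)

ref = np.convolve(x, k, mode='same')          # true convolution
xt = torch.tensor(x).view(1, 1, -1)
kt = torch.flip(torch.tensor(k).view(1, 1, -1), dims=[-1])  # flip kernel for conv1d
out = F.conv1d(xt, kt, padding='same').view(-1).numpy()
print(np.allclose(ref, out, atol=1e-5))       # True for odd-length kernels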

I defined the loss function like this:

def compute_loss(model, y_true, y_pred, kernel, device):
    # Convolve the current estimate of Ctrue with the kernel,
    # then compare against the measured Cobs with an L2 (MSE) loss
    kernel_tensor = torch.tensor(kernel, dtype=torch.float32).view(1, 1, -1).to(device)
    y_pred = y_pred.view(y_pred.shape[0], 1, -1)
    y_conv = F.conv1d(y_pred, kernel_tensor, padding='same').view(y_pred.shape[0], -1)
    return torch.mean((y_true - y_conv) ** 2)
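To make sure the shapes line up, I run a quick check like this (the 2×149 batch and the 21-tap kernel are just placeholders, and the model argument is unused inside compute_loss, so None is fine here):

import numpy as np

y_true = torch.randn(2, 149, device=device)       # placeholder batch of observed signals
y_pred = torch.randn(2, 149, device=device)       # placeholder network output
test_kernel = np.hanning(21).astype(np.float32)   # placeholder 21-tap kernel
print(compute_loss(None, y_true, y_pred, test_kernel, device))  # scalar tensor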

After this, I initialize the model and try to optimize it:

Cobs = torch.tensor(twilite, dtype=torch.float32).to(device)

model = DeconvolutionNN().to(device)
optimizer = optim.Adam(model.parameters(), lr=0.01)
Cturehat = []
train_loss = []

epochs = 3000
for epoch in range(epochs):
    model.train()
    optimizer.zero_grad()
    Ctrue_hat = model(Cobs)
    loss = compute_loss(model, Cobs, Ctrue_hat, kernel, device)
    loss.backward()
    optimizer.step()
    Cturehat.append(Ctrue_hat.detach().cpu().numpy())

    if (epoch + 1) % 100 == 0:
        print(f'Epoch [{epoch+1}/{epochs}], Loss: {loss.item():.4f}')
        train_loss.append(loss.item())
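To see whether the loss moves at all over training, I also plot the recorded history afterwards (matplotlib assumed):

import matplotlib.pyplot as plt

plt.plot(train_loss)                        # one point per 100 epochs
plt.xlabel('checkpoint (every 100 epochs)')
plt.ylabel('MSE loss')
plt.show()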

The loss I get is nearly 47112. I tried different things like increasing the number of training epochs and increasing or decreasing the learning rate, but the loss doesn't change. I don't know if my logic here makes sense; is there any possible explanation for why this doesn't work? Thanks