1D Convolutional Neural Network architecture


I’ve been using Python/PyTorch for a week, so I’m totally new to it. The code I wrote was just put together by peeking around guides and topics. I’ve read a lot about this, but right now I’m stuck and I don’t know where the problem is.

I would like to train a 1D CNN and apply it. I train my net over vectors (I’ve read all around that this is kind of nonsense, but I have to) that I generated using some geostatistics, and then I want to see the net’s performance on a new model that I didn’t use for the training. So ‘imp_p’ and ‘data_sim’ are respectively the input and output I use for the training; they are matrices of shape (104, 50) whose columns represent the simulations. ‘imp_true’ is the model that I want to feed to the trained net. Here is the code I’m using:

 def da_a(data):  # this function turns a numpy vector into a torch tensor of the right dimensions
     ciccio1 = torch.from_numpy(data).float()
     ciccio2 = torch.unsqueeze(ciccio1,0)
     ciccio3 = torch.unsqueeze(ciccio2,0)
     return ciccio3
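For example, a single column of shape (104,) becomes a tensor of shape (1, 1, 104), i.e. (batch, channels, length), which is what Conv1d expects:

```python
import numpy as np
import torch

col = np.random.rand(104)                      # one simulation column, shape (104,)
t = torch.from_numpy(col).float()              # -> torch tensor, shape (104,)
t = torch.unsqueeze(torch.unsqueeze(t, 0), 0)  # -> shape (1, 1, 104): (batch, channels, length)
print(t.shape)  # torch.Size([1, 1, 104])
```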

 num_epochs = 100
 learning_rate = 0.001

 class ConvNet(nn.Module):

    def __init__(self):
        super(ConvNet, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv1d(1, 60, kernel_size=2, stride=1, padding=1),
            nn.ReLU())
        self.layer2 = nn.Sequential(
            nn.Conv1d(60, 1, kernel_size=2, stride=1),
            nn.ReLU())

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        return out

 model = ConvNet() 
 criterion = nn.MSELoss()
 optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

 acc_list = [] 
 num = 50  # number of simulations (columns of imp_p / data_sim)
 for x in range(num):

    for i in range(num_epochs):
        # Run the forward pass
        imp, seis = da_a(imp_p[:,x]), da_a(data_sim[:,x])
        outputs = model(imp)
        loss = criterion(outputs, seis)

        # Backprop and perform Adam optimisation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if (i + 1) % 100 == 0:
            print('Epoch {}, loss: {:.4f}'.format(i + 1, loss.item()))

 # now I apply the net to a model that wasn't used before: imp_true

 outputs = model(da_a(imp_true))

So the result I obtain in subplot(121) shows something flat, which is obviously wrong.
The loss reaches 0 after about 10 iterations; since I use a linear relation between input and output, this may be right…

Any help/hint would be great
thank you!

The loss reaching 0 doesn’t mean there is an error.

Can you explain more about the input size and output size of the net?

thank you for your reply!

I want to generate a new vector (size: 104,1) from an input vector of the same size using the net. So I use a net with 2 layers. Each layer is a 1D CNN with a ReLU activation function. The input and output are related through a linear function I didn’t post since it’s quite simple, but the problem seems to be inside the net… did I miss something in the code? It always runs, but the output I get from the net is always equal to zero.

thank you!

Perhaps all the inputs are negative: ReLU gives 0 for all negative inputs. Also, using convolution kernels that greatly reduce the number of channels can greatly decrease performance.
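A quick check of that first point (just an illustration of ReLU’s behaviour, not your net):

```python
import torch
import torch.nn as nn

relu = nn.ReLU()
x = torch.tensor([-2.0, -0.5, 0.0, 1.5])
print(relu(x))  # tensor([0.0000, 0.0000, 0.0000, 1.5000]) -- every negative input is clipped to 0
```

So if a layer's pre-activations are all negative, its output is identically zero and no gradient flows back through it.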

So the output I’m trying to obtain is actually a zero-mean signal, but the input is not!

“Also, using convolution kernels that reduce the number of channels greatly can decrease the performance greatly.” Can you explain this better? What should I change?

The “output” is the model output, I suspect: outputs = model(Imp_true)
The number of channels in the kernels should follow an increasing trend, or at least stay the same.
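For example (just a sketch — the channel counts 16 and 32 are placeholders, not tuned values), the hidden channels could grow and only the last layer project back to 1 channel, with no ReLU after the final conv so the net can produce the negative values a zero-mean target needs:

```python
import torch
import torch.nn as nn

# Hypothetical layer sizes: 16 and 32 are placeholders, not tuned values.
net = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3, padding=1),   # channels: 1 -> 16
    nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=3, padding=1),  # channels: 16 -> 32
    nn.ReLU(),
    nn.Conv1d(32, 1, kernel_size=1),              # project back to 1 channel; no ReLU here,
)                                                 # so negative (zero-mean) outputs are possible

x = torch.randn(1, 1, 104)  # (batch, channels, length), as produced by da_a
print(net(x).shape)  # torch.Size([1, 1, 104])
```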