1D Convolutional Autoencoder

Hello,

I’m studying some biological trajectories with autoencoders. Each trajectory is described by the x, y position of a particle sampled every delta t, giving 3000 points per trajectory. Given this shape, I thought convolutional networks would be appropriate.

So, given input data as a tensor of shape (batch_size, 2, 3000), it goes through the following layers:

        # encoding part
        self.c1 = nn.Conv1d(2, 4, 16, stride=4, padding=4)
        self.c2 = nn.Conv1d(4, 8, 16, stride=4, padding=1)
        self.c3 = nn.Conv1d(8, 1, 8, stride=3, padding=1)
        self.l1 = nn.Linear(60, 20)

        # decoding part
        self.l2 = nn.Linear(20, 60)  # maps the 20-d latent code back to the c3 output length
        self.d1 = nn.ConvTranspose1d(1, 8, 8, stride=3, output_padding=1, padding=1)
        self.d2 = nn.ConvTranspose1d(8, 4, 16, stride=4, output_padding=1, padding=1)
        self.d3 = nn.ConvTranspose1d(4, 2, 16, stride=4, output_padding=0)
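For what it’s worth, the kernel/stride/padding choices above do line up length-wise. A quick sanity check using the standard Conv1d / ConvTranspose1d output-length formulas (a small standalone sketch, not part of the posted model) shows the encoder shrinks 3000 → 749 → 184 → 60 and the decoder expands 60 → 184 → 747 → 3000:

```python
def conv1d_out_len(l_in, kernel, stride, padding):
    # nn.Conv1d output length (dilation = 1)
    return (l_in + 2 * padding - kernel) // stride + 1

def convtranspose1d_out_len(l_in, kernel, stride, padding, output_padding):
    # nn.ConvTranspose1d output length (dilation = 1)
    return (l_in - 1) * stride - 2 * padding + kernel + output_padding

# encoder: c1, c2, c3 with the (kernel, stride, padding) from the question
l = 3000
for k, s, p in [(16, 4, 4), (16, 4, 1), (8, 3, 1)]:
    l = conv1d_out_len(l, k, s, p)
    print(l)  # 749, then 184, then 60

# decoder: d1, d2, d3 with (kernel, stride, padding, output_padding)
l = 60
for k, s, p, op in [(8, 3, 1, 1), (16, 4, 1, 1), (16, 4, 0, 0)]:
    l = convtranspose1d_out_len(l, k, s, p, op)
    print(l)  # 184, then 747, then 3000
```

So the final output is (batch_size, 2, 3000) again, matching the input.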

This architecture doesn’t seem to work very well: I can’t get the loss below 1.9.
Could anyone shed some light on how to handle this case? I suspect I’m using the convolution parameters (kernel size, padding, and stride) incorrectly.

Thanks!


You don’t have any nonlinearities.
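One way to fix that is to apply an activation such as ReLU after each layer in forward(). Below is a minimal sketch reusing the layer sizes from the question; note the decoder Linear is taken as 20 → 60 so the shapes line up, and the choice of ReLU (and leaving the final layer linear) is an assumption, not the original code:

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Sketch of the posted architecture with ReLU nonlinearities added."""

    def __init__(self):
        super().__init__()
        # encoding part
        self.c1 = nn.Conv1d(2, 4, 16, stride=4, padding=4)
        self.c2 = nn.Conv1d(4, 8, 16, stride=4, padding=1)
        self.c3 = nn.Conv1d(8, 1, 8, stride=3, padding=1)
        self.l1 = nn.Linear(60, 20)
        # decoding part
        self.l2 = nn.Linear(20, 60)
        self.d1 = nn.ConvTranspose1d(1, 8, 8, stride=3, output_padding=1, padding=1)
        self.d2 = nn.ConvTranspose1d(8, 4, 16, stride=4, output_padding=1, padding=1)
        self.d3 = nn.ConvTranspose1d(4, 2, 16, stride=4, output_padding=0)

    def forward(self, x):
        # encoder: ReLU after every layer
        x = torch.relu(self.c1(x))
        x = torch.relu(self.c2(x))
        x = torch.relu(self.c3(x))   # (B, 1, 60)
        z = self.l1(x)               # latent code, (B, 1, 20)
        # decoder: ReLU everywhere except the output layer,
        # which stays linear since the targets are raw coordinates
        x = torch.relu(self.l2(z))
        x = torch.relu(self.d1(x))
        x = torch.relu(self.d2(x))
        return self.d3(x)            # (B, 2, 3000)
```

Without the activations, the stacked convolutions and linear layers collapse into a single linear map, so the model can never do better than a linear autoencoder regardless of depth.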
