Hi there

I have trained a 1D autoencoder NN; you can find its details below:

Autoencoder(
  (encoder): Sequential(
    (0): Conv1d(1, 5, kernel_size=(5,), stride=(2,))
    (1): MaxPool1d(kernel_size=3, stride=1, padding=0, dilation=1, ceil_mode=False)
    (2): ReLU(inplace=True)
    (3): Conv1d(5, 10, kernel_size=(5,), stride=(2,))
    (4): MaxPool1d(kernel_size=3, stride=1, padding=0, dilation=1, ceil_mode=False)
    (5): ReLU(inplace=True)
    (6): Conv1d(10, 15, kernel_size=(5,), stride=(2,))
    (7): MaxPool1d(kernel_size=3, stride=1, padding=0, dilation=1, ceil_mode=False)
    (8): ReLU(inplace=True)
    (9): Conv1d(15, 20, kernel_size=(4,), stride=(1,))
    (10): ReLU(inplace=True)
  )
  (decoder): Sequential(
    (0): ConvTranspose1d(20, 15, kernel_size=(1,), stride=(4,))
    (1): ReLU(inplace=True)
    (2): ConvTranspose1d(15, 10, kernel_size=(2,), stride=(4,))
    (3): ReLU(inplace=True)
    (4): ConvTranspose1d(10, 5, kernel_size=(9,), stride=(2,))
    (5): ReLU(inplace=True)
    (6): ConvTranspose1d(5, 1, kernel_size=(10,), stride=(2,))
    (7): ReLU(inplace=True)
  )
)
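For completeness, here is the module definition I believe corresponds to that printout (a reconstruction from the printed summary, not my original training script), together with a shape check: a length-94 input is compressed to a (20, 2) bottleneck and decoded back to length 94.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 5, kernel_size=5, stride=2),
            nn.MaxPool1d(kernel_size=3, stride=1),
            nn.ReLU(inplace=True),
            nn.Conv1d(5, 10, kernel_size=5, stride=2),
            nn.MaxPool1d(kernel_size=3, stride=1),
            nn.ReLU(inplace=True),
            nn.Conv1d(10, 15, kernel_size=5, stride=2),
            nn.MaxPool1d(kernel_size=3, stride=1),
            nn.ReLU(inplace=True),
            nn.Conv1d(15, 20, kernel_size=4, stride=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(20, 15, kernel_size=1, stride=4),
            nn.ReLU(inplace=True),
            nn.ConvTranspose1d(15, 10, kernel_size=2, stride=4),
            nn.ReLU(inplace=True),
            nn.ConvTranspose1d(10, 5, kernel_size=9, stride=2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose1d(5, 1, kernel_size=10, stride=2),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
x = torch.randn(1, 1, 94)      # one signal of length 94
z = model.encoder(x)
print(z.shape)                 # torch.Size([1, 20, 2])  -> the bottleneck
print(model(x).shape)          # torch.Size([1, 1, 94])  -> same length as input
```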

My loss decreases like a 1/f function; however, when I feed my NN some data, I get almost the same output for different inputs. Has anyone had a similar experience before?
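To quantify "almost the same output", one simple check is to run a batch of different inputs through the network and look at the spread of the outputs across the batch dimension. The snippet below is a sketch with a tiny stand-in model (the real autoencoder would go in its place): a near-zero spread would mean every input maps to essentially the same reconstruction.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny stand-in network just so the snippet runs on its own;
# with the real trained autoencoder, use that model here instead.
model = nn.Sequential(
    nn.Conv1d(1, 4, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Conv1d(4, 1, kernel_size=5, padding=2),
)

inputs = torch.randn(16, 1, 94)      # 16 *different* signals of length 94
with torch.no_grad():
    outputs = model(inputs)

# Standard deviation across the batch, averaged over positions:
# close to zero => the network has collapsed to a near-constant output.
spread = outputs.std(dim=0).mean().item()
print(f"mean std across inputs: {spread:.4f}")
```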

I should mention that my training set was 65536 signals, each with a length of 94 points.

When I say feeding my data, I mean I did the following:

for i in range(65536):
    out_put[i] = model(data_pixel2[i].unsqueeze(0))

where data_pixel2 is my input.
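For what it's worth, the per-sample loop can also be written as a single batched forward pass under `torch.no_grad()` (a sketch, assuming `data_pixel2` has shape `(65536, 1, 94)`; the stand-in model and random data are only there so the snippet runs on its own):

```python
import torch
import torch.nn as nn

# Stand-in model and data; with the real autoencoder, `model` and
# `data_pixel2` come from the training script.
model = nn.Conv1d(1, 1, kernel_size=5, padding=2)
data_pixel2 = torch.randn(65536, 1, 94)

with torch.no_grad():              # no gradients needed at inference time
    out_put = model(data_pixel2)   # one batched forward pass

print(out_put.shape)               # torch.Size([65536, 1, 94])
```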

I'd appreciate any comments.