Feeding new data to a trained model

Hi there

I have trained a convolutional NN on 1D (signal) data. My data size is (65535, 1, 94), where 65535 is the number of samples, 1 is the input channel, and 94 is the length of the signal.

Now I want to use my trained model on another dataset that has the same dimensions.
I tried to run this, but it did not work:

out_put = torch.zeros(65535, 1, 94)
for i in range(65535):
    out_put[i,1,94] = model(in_put[i,1,94])

Below you can also see my network:
Autoencoder(
  (encoder): Sequential(
    (0): Conv1d(1, 5, kernel_size=(5,), stride=(2,))
    (1): MaxPool1d(kernel_size=3, stride=1, padding=0, dilation=1, ceil_mode=False)
    (2): ReLU(inplace=True)
    (3): Conv1d(5, 10, kernel_size=(5,), stride=(2,))
    (4): MaxPool1d(kernel_size=3, stride=1, padding=0, dilation=1, ceil_mode=False)
    (5): ReLU(inplace=True)
    (6): Conv1d(10, 15, kernel_size=(5,), stride=(2,))
    (7): MaxPool1d(kernel_size=3, stride=1, padding=0, dilation=1, ceil_mode=False)
    (8): ReLU(inplace=True)
    (9): Conv1d(15, 20, kernel_size=(4,), stride=(1,))
    (10): ReLU(inplace=True)
  )
  (decoder): Sequential(
    (0): ConvTranspose1d(20, 15, kernel_size=(1,), stride=(4,))
    (1): ReLU(inplace=True)
    (2): ConvTranspose1d(15, 10, kernel_size=(2,), stride=(4,))
    (3): ReLU(inplace=True)
    (4): ConvTranspose1d(10, 5, kernel_size=(9,), stride=(2,))
    (5): ReLU(inplace=True)
    (6): ConvTranspose1d(5, 1, kernel_size=(10,), stride=(2,))
    (7): ReLU(inplace=True)
  )
)
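As a sanity check, the network maps an input of length 94 back to length 94, so a dummy forward pass should reproduce the input shape (a minimal sketch, assuming the dummy tensor is on the same device as the model):

dummy = torch.randn(1, 1, 94)  # batch x channels x length
print(model(dummy).shape)      # torch.Size([1, 1, 94])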

Does anyone have any idea about this?
Thank you!

You would be more likely to get help if you post the error message as well.

Hi

Below you can see it:


IndexError                                Traceback (most recent call last)

in ()
      1 for i in (data_pixel2[0]):
----> 2     out_put[i,1,94] = model(data_pixel2[i,1,94])

IndexError: index 1 is out of bounds for dimension 1 with size 1

Shouldn’t it be out_put[i,0,94] = model(data_pixel2[i,0,94])?

Python is 0-indexed.
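For illustration, a minimal sketch with a stand-in tensor of your shape: on a dimension of size 1, the only valid index is 0.

import torch

t = torch.zeros(65535, 1, 94)  # stand-in for your data
x = t[0, 0, 93]                # OK: valid indices run from 0 to size - 1 in each dimension
# t[0, 1, 93]                  # IndexError: index 1 is out of bounds for dimension 1 with size 1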


Hi

Actually, I got this error again:
IndexError                                Traceback (most recent call last)

in ()
      1 for i in (data_pixel2[0]):
----> 2     out_put[i,0,94] = model(data_pixel2[i,0,94])

IndexError: index 94 is out of bounds for dimension 2 with size 94

Python is zero-based, as InnovArul already pointed out, so index 94 doesn’t exist. This should solve the index problem:

out_put[i] = model(data_pixel2[i])

which is the same as

out_put[i,:,:] = model(data_pixel2[i,:,:])

since you want to input an array of shape (1, 94).

In general, I would recommend looking at a PyTorch tutorial regarding the overall structure, the optimizer, and so on, e.g. https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
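For illustration, a minimal sketch with a stand-in tensor, showing that the two indexing forms are equivalent:

import torch

t = torch.zeros(65535, 1, 94)         # stand-in for data_pixel2
print(t[0].shape)                     # torch.Size([1, 94])
print(t[0, :, :].shape)               # torch.Size([1, 94]) -- the same view
print(torch.equal(t[0], t[0, :, :]))  # True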


Thank you for replying.
Actually, it didn’t work either; I got the error below:
RuntimeError                              Traceback (most recent call last)

in ()
      1 for i in range(65536):
----> 2     out_put[i,:,:] = model(data_pixel2[i,:,:])

5 frames

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input)
    200                             _single(0), self.dilation, self.groups)
    201         return F.conv1d(input, self.weight, self.bias, self.stride,
--> 202                         self.padding, self.dilation, self.groups)
    203
    204

RuntimeError: Expected 3-dimensional input for 3-dimensional weight 5 1 5, but got 2-dimensional input of size [1, 94] instead

OK, the input to a 1D convolutional layer must be of shape batch_size x channels x spatial_size. In your case, it’s only channels x spatial_size. This can be fixed with model(data_pixel2[i].unsqueeze(0)), which adds another dimension so the tensor has shape (1, 1, 94).
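A minimal sketch of the shapes involved (with a stand-in tensor):

import torch

t = torch.zeros(65535, 1, 94)  # stand-in for data_pixel2
sample = t[0]                  # shape (1, 94): channels x length, no batch dimension
batch = sample.unsqueeze(0)    # shape (1, 1, 94): batch x channels x length
print(batch.shape)             # torch.Size([1, 1, 94])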

My data_pixel2 is already (65535, 1, 94).

Here, 65536 is the number of samples that I have.

You’re slicing the first dimension with data_pixel2[i], which leads to a shape of (1, 94).


I see, what should I do now?

I actually fixed that error; however, I now face this one:
RuntimeError                              Traceback (most recent call last)

in ()
      1 for i in range(65536):
----> 2     out_put[i] = model(data_pixel2[i].unsqueeze(0))

5 frames

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input)
    200                             _single(0), self.dilation, self.groups)
    201         return F.conv1d(input, self.weight, self.bias, self.stride,
--> 202                         self.padding, self.dilation, self.groups)
    203
    204

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same

Load your input onto the GPU with .cuda().
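For example (a sketch, assuming the model is already on the GPU, as the weight type torch.cuda.FloatTensor in the error suggests):

data_pixel2 = data_pixel2.cuda()          # move the input to the GPU
out = model(data_pixel2[0].unsqueeze(0))  # input and weights now have the same type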


I did the following:

out_put = torch.zeros([65536,1,94], dtype=torch.float)
out_put = out_put.cuda()

but I still get the same error…

Please have a look at the PyTorch tutorials.

Could you push data_pixel2 to the GPU as well?
The error points to a CPU input tensor, while your model parameters are on the GPU.
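Putting the pieces from this thread together, a minimal sketch of the full inference loop (assuming the model and the data fit into GPU memory):

import torch

model = model.cuda().eval()              # weights on the GPU, inference mode
data_pixel2 = data_pixel2.cuda()         # input on the GPU, shape (65535, 1, 94)
out_put = torch.zeros_like(data_pixel2)  # same shape, dtype, and device as the input

with torch.no_grad():                    # no gradients needed for inference
    for i in range(data_pixel2.size(0)):
        rec = model(data_pixel2[i].unsqueeze(0))  # (1, 94) -> (1, 1, 94) -> model
        out_put[i] = rec.squeeze(0)               # back to (1, 94) for the assignment

Feeding the samples in larger batches (e.g. model(data_pixel2[i:i+256])) would avoid the Python loop and be faster; the per-sample loop just mirrors the code in this thread.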


Thanks Patrick,
you were right, that fixed it!
You are really an expert.
