ValueError: only one element tensors can be converted to Python scalars - while using SVM

Hello all,
I have an autoencoder-based model, from which I extract the encoded state and feed it into a support vector classifier (sklearn.svm.SVC) for classification. However, when the list of encoded states is converted to a NumPy array (either explicitly, or implicitly as scikit-learn does), I get the error mentioned in the subject.
The encoded tensor has shape [1, 1, 313].

ValueError: only one element tensors can be converted to Python scalars

And here is my entire pipeline -

import numpy
import torch
from sklearn import svm

train_numpy = []
train_labels = []
model = Encoder()
model.load_state_dict(torch.load('path/encoder.pth'))
model.eval()
for param in model.parameters():
    param.requires_grad = False

for (x, y) in train_Data:
    encoded, out = model(x)
    encoded = encoded.squeeze().detach().numpy()
    y = y.squeeze().detach().numpy()
    train_numpy.append(x)
    train_labels.append(y)

train_numpy = numpy.asarray(train_numpy)
train_labels = numpy.asarray(train_labels)

SVM = svm.SVC(kernel='rbf')
SVM.fit(X=train_numpy, y=train_labels)

Could someone please point out what could be going wrong?
Thanks in advance

Hi,

Could you please add the error stacktrace?

Bests

Sure, here it is

Traceback (most recent call last):
  File "/usr/lib/python3.8/code.py", line 90, in runcode
    exec(code, self.locals)
  File "<input>", line 1, in <module>
  File "/pycharm/pycharm/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "/pycharm/pycharm/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/main.py", line 50, in <module>
    train_numpy = numpy.array(train_numpy)
ValueError: only one element tensors can be converted to Python scalars

Unfortunately I still cannot figure it out, but here are some problems that may lead to this:

You never actually use encoded. I think you meant to pass encoded instead of x in train_numpy.append(x)? Because of this, train_numpy ends up holding the raw input tensors x, which have the same structure as y and would also need .detach().numpy() — you applied that conversion to encoded, but then never appended it.
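For what it's worth, the error message itself comes from PyTorch: only a single-element tensor can be converted to a Python scalar, and numpy.array can end up attempting exactly that conversion when the list elements can't be stacked into a regular array. A minimal reproduction with a tensor of the encoded shape:

```python
import torch

t = torch.randn(1, 1, 313)  # same shape as the encoded state

# Converting a multi-element tensor to a Python scalar raises the error
# seen in the traceback above.
try:
    float(t)
except ValueError as e:
    print(e)  # only one element tensors can be converted to Python scalars

# A single-element tensor converts fine, regardless of its shape:
print(float(torch.tensor([[[0.5]]])))  # 0.5
```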

Thanks a lot for pointing it out @Nikronic — I had completely overlooked it, and with that fixed it no longer throws the error.
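For anyone landing here later, a sketch of the corrected loop with encoded appended instead of x. The Encoder and train_Data below are dummy stand-ins (a single linear layer and random tensors) just to make the snippet self-contained; the real model and dataset come from the original post:

```python
import numpy
import torch
import torch.nn as nn
from sklearn import svm

# Dummy stand-ins for illustration only.
class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 313)

    def forward(self, x):
        encoded = self.fc(x)
        return encoded, encoded  # (encoded state, decoder output)

train_Data = [(torch.randn(1, 1, 10), torch.tensor([i % 2])) for i in range(8)]

model = Encoder()
model.eval()
for param in model.parameters():
    param.requires_grad = False

train_numpy = []
train_labels = []
for (x, y) in train_Data:
    encoded, out = model(x)
    # Append the detached encoded state, not the raw input x.
    train_numpy.append(encoded.squeeze().detach().numpy())
    train_labels.append(y.squeeze().detach().numpy())

train_numpy = numpy.asarray(train_numpy)    # shape: (n_samples, 313)
train_labels = numpy.asarray(train_labels)  # shape: (n_samples,)

SVM = svm.SVC(kernel='rbf')
SVM.fit(X=train_numpy, y=train_labels)
```

Since every encoded.squeeze() now has the same fixed shape (313,), numpy.asarray can stack the list into a regular 2-D array, which is what SVC.fit expects.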