Hi, sorry for the hundredth version of the same question, but I see that this is the go-to place for it.
I’m using code shared on GitHub to model electronic health records. The data structure is:
[patient id, label, [visit1, …, visitn]] where each visit is of this form: [time difference from previous visit, list of medical codes]
And this is an example from my training set where medical codes are turned into integers (after preprocessing).
[111, 0, [[[0], [1313]], [[12], [1313]], [[79], [1929]], [[29], [1007, 1930, 1931, 554, 1932, 1779]], [[6], [554]], [[20], [1933, 1934]], [[2], [1933, 1935, 1934, 1936]], [[27], [1267, 414]], [[6], [1935, 1929, 1937]], [[20], [1267]], [[1], [557, 477]]…]]
I get this error:
“…
File “/home/jupyter/.local/lib/python3.10/site-packages/torch/nn/functional.py”, line 3113, in binary_cross_entropy
raise ValueError(
ValueError: Using a target size (torch.Size([128, 1, 1])) that is different to the input size (torch.Size([128])) is deprecated. Please ensure they have the same size.”
I’ve read the previous comments, but I need a little more guidance, or a translation of what this error means in the context of my problem, since I have lists of lists.
This is the repository I’m using:
Any guidance is appreciated.
squeeze the target in dim1 and dim2 to remove the additional dimensions with a size of 1, creating a matching shape to the model output.
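As a minimal sketch with stand-in tensors matching the shapes from the error message (the real tensors come from the model and data loader), squeezing the two size-1 dimensions turns the [128, 1, 1] target into [128]:

```python
import torch

# stand-in target with the shape reported in the error: [128, 1, 1]
label_tensor = torch.rand(128, 1, 1)

# squeeze dim1, then the (now shifted) dim1 again, dropping both size-1 dims
label_tensor = label_tensor.squeeze(1).squeeze(1)

print(label_tensor.shape)  # torch.Size([128]), matching the model output
```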
Thank you for your quick response.
Originally, the code was like this:
loss = criterion(output, label_tensor)
I changed it to this:
output2 = torch.squeeze(output, 1)
loss = criterion(output, label_tensor)
which gave the same error.
Not sure if I used it in the correct place.
Use label_tensor = label_tensor.squeeze(1).squeeze(1)
and it should work.
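For reference, a self-contained sketch of the fixed loss call, assuming `criterion` is `nn.BCELoss` (suggested by the `binary_cross_entropy` traceback) and using random stand-in tensors for the model output and labels:

```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()

output = torch.rand(128)  # stand-in sigmoid probabilities, shape [128]
label_tensor = torch.randint(0, 2, (128, 1, 1)).float()  # labels, shape [128, 1, 1]

# the fix: drop the two size-1 dims so the target matches the input shape
label_tensor = label_tensor.squeeze(1).squeeze(1)  # -> shape [128]

loss = criterion(output, label_tensor)  # no shape mismatch now
print(loss.item())
```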
Yes it worked! Thank you so much!