I want to train a Decision Tree on data derived from audio. X_train is a list of torch.Tensor objects like this:
[tensor([[ 0, 0, 0, 0, 0, 0, 0, 0, 11, 0, 0, 28, 0, 0, 11, 0, 0, 20,
0, 4, 0, 0, 0, 0, 0, 0, 15, 0, 0, 0, 0, 4, 4, 0, 21, 0,
9, 0, 9, 0, 0, 0, 7, 0, 0, 25, 15, 15, 21, 0, 20, 7, 18, 0,
0, 18, 31, 4, 4, 13, 0, 11, 0, 26, 0, 4, 4, 26, 14, 0, 11, 4,
0, 19, 0, 21, 0, 20, 0, 0, 10, 0, 7, 0, 0, 31, 0, 4, 4, 0,
8, 0, 18, 27, 27, 11, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 25, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4]]),
tensor([[ 0, 0, 0, 0, 0, 0, 0, 10, 21, 0, 20, 20, 5, 26, 26, 0, 4, 4,
0, 0, 7, 0, 0, 25, 0, 17, 4, 4, 19, 19, 11, 4, 4, 26, 21, 21,
4, 0, 9, 0, 0, 7, 0, 24, 0, 0, 24, 0, 31, 4, 4, 7, 20, 0,
4, 4, 0, 0, 0, 0, 7, 0, 27, 0, 0, 0, 18, 0, 0, 31, 0, 0,
4, 4, 0, 24, 24, 0, 0, 7, 0, 0, 0, 0, 13, 0, 0, 0, 4, 18,
18, 15, 15, 0, 0, 17, 0, 11, 4, 4, 0, 26, 14, 14, 0, 0, 7, 0,
0, 0, 0, 0, 0, 26, 0, 0, 0, 0, 0, 0, 0, 0, 4]]),
tensor([[ 0, 0, 0, 0, 0, 0, 0, 0, 25, 25, 14, 21, 0, 27, 18, 10, 4, 4,
16, 0, 0, 15, 0, 0, 0, 24, 0, 0, 7, 0, 0, 0, 12, 0, 0, 0,
4, 8, 0, 0, 11, 0, 4, 0, 17, 0, 0, 11, 0, 0, 22, 0, 0, 26,
0, 4, 0, 15, 20, 0, 4, 4, 4, 0, 25, 0, 0, 0, 19, 0, 0, 0,
0, 7, 18, 0, 0, 18, 0, 4, 0, 32, 21, 21, 0, 21, 0, 0, 0, 0,
0, 0, 0, 25, 11, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4]]), .......
The sizes of these tensors are variable:
X_train[0].size() -> torch.Size([1, 121])
X_train[1].size() -> torch.Size([1, 123])
X_train[2].size() -> torch.Size([1, 106])
When I try to fit a scikit-learn DecisionTreeClassifier on this data, I run into a lot of problems.
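I suspect the core issue is that scikit-learn estimators expect a 2-D array of shape (n_samples, n_features) with a fixed number of columns, while these tensors have different lengths. A minimal sketch of what I have tried, zero-padding every sequence to a common length before fitting (the random tensors and the labels y_train here are made-up stand-ins for my real data):

```python
import numpy as np
import torch
from sklearn.tree import DecisionTreeClassifier

torch.manual_seed(0)

# Stand-ins for the variable-length tensors shown above.
X_train = [
    torch.randint(0, 32, (1, 121)),
    torch.randint(0, 32, (1, 123)),
    torch.randint(0, 32, (1, 106)),
]
y_train = [0, 1, 0]  # made-up labels, for illustration only

# Zero-pad every sequence to the longest length so the feature
# matrix has a fixed number of columns.
max_len = max(t.size(1) for t in X_train)
X_fixed = np.zeros((len(X_train), max_len), dtype=np.int64)
for i, t in enumerate(X_train):
    seq = t.squeeze(0).numpy()   # drop the leading batch dimension
    X_fixed[i, : len(seq)] = seq

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_fixed, y_train)
```

This runs, but I am not sure padding with zeros is the right representation for audio features, since 0 is also a legitimate value in my data.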