@albanD Thanks for your answer.
In my case, `torch.cat()` is much slower than `list.append()`, so I'm trying to change my code to append to a list instead.
```python
new_dataset_x = []
new_dataset_y = []
for step, data in enumerate(train_loader):
    inputs, labels = data  # inputs.shape == [64, 3, 28, 28]
    ...
    # Select some imgs from inputs under some conditions
    new_dataset_x.append(ok_imgs)    # ok_imgs.shape == [??, 3, 28, 28]; the first dim size differs every step
    new_dataset_y.append(ok_labels)  # ok_labels.shape == [??]
```
As a result, each element of `new_dataset_x` has a different size in its first dimension:
```
new_dataset_x[0].shape : [54, 3, 28, 28]
new_dataset_x[1].shape : [34, 3, 28, 28]
...
```
How can I make a DataLoader from this?
Or, if this approach is inefficient, please recommend another way.
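For reference, this is the kind of thing I was considering (just a sketch with made-up shapes, not my actual data): call `torch.cat()` once on each list after the loop finishes, then wrap the result in a `TensorDataset`. Is this the right way to do it?
```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Hypothetical accumulated lists standing in for the loop above;
# the shapes here are only examples.
new_dataset_x = [torch.randn(54, 3, 28, 28), torch.randn(34, 3, 28, 28)]
new_dataset_y = [torch.randint(0, 10, (54,)), torch.randint(0, 10, (34,))]

# Concatenate once at the end instead of calling torch.cat() every step.
all_x = torch.cat(new_dataset_x, dim=0)  # [88, 3, 28, 28]
all_y = torch.cat(new_dataset_y, dim=0)  # [88]

new_dataset = TensorDataset(all_x, all_y)
new_loader = DataLoader(new_dataset, batch_size=64, shuffle=True)

for inputs, labels in new_loader:
    ...  # inputs.shape == [<=64, 3, 28, 28]
```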