No error, but the code does not execute

I am new here and new to using the torch library for autoencoders.

I want to run a pre-written code with the same dataset it used. There are no errors, but it does not execute. When I track the execution, it seems to be stuck in this loop: for batch_idx, (batch_data, batch_index) in enumerate(zip(input_trainLoader, pad_index_trainLoader)):

I waited 4 hours for it to complete the first cycle, but nothing happened. What should I do? I hope for help.

I cannot upload other pictures because I am new. Should I put a link to the whole code?

If you are using multiprocessing in your DataLoaders, try to use the main process only for debugging via num_workers=0 and see if your code doesn’t hang anymore.

Hi,
I set num_workers=0 instead of num_workers=2 in the following lines:
input_trainLoader = torch.utils.data.DataLoader(input_train, batch_size=args.batch_size, shuffle=False, num_workers=0)
print(args.batch_size)#16
pad_index_trainLoader = torch.utils.data.DataLoader(pad_index_train, batch_size=args.batch_size, shuffle=False, num_workers=0)

but I get the following error:

Traceback (most recent call last):
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\multivariate anomaly detection for evnt logs\AE.py", line 206, in <module>
    train_loss = train(epoch, model, optimizer)
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\multivariate anomaly detection for evnt logs\AE.py", line 172, in train
    for batch_idx, (batch_data, batch_index) in enumerate(zip(input_trainLoader, pad_index_trainLoader)):
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\data\dataloader.py", line 681, in __next__
    data = self._next_data()
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\data\dataloader.py", line 721, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\data\_utils\fetch.py", line 52, in fetch
    return self.collate_fn(data)
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\data\_utils\collate.py", line 147, in default_collate
    raise TypeError(default_collate_err_msg_format.format(elem.dtype))
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found object

Can I get help?

Your Dataset.__getitem__ doesn't seem to return any supported type, but objects instead.
Check the type of all returned values and make sure they are one of the mentioned supported types.
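
As a minimal sketch (the class and variable names here are placeholders, not taken from your code), a Dataset whose __getitem__ returns supported types could look like this:

import numpy as np
import torch
from torch.utils.data import Dataset

class SequenceDataset(Dataset):  # placeholder name, not from the original script
    def __init__(self, sequences):
        # convert once to float32 so each sample is a tensor, not a Python object
        # (assumes the sequences already have a uniform length/shape)
        self.data = torch.as_tensor(np.asarray(sequences, dtype=np.float32))

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        # returns a tensor, which default_collate can batch
        return self.data[index]

If the pre-written code passes a plain list or array straight to DataLoader instead of defining a Dataset class, the same rule applies to the individual elements of that list/array, since they go through the same default_collate.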

I am sorry if the question might seem obvious or dumb, but I am new to autoencoders:
I cannot find .__getitem__; I do not know where to find it in the pre-written code, or maybe it does not exist.

I searched for this error and found this solution:
transform = transforms.Compose([
    transforms.ToTensor()
])
tnew = transform(input_train)
# train
input_trainLoader = torch.utils.data.DataLoader(tnew, batch_size=args.batch_size, shuffle=False, num_workers=0)

but I got this error:
TypeError: can't convert np.ndarray of type numpy.object_. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool.

I fixed this error as follows:
transform = transforms.Compose([
    transforms.ToTensor()
])
tt = np.vstack(input_train).astype(float)
tnew = transform(tt)
# train
input_trainLoader = torch.utils.data.DataLoader(tnew, batch_size=args.batch_size, shuffle=False, num_workers=0)

but I returned to the first error.
This is my input_train:

[screenshot of input_train not included]

and this is the output
input_trainLoader = torch.utils.data.DataLoader(tnew, batch_size=args.batch_size, shuffle=False, num_workers=0)  # object
print(args.batch_size)  # 16
print(input_trainLoader)  # <torch.utils.data.dataloader.DataLoader object at 0x000001B93DD52680>

pad_index_trainLoader = torch.utils.data.DataLoader(pad_index_train, batch_size=args.batch_size, shuffle=False, num_workers=0)

print(pad_index_trainLoader)  # <torch.utils.data.dataloader.DataLoader object at 0x000001B93DD524D0>
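
Printing the DataLoader only shows its repr, so it does not reveal what the batches contain. A quick check (using the loaders defined above) is to pull a single batch from each; if the underlying elements are still Python objects, this raises the same default_collate TypeError right away:

# pull one batch from each loader and inspect it; this fails with the same
# "found object" TypeError if the elements are not tensors/arrays/numbers
input_batch = next(iter(input_trainLoader))
index_batch = next(iter(pad_index_trainLoader))

print(type(input_batch), getattr(input_batch, "dtype", None), getattr(input_batch, "shape", None))
print(type(index_batch), getattr(index_batch, "dtype", None), getattr(index_batch, "shape", None))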