I want to run a PyTorch program with my own dataset. Please help me solve the problem. The error is as follows:
Traceback (most recent call last):
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demoEmotion.py", line 311, in <module>
    fire.Fire(demo)
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 138, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 468, in _Fire
    target=component.name)
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 672, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demoEmotion.py", line 289, in demo
    n_epochs=n_epochs, batch_size=batch_size, seed=seed)
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demoEmotion.py", line 168, in train
    n_epochs=n_epochs,
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demoEmotion.py", line 42, in train_epoch
    for batch_idx, (input, target) in enumerate(loader):
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\dataloader.py", line 346, in __next__
    data = self._next_data()
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\dataloader.py", line 386, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\_utils\fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\_utils\collate.py", line 87, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\_utils\collate.py", line 87, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\_utils\collate.py", line 72, in default_collate
    return default_collate([torch.as_tensor(b) for b in batch])
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\torch\utils\data\_utils\collate.py", line 63, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [650] at entry 0 and [108] at entry 1
Thanks in advance!!
Based on the error message it seems that the tensors stored in batch, which should be created in the Dataset.__getitem__ method, do not have the same shape.
If you are dealing with variable input shapes, you could use a custom collate function as described here.
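A minimal sketch of such a collate function, assuming each sample is a 1D tensor of varying length (the dataset and all names here are illustrative, not your actual code):

```python
import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader, Dataset

class VarLengthDataset(Dataset):
    """Toy dataset returning 1D tensors of different lengths,
    mimicking the [650] vs [108] shapes from the error message."""
    def __init__(self):
        self.data = [torch.ones(650), torch.ones(108)]

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx], 0  # (input, target)

def collate_pad(batch):
    # Split the samples into inputs and targets, then zero-pad the
    # inputs to the longest length so they can be stacked into one tensor.
    inputs, targets = zip(*batch)
    inputs = pad_sequence(list(inputs), batch_first=True)  # [batch, max_len]
    return inputs, torch.tensor(targets)

loader = DataLoader(VarLengthDataset(), batch_size=2, collate_fn=collate_pad)
inputs, targets = next(iter(loader))
print(inputs.shape)  # torch.Size([2, 650])
```

Whether padding is the right strategy depends on your data; for images you would typically resize or crop in the Dataset instead.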
Thank you very much for answering. But now this error appears:
Traceback (most recent call last):
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demoEmotion.py", line 320, in <module>
    fire.Fire(demo)
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 138, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 468, in _Fire
    target=component.name)
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 672, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demoEmotion.py", line 298, in demo
    n_epochs=n_epochs, batch_size=batch_size, seed=seed)
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demoEmotion.py", line 177, in train
    n_epochs=n_epochs,
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demoEmotion.py", line 54, in train_epoch
    input = input.cuda()
AttributeError: 'numpy.ndarray' object has no attribute 'cuda'
Although 'input' is converted to a torch tensor, 'ValueError: expected sequence of length 528 at dim 1 (got 76)' appears. Please tell me how I can solve this. Thanks a lot!
The first error is raised because input seems to be a numpy array.
You can check the type via print(type(input)).
Could you post the complete error message for the second issue and the line of code that is raising it?
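For reference, a numpy array has to be converted to a tensor before .cuda() can be called; a small sketch (the array shape is arbitrary):

```python
import numpy as np
import torch

arr = np.random.rand(3, 4).astype(np.float32)
print(type(arr))  # <class 'numpy.ndarray'> -> has no .cuda() method

input = torch.from_numpy(arr)  # zero-copy conversion to a torch.Tensor
if torch.cuda.is_available():
    input = input.cuda()  # valid now, since input is a tensor
```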
Thank you very much! When I add print(type(input)), the output shows: type(input) <class 'list'>
The code raising this error:
for batch_idx, (input, target) in enumerate(loader):
    # Create variables
    if torch.cuda.is_available():
        print('type(input)', type(input))
        input = input.cuda()
The error message is:
Traceback (most recent call last):
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demoEmotion.py", line 323, in <module>
    fire.Fire(demo)
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 138, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 468, in _Fire
    target=component.name)
  File "C:\Users\hp\Anaconda3\envs\tf-gpu\lib\site-packages\fire\core.py", line 672, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demoEmotion.py", line 301, in demo
    n_epochs=n_epochs, batch_size=batch_size, seed=seed)
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demoEmotion.py", line 180, in train
    n_epochs=n_epochs,
  File "C:\Users\hp\Downloads\efficient_densenet_pytorch-master\demoEmotion.py", line 56, in train_epoch
    input = input.cuda()
AttributeError: 'list' object has no attribute 'cuda'
Now when I run it again, the list object has no 'cuda'. Please answer! Thanks a lot in advance!
As the error explains, list objects do not have the .cuda() method, which is a tensor method.
Could you unpack the list and call .cuda() on each tensor inside the list?
Thanks! How do I unpack the list and call it? Please explain. When I convert to torch.tensor, 'ValueError: expected sequence of length 528 at dim 1 (got 76)' appears.
Since it's a list, you could get each value via:
tensor0 = input[0]
tensor1 = input[1]
tensor2 = input[2]
...
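Put together, moving every element of the list to the GPU could look like this (the batch contents below are made up for illustration; I'm assuming the loader yields a list of tensors):

```python
import torch

# Hypothetical batch: the DataLoader returns a list of tensors instead
# of one stacked tensor, e.g. when a custom collate_fn keeps them separate.
input = [torch.zeros(2, 528), torch.zeros(2, 76)]

# input.cuda() raises AttributeError: 'list' object has no attribute 'cuda';
# instead, move each tensor in the list to the device individually.
if torch.cuda.is_available():
    input = [t.cuda() for t in input]
```

Note that torch.tensor(input) fails here with exactly the "expected sequence of length 528 at dim 1 (got 76)" error, because tensors of different lengths cannot be merged into a single tensor without padding.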
Thank you very much! The problem above is solved. But the error 'RuntimeError: Expected 4-dimensional input for 4-dimensional weight [24, 3, 3, 3], but got 3-dimensional input of size [3, 3, 158] instead' appears. Thanks a lot!
The input tensor might be missing the batch dimension, if the current shape corresponds to [channels, height, width].
If so, you could add the batch dimension via x = x.unsqueeze(0).
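A short sketch, reusing the shape from the error message:

```python
import torch

x = torch.randn(3, 3, 158)  # [channels, height, width] - no batch dimension
x = x.unsqueeze(0)          # prepend a batch dimension of size 1
print(x.shape)              # torch.Size([1, 3, 3, 158])
```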
Thanks a lot! However, the error 'RuntimeError: Given input size: (150x1x79). Calculated output size: (150x0x39). Output size is too small' appears. Why does this error occur? Thanks!