I was using this example code in a deep Q network, but it keeps throwing the error “TypeError: expected Tensor as element 0 in argument 0, but got int”. The line in question is “action_batch = torch.cat(batch.action)”. I have tried both CPU and CUDA, but neither works. Any help would be appreciated.
I guess this error might be raised if the batch size was set to 1 and batch.action might then be a single scalar value / integer? If so, you could most likely use torch.tensor(batch.action) instead of torch.cat.
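A minimal sketch of the difference, assuming batch.action is a tuple holding either 1x1 tensors (as in the usual DQN replay memory) or plain Python ints in the failing case; the tuples below are made up for illustration:
import torch

# Made-up stand-ins for batch.action from a replay memory.
actions_as_tensors = (torch.tensor([[1]]), torch.tensor([[0]]), torch.tensor([[1]]))
actions_as_ints = (1, 0, 1)

# torch.cat needs a sequence of tensors:
action_batch = torch.cat(actions_as_tensors)            # shape [3, 1]

# Plain ints would raise "expected Tensor as element 0 in argument 0, but got int",
# so build the tensor directly instead:
action_batch_from_ints = torch.tensor(actions_as_ints)  # shape [3]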
Hi @ptrblck ,
I was using the following code and got a similar error:
all_loss[phase] = torch.cat((all_loss[phase], loss.detach().view(1, -1)))
TypeError: expected Tensor as element 0 in argument 0, but got numpy.float32
I tried to print the output:
print(">>>>>", loss.detach().view(1, -1), type(loss.detach().view(1, -1)), loss.detach().view(1, -1).dtype)
and got:
>>>>> tensor([[0.4254]], device='cuda:0') <class 'torch.Tensor'> torch.float32
>>>>> tensor([[0.5419]], device='cuda:0') <class 'torch.Tensor'> torch.float32
Traceback (most recent call last):
File "/home/banikr2/PycharmProjects/Blockface/Codes/Main.py", line 131, in <module>
all_loss[phase] = torch.cat((all_loss[phase], loss.detach().view(1, -1)))
TypeError: expected Tensor as element 0 in argument 0, but got numpy.float32
Could you help me debug it?
Btw, phase is just either train or val.
Could you check the type of all_loss[phase] and, in case it contains more than one object, the type of the first object? Based on the error message it seems that all_loss[phase] already contains a numpy array, which will raise this error.
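For example, a quick check along these lines (assuming all_loss and phase exist as in your snippet) would reveal a stray numpy value:
print(type(all_loss[phase]))          # e.g. <class 'numpy.float32'> would explain the TypeError
if isinstance(all_loss[phase], (list, tuple)):
    print(type(all_loss[phase][0]))   # also check the first element if it's a container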
I am actually trying to implement the code here:
print(">", all_loss)
> {'train': tensor([0.], device='cuda:0'), 'valid': tensor([0.], device='cuda:0')}
print(">>", all_loss[phase], type(all_loss[phase]))
>> tensor([0.], device='cuda:0') <class 'torch.Tensor'>
print(">>>", loss.detach().view(1, -1), type(loss.detach().view(1, -1)), loss.detach().view(1, -1).dtype)
>>> tensor([[0.5484]], device='cuda:0') <class 'torch.Tensor'> torch.float32
These objects seem to work after fixing the shape mismatch:
all_loss = {'train': torch.tensor([0.], device='cuda:0'),
'valid': torch.tensor([0.], device='cuda:0')}
print(all_loss)
> {'train': tensor([0.], device='cuda:0'), 'valid': tensor([0.], device='cuda:0')}
loss = torch.tensor([[0.5484]], device='cuda:0')
print(loss)
> tensor([[0.5484]], device='cuda:0')
phase = 'train'
all_loss[phase] = torch.cat((all_loss[phase], loss.detach().view(1, -1)))
> RuntimeError: Tensors must have same number of dimensions: got 2 and 1
all_loss[phase] = torch.cat((all_loss[phase], loss.detach().view(-1)))
print(all_loss)
> {'train': tensor([0.0000, 0.5484], device='cuda:0'), 'valid': tensor([0.], device='cuda:0')}
Made the two changes you mentioned, but I'm still getting the same error:
all_loss[phase] = torch.cat((all_loss[phase], loss.detach().view(-1)))
TypeError: expected Tensor as element 0 in argument 0, but got numpy.float32
Change/diff:
# all_loss = {key: torch.zeros(1).to(device) for key in phases}
all_loss = {'train': torch.tensor([0.], device='cuda:0'),
'valid': torch.tensor([0.], device='cuda:0')}
Are you getting this error when using my code snippet as well, or only with yours? In the latter case, it seems you are still adding a numpy array to all_loss, so you would still have to narrow down the offending line of code.
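As an illustration (the numpy assignment below is a made-up guess at an offending line, not your actual code), this reproduces the error and shows one way to normalize the stored value back to a tensor while you track down where the numpy value gets written:
import numpy as np
import torch

device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
all_loss = {'train': torch.zeros(1, device=device)}
phase = 'train'

# Hypothetical offending assignment somewhere in the loop:
all_loss[phase] = np.float32(0.4254)   # a numpy scalar instead of a tensor

loss = torch.tensor(0.5419, device=device)

# torch.cat((all_loss[phase], loss.detach().view(-1))) would now raise
# "expected Tensor as element 0 in argument 0, but got numpy.float32".
# Normalizing with torch.as_tensor keeps the concatenation working:
prev = torch.as_tensor(all_loss[phase], device=device).view(-1)
all_loss[phase] = torch.cat((prev, loss.detach().view(-1)))
print(all_loss[phase])                 # tensor([0.4254, 0.5419], ...)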
I tried to implement 3 Bayesian NNs simultaneously, each of which outputs a tuple. However, when I combined their results using torch.cat I got the same error: ValueError: only one element tensors can be converted to Python scalars.
My code is as follows:
self.bcnn1 = nn.Sequential(
    bcnn1d(in_channels=self.in_channels, out_channels=8, kernel_size=5, padding=2, dilation=1),
    BN(num_features=8))
self.bcnn2 = nn.Sequential(
    bcnn1d(in_channels=self.in_channels, out_channels=8, kernel_size=5, padding=6, dilation=3),
    BN(num_features=8))
self.bcnn3 = nn.Sequential(
    bcnn1d(in_channels=self.in_channels, out_channels=8, kernel_size=5, padding=12, dilation=6),
    BN(num_features=8))
bcnn_out = self.bcnn(torch.cat(bcnn1, bcnn2, bcnn3),dim=1)
However, I tried to convert each of the bcnn outputs to a tensor, but got ValueError: only one element tensors can be converted to Python scalars:
bcnn_out = self.bcnn(torch.cat(torch.Tensor(bcnn_out1), torch.Tensor(bcnn_out2), torch.Tensor(bcnn_out3))
Could you post the entire stacktrace and a minimal, executable code snippet to reproduce the issue, please?
OK
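For reference, torch.cat takes a single sequence of tensors plus an optional dim keyword, i.e. torch.cat((a, b, c), dim=1) rather than torch.cat(a, b, c), and wrapping a tuple of tensors in torch.Tensor(...) is typically what raises the “only one element tensors” ValueError. Here is a minimal sketch with made-up FakeBranch modules, which are purely illustrative stand-ins for the Bayesian bcnn branches returning an (output, kl) tuple:
import torch
import torch.nn as nn

class FakeBranch(nn.Module):
    # Illustrative stand-in for a Bayesian branch that returns (output, kl).
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(1, 8, kernel_size=5, padding=2)

    def forward(self, x):
        return self.conv(x), torch.tensor(0.)   # placeholder KL term

b1, b2, b3 = FakeBranch(), FakeBranch(), FakeBranch()
x = torch.randn(4, 1, 32)                        # [batch, channels, length]

out1, kl1 = b1(x)
out2, kl2 = b2(x)
out3, kl3 = b3(x)

# Unpack the tuples first, then pass the tensors as one sequence to torch.cat:
combined = torch.cat((out1, out2, out3), dim=1)  # shape [4, 24, 32]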