TypeError: len() of a 0-d tensor

Hi,
Kindly help me solve an error arising from an `assert len(…) == len(…)` statement. The code was originally written for torch 0.2.0 / torchvision 0.1.9, but it fails under torch 1.5.1 / torchvision 0.5.0 (installed for compatibility with CUDA 10.1).

Traceback (most recent call last):
  File "./main.py", line 186, in <module>
    cuda=cuda
  File "/home/js/GR/train.py", line 102, in train
    collate_fn=collate_fn,
  File "/home/js/GR/dgr.py", line 130, in train_with_replay
    collate_fn=collate_fn,
  File "/home/js/GR/dgr.py", line 205, in _train_batch_trainable_with_replay
    callback(trainable, progress, batch_index, result)
  File "/home/js/GR/train.py", line 157, in cb
    result['g_loss'], 'generator g loss', iteration, env=env
  File "/home/js/GR/visual.py", line 88, in visualize_scalar
    [name], name, iteration, env=env
  File "/home/js/GR/visual.py", line 93, in visualize_scalars
    assert len(scalars) == len(names)
  File "/home/js/anaconda3/envs/env_con/lib/python3.5/site-packages/torch/tensor.py", line 445, in __len__
    raise TypeError("len() of a 0-d tensor")
TypeError: len() of a 0-d tensor

A long time ago, PyTorch (0.2 here) did not have scalar (0-dimensional) tensors, so you had to use tensors of shape [1] instead. As a result, tensors could always act like sequences, and len() always worked on them.
Nowadays we do have scalar tensors, and these do not act like sequences, which is why len() raises the TypeError you see.
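A minimal sketch of the difference (standalone, not from your code):

```python
import torch

v = torch.ones(1)       # shape [1]: the 0.2-era way to hold a "scalar"
s = torch.tensor(1.0)   # shape []: a true 0-d scalar tensor (0.4+)

print(len(v))           # works: a 1-d tensor acts like a sequence
print(s.dim())          # 0
try:
    len(s)              # raises TypeError: len() of a 0-d tensor
except TypeError as e:
    print(e)
```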

You likely want to look more closely at what shapes can actually occur there and why.

Porting from PyTorch 0.2 does take some effort (there is also the Variable/Tensor merge, which code that old would not know about yet; see the 0.4 migration guide).
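One way to patch the failing function (a sketch, assuming `scalars` can arrive as a 0-d loss tensor as in your traceback) is to normalize it to shape [1] before the assert, restoring the 0.2-era behavior:

```python
import torch

def visualize_scalars(scalars, names):
    # A 0-d tensor (e.g. a single loss value) has no len();
    # unsqueeze it to shape [1] so it acts like a sequence again.
    if torch.is_tensor(scalars) and scalars.dim() == 0:
        scalars = scalars.unsqueeze(0)
    assert len(scalars) == len(names)
    ...  # rest of the visualization logic
```

Alternatively, you could call `.item()` at the call site in train.py to pass a plain Python float instead of a tensor.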