for i, (inputs, targets) in enumerate(data_loader):
    with torch.no_grad():
        inputs = Variable(inputs)
        targets = Variable(targets)
        print("targets.data", targets.item())
It doesn't seem to give the target values.
What does the print statement show instead?
It is printing the index numbers:

targets.data tensor([0, 1, 2, 3, 4, 5])

If I use print("targets.data", targets.item()) instead, it gives the error:

ValueError: only one element tensors can be converted to Python scalars
.item() works only on one-element tensors.
What is wrong with print(targets)?
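To illustrate the one-element restriction, here is a minimal sketch using only the standard PyTorch tensor API:

```python
import torch

# .item() converts a one-element tensor to a plain Python scalar.
single = torch.tensor([42])
print(single.item())  # 42

# For a whole batch of targets, use .tolist() to convert every element.
batch = torch.tensor([0, 1, 2, 3, 4, 5])
print(batch.tolist())  # [0, 1, 2, 3, 4, 5]

# Calling .item() on the batch raises the error seen above:
# "only one element tensors can be converted to Python scalars"
```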
It seems to print the indices, not the values.
Maybe these values are equal to the indices for the current batch. Could you set shuffle=True in your DataLoader and run your code again, or alternatively check the output for multiple target tensors?
With shuffle=True I get the output tensor([ 1192, 6908, 1663, 2771, 7581, 10296]).
The classes range from 1 to 84, and the total length of the data is 10364.
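This symptom is consistent with a Dataset whose __getitem__ accidentally returns the sample index as the target. As a hedged illustration (the class name and fields below are made up, not the poster's actual code), such a bug would look like this:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class BuggyDataset(Dataset):
    """Hypothetical dataset that mistakenly returns the index as the target."""

    def __init__(self, samples, labels):
        self.samples = samples
        self.labels = labels

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        # Bug: should return self.labels[index], not the index itself.
        return self.samples[index], index

data = BuggyDataset(torch.randn(10, 3), torch.randint(1, 85, (10,)))
loader = DataLoader(data, batch_size=5, shuffle=False)
inputs, targets = next(iter(loader))
print(targets)  # tensor([0, 1, 2, 3, 4]) -- the indices, not the labels
```

With shuffle=True the printed "targets" would become shuffled dataset indices, exactly like the tensor([1192, 6908, ...]) output above.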
Could you post the code for your Dataset?
data_loader = torch.utils.data.DataLoader(
    test_data,
    batch_size=opt.batch_size,
    shuffle=True,
    num_workers=opt.n_threads,
    pin_memory=True)
test.test(data_loader, model, opt, test_data.class_names, criterion, test_logger)
print("done")
test.py looks like:
for i, (inputs, targets) in enumerate(data_loader):
    with torch.no_grad():
        inputs = Variable(inputs)
        targets = Variable(targets)
        print("targets.data", targets)
        outputs = model(inputs)
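As an aside, Variable has been deprecated since PyTorch 0.4, so the loop above can operate on tensors directly, with torch.no_grad() wrapping the whole evaluation. A sketch of the same loop, where the tiny model and the in-memory "loader" below are stand-ins for the real objects:

```python
import torch

model = torch.nn.Linear(3, 84)  # stand-in for the real model
# Stand-in for data_loader: one batch of 6 samples with labels in 1..84.
data_loader = [(torch.randn(6, 3), torch.randint(1, 85, (6,)))]

model.eval()
with torch.no_grad():  # disable autograd for the whole evaluation loop
    for i, (inputs, targets) in enumerate(data_loader):
        outputs = model(inputs)
        print("targets", targets.tolist())
```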
Could you post the code of test_data, so that we can have a look at how your targets are created?
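Pending the actual test_data code, here is a hedged sketch of how a Dataset would normally store and return its targets, so that the printed targets hold class labels (1 to 84) rather than dataset indices. All names and shapes below are assumptions for illustration:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class LabeledDataset(Dataset):  # hypothetical name
    def __init__(self, samples, labels):
        assert len(samples) == len(labels)
        self.samples = samples
        self.labels = labels

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        # Return the stored label for this sample, not the index itself.
        return self.samples[index], self.labels[index]

labels = torch.randint(1, 85, (10364,))  # classes 1..84, 10364 samples
samples = torch.randn(10364, 3)
loader = DataLoader(LabeledDataset(samples, labels), batch_size=6, shuffle=True)
_, targets = next(iter(loader))
print(targets)  # six values in the range 1..84, not dataset indices
```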