Hi there, I'm trying to use DataParallel with a model implementation, but I got this weird error.
My code is roughly like this:
```python
import torch
import torch.nn as nn
from torch.utils import data
from torchvision import datasets, transforms

# construct the network
net = ...
device = torch.device('cuda:0')
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    net = nn.DataParallel(net)
net.to(device)

# load the training data
tr = data.DataLoader(
    datasets.CIFAR10('../data', train=True, download=True,
                     transform=transforms.ToTensor()),
    batch_size=128, shuffle=True, num_workers=1, pin_memory=True)

for input_data, _ in tr:
    inp = input_data.unsqueeze(-1).transpose(1, 4)
    inp = inp.to(device)
    output = net(inp)
```
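For reference, the reshape in the loop just moves the channel dimension to the end; here is my sanity check of the shapes (note that both `unsqueeze` and `transpose` return views of the batch, which may or may not be relevant):

```python
import torch

# Stand-in for one CIFAR-10 batch from ToTensor: (N, C, H, W)
input_data = torch.randn(128, 3, 32, 32)

# unsqueeze(-1): (128, 3, 32, 32, 1); transpose(1, 4) then swaps
# the channel dim with the new trailing dim.
inp = input_data.unsqueeze(-1).transpose(1, 4)
print(inp.shape)  # torch.Size([128, 1, 32, 32, 3])
```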
This threw an exception as follows:
```
RuntimeError: Output 0 of BroadcastBackward is a view and its base or another view of its base has been modified inplace. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.
```
It says an in-place operation modified a view somewhere, but I don't think I'm doing any in-place operations here. What could be wrong?
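In case it helps, my understanding of "in-place operations" is the underscore-suffixed tensor methods (`add_`, `mul_`, …) and augmented assignment (`+=`) on tensors. A minimal sketch, unrelated to my model, of the kind of situation I assume autograd is complaining about (`unbind` is one of the functions that returns multiple views):

```python
import torch

# `unbind` returns multiple views of `base`; modifying one of them
# in place raises a RuntimeError much like the one above.
base = torch.randn(3, 4, requires_grad=True)
view = base.unbind(0)[0]
try:
    view.add_(1.0)  # in-place op on the output of a multi-view function
except RuntimeError as e:
    print("RuntimeError:", e)
```

I don't see anything like this in my own code, though.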