It is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True)

I changed torch.tensor(x) to torch.tensor(x).clone().detach(), but the problem is not solved.
Do you know what I am doing wrong here?
Thanks in advance!

import numpy as np
import torch
import torch.nn as nn

for epoch in range(num_epochs):
    outputs = []
    outputs = torch.tensor(outputs, requires_grad=True)
    outputs = outputs.clone().detach().cuda()
    for fold in range(0, len(training_data), 5): # we take 5 images 
        xtrain = training_data[fold : fold+5]
        xtrain = torch.tensor(xtrain, requires_grad=True).clone().detach().float().cuda() 
        xtrain = xtrain.view(5, 3, 120, 120, 120) 
        # Clear gradients
        optimizer.zero_grad()
        # Forward propagation
        v = model(xtrain)
        v = torch.tensor(v, requires_grad=True).clone().detach()
        outputs = torch.cat((outputs,v),dim=0)
        # Calculate softmax and cross entropy loss
    targets = torch.Tensor(targets).clone().detach()
    labels = targets.cuda()
    outputs = torch.tensor(outputs, requires_grad=True)
    _, predicted = torch.max(outputs, 1) # take the maximal value, e.g. [0.96 0.04] ==> 0 (class index)
    accuracy = accuracyCalc(predicted, targets)
    labels = labels.long()
    labels = labels.view(-1)
    loss = nn.CrossEntropyLoss()
    loss = loss(outputs, labels)    
    # Calculating gradients
    loss.backward()
    # Update parameters
    optimizer.step()
    loss_list_train.append(loss.clone()) 
    accuracy_list_train.append(accuracy/100)
    np.save('Datasets/brats/accuracy_list_train.npy', np.array(accuracy_list_train))
    np.save('Datasets/brats/loss_list_train.npy', np.array(loss_list_train)) 
    print('Iteration: {}/{}  Loss: {}  Accuracy: {} %'.format(epoch+1,  num_epochs, loss.clone(), accuracy))
print('Model training  : Finished')

Result:

UserWarning: To copy construct from a tensor, it is recommended to use
sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True),
rather than torch.tensor(sourceTensor)

The warning points to wrapping an existing tensor in torch.tensor(), which is not recommended. Your change didn't remove it, because torch.tensor(x).clone().detach() still calls torch.tensor(x) first. Instead of torch.tensor(outputs), use outputs.clone().detach(), or add .requires_grad_(True) if the copy needs gradients.
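
To make the replacement concrete, here is a minimal sketch (a dummy tensor stands in for outputs; the same pattern applies wherever your loop wraps an existing tensor in torch.tensor()):

import torch

outputs = torch.randn(5, 2)  # dummy stand-in for an existing tensor

# Triggers the UserWarning: torch.tensor() copy-constructs from a tensor
bad = torch.tensor(outputs, requires_grad=True)

# Recommended: explicit copy, detached from the current graph
good = outputs.clone().detach()

# If the copy should track gradients, re-enable them explicitly
good_grad = outputs.clone().detach().requires_grad_(True)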


Thank you so much @ptrblck