Suppose you have a series of tensor operations, say these dummy operations:

a = Operation(t1) ====> on cuda

b = Operation(a) ====> on cuda

c = Operation(b) ====> on cuda

d = Operation(c) ====> moving tensor c to cpu

convert d back to cuda

...

Does this style break the computation graph? I'm assuming it doesn't, but I'm not sure.

If it does, is there any workaround?
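As a sanity check, the pattern above can be reproduced with a small sketch. The operations and variable names here are placeholders for the dummy `Operation` calls; the point is that `.cpu()` and `.to(device)` are themselves recorded by autograd, so gradients flow back through the device transfers:

```python
import torch

# Use CUDA when available so the sketch also runs on a CPU-only machine.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

t1 = torch.randn(3, 3, device=device, requires_grad=True)
a = t1 * 2                      # on device
b = a + 1                       # on device
c = b.relu()                    # on device
d = c.cpu()                     # move to CPU; autograd records this transfer
e = (d.to(device) ** 2).sum()   # convert back to the device

e.backward()                    # gradients flow through the round trip
print(t1.grad is not None)      # True: the graph was not broken
```

If this snippet raised an error or left `t1.grad` as `None`, the device transfer would have broken the graph; it doesn't, because a `.cpu()` / `.cuda()` move is just another differentiable node in the graph.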

The reason I'm asking is that I have a problem with this PyTorch function:

torch.Tensor.svd(X_bar.cpu().T, some=True, compute_uv=True)

or

torch.svd(X_bar.cpu().T, some=True, compute_uv=True)

Every couple of epochs it throws the error below:

svd_cpu: the updating process of SBDSDC did not converge (error: 17)
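This LAPACK non-convergence is usually triggered by NaN/Inf entries or an ill-conditioned input rather than by the device transfer itself. One common workaround (a sketch, not a guaranteed fix; `robust_svd`, the jitter value, and the retry count are all hypothetical choices) is to run the SVD in double precision and retry with a little noise added when it fails:

```python
import torch

def robust_svd(x, jitter=1e-7, attempts=3):
    """Hypothetical workaround: cast to float64 (more robust for LAPACK's
    divide-and-conquer routine) and retry with small jitter on failure."""
    if torch.isnan(x).any() or torch.isinf(x).any():
        raise ValueError("input contains NaN/Inf; fix upstream first")
    x = x.double()
    for _ in range(attempts):
        try:
            return torch.svd(x, some=True, compute_uv=True)
        except RuntimeError:
            # Perturb slightly relative to the matrix scale and try again.
            x = x + jitter * x.norm() * torch.randn_like(x)
    raise RuntimeError("SVD failed to converge after retries")

u, s, v = robust_svd(torch.randn(8, 5))
```

Note that the jitter changes the result slightly, so it is only acceptable when an approximate decomposition is fine; checking for NaN/Inf in `X_bar` before the call is worth doing in any case.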