Is grad updated after converting a tensor to numpy, modifying it, and converting back?

If I do some operations on a tensor in numpy/pandas, new_stuff = X.detach().cpu().numpy(), and then re-create it as Xprime = torch.tensor(new_stuff, requires_grad=True), will the parameters still update when I feed it into the model, output = model(Xprime)?

All operations inside the model will be tracked by Autograd, and the parameters will get valid gradients during the backward pass.
However, the operations preceding the creation of Xprime will not be tracked, since you detached the tensor and used numpy (via pandas) to compute new_stuff.
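
Here is a minimal sketch of that flow, assuming a toy nn.Linear model and a dummy multiplication standing in for your numpy/pandas operations (the names model, X, new_stuff, and Xprime follow your post):

```python
import numpy as np
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
X = torch.randn(8, 4, requires_grad=True)

# The numpy/pandas work happens outside of Autograd's view
new_stuff = X.detach().cpu().numpy()
new_stuff = new_stuff * 2.0  # stand-in for your numpy/pandas operations

# Re-create a leaf tensor; Autograd tracks everything from here on
Xprime = torch.tensor(new_stuff, dtype=torch.float32, requires_grad=True)

output = model(Xprime)
output.mean().backward()

print(model.weight.grad is not None)  # True: the parameters get gradients
print(Xprime.grad is not None)        # True: Xprime is a tracked leaf tensor
print(X.grad)                         # None: the graph was cut at detach()
```

So the model will train as usual, but no gradient will ever flow back through the numpy/pandas steps into the original X.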