How can I put a tensor with a grad_fn into a multiprocessing Queue?

I am trying to put a tensor that has a grad_fn into a Queue, but the receiving side gets the tensor without its grad_fn. Is there a way to send the grad_fn along with the tensor? If not, can I call tensor.backward() inside the worker process and then run optimizer.step() later in the main process?
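
Here is a minimal sketch of what I am doing (simplified; assuming a standard multiprocessing.Queue and a CPU tensor, with placeholder names):

```python
import multiprocessing as mp

import torch


def worker(queue):
    # Build a non-leaf tensor in the worker: loss has a grad_fn here
    w = torch.randn(3, requires_grad=True)
    loss = (w * 2).sum()
    print("in worker:", loss.grad_fn)   # e.g. <SumBackward0 object at ...>
    queue.put(loss)


if __name__ == "__main__":
    queue = mp.Queue()
    p = mp.Process(target=worker, args=(queue,))
    p.start()
    received = queue.get()
    p.join()
    # The tensor arrives in the main process without its grad_fn
    print("in main:", received.grad_fn)  # prints None
```

In the main process, `received.grad_fn` is None, so I cannot call `received.backward()` there.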