Hi all. I have a GPU Variable tensor from my network output, and I want to apply some scipy operations to it, for example dilation, erosion, and Gaussian blur. What I want to know is whether I can accomplish the task as follows:

GPU Variable tensor -> CPU Variable tensor -> CPU numpy array -> CPU numpy array after the scipy operation -> CPU Variable tensor -> GPU Variable tensor. Will this method work? Will it affect the gradient? If it can't achieve the goal, can you suggest some other way to accomplish this?
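For reference, the round-trip described above can be sketched like this (a minimal example using the modern `.cpu().numpy()` API rather than the old Variable wrapper; the sigma value is arbitrary). It runs, but the resulting tensor no longer carries gradient history:

```python
import torch
from scipy.ndimage import gaussian_filter

x = torch.randn(4, 4, requires_grad=True)   # stand-in for a network output

# tensor -> CPU numpy array -> scipy op -> tensor
blurred_np = gaussian_filter(x.detach().cpu().numpy(), sigma=1.0)
y = torch.from_numpy(blurred_np)

print(y.requires_grad)  # False: the gradient path was broken by the conversion
```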

If you have a scipy step in your pipeline, you generally have to write an `autograd.Function`, because:

- converting to a numpy array has to be done with `x.data.numpy()`, which detaches the gradient path
- PyTorch doesn't know the gradient of a scipy function

See this page for an example:

http://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html
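As a concrete sketch of such an `autograd.Function`, here is one possible wrapper around `scipy.ndimage.gaussian_filter` (the class name and sigma handling are my own illustration, not from the tutorial). Since a Gaussian kernel is symmetric, the blur is self-adjoint, so the backward pass can simply apply the same blur to the incoming gradient:

```python
import torch
from scipy.ndimage import gaussian_filter

class ScipyGaussianBlur(torch.autograd.Function):
    """Gaussian blur via scipy, with a hand-written backward pass."""

    @staticmethod
    def forward(ctx, input, sigma):
        ctx.sigma = sigma
        # detach, move to CPU, run the scipy op, then rebuild a tensor
        out_np = gaussian_filter(input.detach().cpu().numpy(), sigma=sigma)
        return torch.as_tensor(out_np, dtype=input.dtype, device=input.device)

    @staticmethod
    def backward(ctx, grad_output):
        # the adjoint of a symmetric-kernel convolution is the same convolution,
        # so blur the incoming gradient with the same sigma
        grad_np = gaussian_filter(grad_output.detach().cpu().numpy(),
                                  sigma=ctx.sigma)
        grad_input = torch.as_tensor(grad_np, dtype=grad_output.dtype,
                                     device=grad_output.device)
        return grad_input, None  # no gradient for the sigma argument

x = torch.randn(8, 8, requires_grad=True)
y = ScipyGaussianBlur.apply(x, 2.0)
y.sum().backward()  # x.grad is now populated
```

The same pattern works for dilation and erosion, except that their backward passes are not self-adjoint in the same simple way, so you would need to derive (or approximate) the appropriate gradient for those operations yourself.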