Render + Error: There are no graph nodes that require computing gradients

Hi, I am trying to extract the texture from a picture. To do so, I am using a loss based on Gram matrices at different layers of a VGG19. Furthermore, to create the picture I am rendering it from some SVBRDF maps, as is done in https://github.com/keunhong/svbrdf-renderer (some OpenGL is used for that and I am afraid I am not really familiar with it).
My aim is not just to obtain the texture by changing the picture itself, but by changing the data of these maps.

My model already works when I apply the changes directly to the image. However, I get the following error when I try to apply these changes to the different matrices I use for rendering:

RuntimeError: there are no graph nodes that require computing gradients

I have set the maps as Variables that require grad:

    svbrdf_params = {
        'map1': Variable(ini_torch(2000, 2000, 3) + 1, requires_grad=True),
        'map2': Variable(ini_torch(2000, 2000, 3) + 1, requires_grad=True),
        'map3': Variable(ini_torch(2000, 2000, 3) + 1, requires_grad=True),
        'map4': Variable(ini_torch(2000, 2000, 3) + 1, requires_grad=True),
        'arr': Variable(alpha_beta_torch(0.4, 0.4), requires_grad=True),
    }

but since I have to convert them to numpy when doing the render (otherwise the render would not work), I am afraid I am somehow losing the gradient information.
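
Here is a minimal sketch of what I suspect is happening (not my actual rendering code, and the tensor shapes and names are just placeholders): once the data goes through numpy, the result is a brand-new tensor with no autograd history, so nothing in the graph requires gradients any more.

    import torch
    from torch.autograd import Variable

    maps = Variable(torch.ones(4, 4, 3), requires_grad=True)

    # Differentiable path: the loss is connected to `maps`, so backward() works.
    loss_ok = (maps * 2.0).sum()
    loss_ok.backward()
    print(maps.grad is not None)  # True

    # Non-differentiable path: going through numpy creates a new tensor with no
    # autograd history, so no node in the graph requires computing gradients.
    rendered_np = (maps * 2.0).data.numpy()        # leaves the autograd graph here
    rendered = Variable(torch.from_numpy(rendered_np))
    loss_broken = (rendered * 2.0).sum()
    # loss_broken.backward()  # -> RuntimeError like the one above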

As I said, the loss function I am using compares the Gram matrices of both images:

    loss = loss + torch.sum((gram - texture_target[i]) *
                            (gram - texture_target[i])) * texture_weights[i]

where the Gram matrix is computed once the image has been rendered. I am not sure, but this could also be a source of problems, since the loss function might then carry no information about the initial maps.
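
For context, this is roughly how I compute the Gram matrices and accumulate the loss (a simplified sketch with dummy activations standing in for the real VGG19 features; my actual code extracts them from the network):

    import torch

    def gram_matrix(features):
        # features: VGG activations of shape (batch, channels, height, width)
        b, c, h, w = features.size()
        flat = features.view(b * c, h * w)
        # Inner products between flattened channel responses, normalised by size.
        return torch.mm(flat, flat.t()) / (b * c * h * w)

    # Dummy stand-ins for the VGG19 activations, target Gram matrices and weights.
    features_per_layer = [torch.rand(1, 64, 32, 32), torch.rand(1, 128, 16, 16)]
    texture_target = [gram_matrix(f.clone()) for f in features_per_layer]
    texture_weights = [1.0, 1.0]

    loss = 0
    for i, features in enumerate(features_per_layer):
        gram = gram_matrix(features)
        loss = loss + torch.sum((gram - texture_target[i]) *
                                (gram - texture_target[i])) * texture_weights[i]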

Does anyone have an idea of how I should proceed? Please tell me if you need some extra information; since the code is pretty big I didn't want to post it all, but I don't know if I am missing some relevant part.