Optimizing Neural Network Inputs: Ensuring Shared Gradients for Common Parameters Across Coordinates

Hello,
I am trying to optimize the input to a network, and I want the same parameters across a whole dimension of the input tensor.

The input consists of process parameters and coordinates (input[coordinates, parameter features]), and the network outputs a quality value for an object (output[coordinates, quality]).

Now I want to find the process parameters that give the best object quality, but the process parameters should be the same for all coordinates. Right now I get different process parameters for each coordinate.

Is there a way to “share” the gradient over a whole dimension of a tensor, so that the process parameters are all updated identically across the coordinates?

My code looks like this:

input.requires_grad_(True)                    # input.shape: [1000, 15]
optimizer = torch.optim.SGD([input], lr=0.1)

for iteration in range(num_iterations):
    optimizer.zero_grad()

    result = net(input)                       # per-coordinate quality
    loss = result.mean()
    loss.backward()

    input.grad[:, 3:] = 0                     # only optimize the first three parameters
    optimizer.step()
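To make it concrete, the effect I am after would be something like replacing the per-coordinate gradients of the shared columns with their mean before the optimizer step. This is only a toy sketch of the idea, with a dummy linear layer standing in for my actual network:

import torch

# dummy stand-ins for my real network and data, just to illustrate the idea
net = torch.nn.Linear(15, 1)
input = torch.randn(1000, 15, requires_grad=True)
optimizer = torch.optim.SGD([input], lr=0.1)

optimizer.zero_grad()
net(input).mean().backward()

# give every coordinate the same gradient for the first three (shared) columns
input.grad[:, :3] = input.grad[:, :3].mean(dim=0, keepdim=True)
input.grad[:, 3:] = 0
optimizer.step()

(This only keeps the first three columns identical across coordinates if they start out identical, which is part of why I am asking whether there is a cleaner way.)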

Thanks!

I found the solution to what I wanted to do. To get the same values over the first dimension, I build the input from a view created with .expand(), so the parameter update is the same for all coordinates.

# var: the shared process parameters (first three input columns)
# var_second: the per-coordinate features; only var is optimized
optimizer = torch.optim.Adam([var], lr=0.1)

for iteration in range(num_iterations):
    optimizer.zero_grad()

    # expand the shared parameters over all coordinates; .expand() returns a
    # view, so the gradients of all rows are accumulated back into var
    input = torch.cat((var.unsqueeze(0).expand(var_second.shape[0], -1), var_second),
                      dim=1)

    loss = net(input)
    loss.mean().backward()
    optimizer.step()
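For completeness, the reason this works: .expand() returns a view of var, and during backward the gradients of all expanded rows are summed back into the underlying tensor, so var receives one combined gradient and every coordinate gets exactly the same update. A quick toy check, with shapes chosen only to match the example above:

import torch

var = torch.zeros(3, requires_grad=True)     # shared process parameters
var_second = torch.randn(1000, 12)           # per-coordinate features

expanded = var.unsqueeze(0).expand(var_second.shape[0], -1)   # view, shape [1000, 3]
inp = torch.cat((expanded, var_second), dim=1)                # shape [1000, 15]

inp.sum().backward()
print(var.grad)   # tensor([1000., 1000., 1000.]): the row gradients were summed into var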