When working with nn.TripletMarginLoss, do we need to configure the optimizer to search for maximum values (instead of minimum values)?

According to my understanding, we are searching for the weights that minimize the loss function, i.e.:

 optim.zero_grad() # reset the gradients accumulated in the previous step
 loss = Loss(...)  # compute the loss value according to the loss function's metric
 loss.backward()   # compute the gradients
 optim.step()      # take one gradient-descent step toward the minimum
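For reference, a minimal runnable sketch of this standard minimization loop with nn.TripletMarginLoss might look as follows; the embedding network, optimizer settings, batch size, and random triplets here are invented placeholders, not part of the question itself:

    import torch
    import torch.nn as nn

    # Placeholder embedding network and random triplet batch, for illustration only.
    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.TripletMarginLoss(margin=1.0)

    anchor, positive, negative = (torch.randn(16, 128) for _ in range(3))

    optim.zero_grad()                 # reset gradients from the previous step
    loss = criterion(model(anchor),   # loss value according to the triplet metric
                     model(positive),
                     model(negative))
    loss.backward()                   # compute the gradients
    optim.step()                      # one gradient-descent step toward the minimum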

When dealing with TripletMarginLoss (nn.TripletMarginLoss), we are searching for the weights which will maximize this loss function.

So, how can we configure the optimizer (optim.Adam or optim.SGD) to find the maximum value, rather than the minimum value, of the loss function?

If I understand correctly, you want to optimize the network to maximize a loss function?

If so, you can just minimize the negative of that loss function, which in turn will maximize the loss.
i.e.,

optim.zero_grad()  # reset gradients from the previous step
loss = -Loss(...)  # negate the loss value
loss.backward()    # gradients of -loss point uphill for the original loss
optim.step()       # so minimizing -loss performs gradient ascent on loss
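As a concrete sketch (using the same kind of invented placeholder network, optimizer, and random triplets as above, purely for illustration), the only change relative to a normal training step is the sign of the loss:

    import torch
    import torch.nn as nn

    # Placeholder network and data, for illustration only.
    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.TripletMarginLoss(margin=1.0)

    anchor, positive, negative = (torch.randn(16, 128) for _ in range(3))

    optim.zero_grad()
    loss = -criterion(model(anchor), model(positive), model(negative))
    loss.backward()   # gradients of -loss point uphill for the original loss
    optim.step()      # so this step performs gradient ascent on the original loss

The optimizer itself needs no special configuration: negating the scalar loss flips the sign of every gradient during backpropagation, so a plain gradient-descent step on -loss is exactly a gradient-ascent step on the original loss.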