Hello!

I’m really enjoying using PyTorch for classification and regression. I have an interesting problem that I can’t quite figure out, though I feel like I’m really close.

My problem:

I have created a network with three outputs; let’s call them x, y and z.

I have a function F(x, y, z) that returns a value between 0.0 and 100.0, where higher is better.

My custom loss is therefore 100 - F(x, y, z) at each step.

The goal is to find the best combination of outputs for the problem F(…).

(I know a genetic algorithm will outperform this; my current project is to prove that across an array of problems.)

To implement the above, I force the network to take one piece of input data with a batch size of 1, and in the fitness function I completely ignore the ‘true’ and ‘predicted’ values, replacing the loss with 100 - F(x, y, z). The weights and outputs therefore produce one candidate solution at every epoch.
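For illustration, the setup I mean looks roughly like this (the layer sizes here are just placeholders, not my exact network):

```python
import torch
import torch.nn as nn

# Rough sketch of the single-sample setup described above
# (layer sizes are placeholders):
n_in, n_h, n_out = 10, 5, 3
model = nn.Sequential(nn.Linear(n_in, n_h),
                      nn.ReLU(),
                      nn.Linear(n_h, n_out))

x = torch.randn(1, n_in)  # one fixed input, batch size 1
out = model(x)            # one candidate [x, y, z] per forward pass
print(out.shape)          # torch.Size([1, 3])
```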

Outputs are rounded to integers, since F(…) requires them. To prevent the rounding from stalling training, I use a large momentum and learning rate.
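The rounding step just turns the raw float outputs into the integers F(…) expects (the values here are example outputs):

```python
# Round example raw outputs to the integers F(...) expects:
raw = [0.0, 0.0200, 0.6790]
rounded = [int(round(v)) for v in raw]
print(rounded)  # [0, 0, 1]
```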

The issue I’m having is that, although the loss function runs and my first [x, y, z] is evaluated, the values never change from epoch to epoch. The network isn’t learning from the results produced.

My code is as follows:

Note: testnetwork() is too long to paste, but it is the F(x, y, z) mentioned above. Any dummy function can replace it, e.g. return x + z*y/2, so that the goal becomes minimising 100 - (x + z*y/2).

```
import torch
import torch.nn as nn
from testnetwork import *

n_in, n_h, n_out, batch_size = 10, 5, 3, 5

x = torch.randn(batch_size, n_in)
# Target is unused: my_loss() ignores it and calls fitness() instead
y = torch.tensor([[1.0], [1.0], [1.0], [1.0], [1.0], [1.0], [1.0], [0.0], [1.0], [1.0]])

model = nn.Sequential(nn.Linear(n_in, n_h),
                      nn.ReLU(),
                      nn.Linear(n_h, n_out))  # output layer producing the three values x, y, z


def fitness(string):
    print(string)
    vals = string.split(",")
    vals[0] = int(round(float(vals[0])))
    vals[1] = int(round(float(vals[1])))
    vals[2] = int(round(float(vals[2])))
    print(vals)
    # Replace the usual loss with 100 - F(x, y, z)
    loss = 100 - testnetwork(vals[0], vals[1], vals[2])
    return loss


def my_loss(output, target):
    # Strip the "tensor(...)" wrapper to recover the raw numbers as text
    table = str.maketrans(dict.fromkeys('tensor()'))
    ftn = fitness(str(output.data[0][0]).translate(table) + ", "
                  + str(output.data[0][1]).translate(table) + ", "
                  + str(output.data[0][2]).translate(table))
    loss = torch.mean((output - output) + ftn)
    return loss


#optimizer = torch.optim.SGD(model.parameters(), lr=1, momentum=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1)  # note: unlike SGD, Adam has no momentum argument

for epoch in range(10):
    # Forward propagation
    y_pred = model(x)

    # Compute and print loss
    loss = my_loss(y_pred, y)
    print('epoch: ', epoch, ' loss: ', loss.item())

    # Zero the gradients
    optimizer.zero_grad()

    # Perform a backward pass (backpropagation)
    loss.backward(retain_graph=True)

    # Update the parameters
    optimizer.step()
```

Thank you so much for reading my post!

Jordan

Edit: Here is the console output, in case it helps:

```
epoch: 0 loss: 50.339725494384766
0., 0.0200, 0.6790
[0, 0, 1]
testing: [0, 0, 1]
epoch: 1 loss: 50.339725494384766
0., 0.0200, 0.6790
[0, 0, 1]
testing: [0, 0, 1]
epoch: 2 loss: 50.339725494384766
0., 0.0200, 0.6790
[0, 0, 1]
testing: [0, 0, 1]
epoch: 3 loss: 50.339725494384766
0., 0.0200, 0.6790
[0, 0, 1]
testing: [0, 0, 1]
epoch: 4 loss: 50.339725494384766
0., 0.0200, 0.6790
[0, 0, 1]
```

…and so on, nothing seems to change from epoch to epoch.