Hello everyone,

I have a special request for the last layer of the encoder.

The sum of all neurons in this layer must add up to one.

Example: tensor([0, -4, 5]) --> 0 - 4 + 5 = 1

To achieve this, I call a separate function from the `forward` method, as recommended in another post on this forum. Here is the function:

```
# ASC function (enforces that the abundance sum equals 1)
def ASC_function(inputTensor):
    outputTensor = inputTensor.clone().cuda()
    for n in range(inputTensor.size(0)):
        for m in range(inputTensor.size(1)):
            # pylint: disable=E1101  # needed here to silence a false positive in VS Code
            outputTensor[n][m] = torch.div(inputTensor[n][m], torch.sum(inputTensor[n]).cuda()).cuda()
            # pylint: enable=E1101
    return outputTensor
```
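For comparison, the same per-row normalization can be expressed without Python loops by dividing each row by its sum via broadcasting (`asc_vectorized` is a hypothetical name for this sketch; it mirrors the division in `ASC_function` above and stays on whatever device the input tensor is on, so no explicit `.cuda()` calls are needed):

```python
import torch

def asc_vectorized(input_tensor):
    # Divide every row by its own sum so each row adds up to 1.
    # keepdim=True keeps the sums as shape (N, 1), which broadcasts
    # across the row during the division.
    return input_tensor / input_tensor.sum(dim=1, keepdim=True)

x = torch.tensor([[0.0, -4.0, 5.0],
                  [2.0, 2.0, 4.0]])
y = asc_vectorized(x)
print(y.sum(dim=1))  # each row sums to 1
```

Because this is a single elementwise kernel instead of a Python double loop with one tiny kernel launch per element, it avoids the per-element GPU synchronization that makes the loop version slow.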

And here is my network:

```
# define the autoencoder network
class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        # encoder
        self.ecf_1 = nn.Sequential(
            nn.Linear(in_features=91, out_features=9*3),
            nn.BatchNorm1d(9*3),
            nn.Linear(in_features=9*3, out_features=6*3),
            nn.BatchNorm1d(6*3),
            nn.Linear(in_features=6*3, out_features=3*3),
            nn.BatchNorm1d(3*3),
            nn.Linear(in_features=3*3, out_features=3),
            nn.BatchNorm1d(3),
            nn.LeakyReLU(300)
        )
        # encoder
        self.ecf_2 = nn.Dropout(0.5)
        # decoder
        self.dcf_1 = nn.Linear(in_features=3, out_features=91)
        self.dcf_2 = nn.Sigmoid()

    def forward(self, x):
        x = self.ecf_1(x)
        x = ASC_function(x)  # apply the ASC (abundance sum-to-one constraint) function
        x = self.ecf_2(x)
        x = self.dcf_1(x)
        y = self.dcf_2(x)
        return y

net = Autoencoder()  # instantiate the autoencoder network defined above
```

Unfortunately, this slows down training considerably.

Is there a more elegant solution?