Hi, I'm a newbie trying to train an RBF network. I used the MNIST database and the PyTorch framework. The results are the same in each epoch…

Like this:

```
Epoch: 1
Accuracy: 0.815 Loss: 5.701 Recall: 0.507 Precision: 0.340
Epoch: 2
Accuracy: 0.815 Loss: 5.628 Recall: 0.507 Precision: 0.340
Epoch: 3
Accuracy: 0.815 Loss: 5.570 Recall: 0.507 Precision: 0.340
Epoch: 4
Accuracy: 0.815 Loss: 5.523 Recall: 0.507 Precision: 0.340
Epoch: 5
Accuracy: 0.815 Loss: 5.486 Recall: 0.507 Precision: 0.340
Epoch: 6
Accuracy: 0.815 Loss: 5.456 Recall: 0.507 Precision: 0.340
```

And that happens with several RBF settings… I've changed the way the centers are initialized, the sigma initialization, the number of clusters, the batch size, and the learning rate… but it's still the same: the following epochs just repeat the result of the first epoch, and sometimes only the loss changes, like above.

Here is my code:

```
class RBF(nn.Module):
    def __init__(self, in_layers, centers, sigmas):
        super(RBF, self).__init__()
        self.in_layers = in_layers[0]
        self.centers = nn.Parameter(centers)
        self.sigmas = sigmas  # needed by the alternative radial_basis further down
        self.dists = nn.Parameter(torch.ones(1, centers.size(0)))
        # self.linear0 = nn.Linear(in_layers[0], in_layers[0], bias=True)
        self.linear1 = nn.Linear(centers.size(0), in_layers[1], bias=True)

    def forward(self, x):
        phi = self.radial_basis(x)
        out = torch.sigmoid(self.linear1(phi.float()))
        return out

    def radial_basis(self, x):
        # expand centers and inputs to matching (batch, n_centers, features) shapes
        c = self.centers.view(self.centers.size(0), -1).repeat(x.size(0), 1, 1)
        x = x.view(x.size(0), -1).unsqueeze(1).repeat(1, self.centers.size(0), 1)
        phi = torch.exp(-self.dists.mul((c - x).pow(2).sum(2, keepdim=False).pow(0.5)))
        return phi
```
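For comparison, the same kernel can be written without the `repeat` calls using `torch.cdist`. This is just a sketch (the standalone function name `radial_basis_vectorized` and the random shapes are illustrative, not from my actual setup):

```python
import torch

def radial_basis_vectorized(x, centers, dists):
    # x: (batch, *), centers: (n_centers, *), dists: (1, n_centers) learned widths
    x = x.view(x.size(0), -1)
    c = centers.view(centers.size(0), -1)
    d = torch.cdist(x, c)          # pairwise Euclidean distances, (batch, n_centers)
    return torch.exp(-dists * d)   # same exp(-width * ||x - c||) kernel as above

# quick shape check on random data
x = torch.randn(4, 28 * 28)
centers = torch.randn(10, 28 * 28)
dists = torch.ones(1, 10)
phi = radial_basis_vectorized(x, centers, dists)
print(phi.shape)  # torch.Size([4, 10])
```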

I have also tried this radial_basis, with the same results:

```
    def radial_basis(self, x):
        x = x.view(x.size(0), -1)
        size = [self.centers.size(0), x.size(0)]
        sigma = self.sigmas
        dists = torch.empty(size).to(device)
        for i, c in enumerate(self.centers):
            c = c.reshape(-1, c.size(0))
            temp = (x - c).pow(2).sum(-1).pow(0.5)
            dists[i] = temp
        dists = dists.permute(1, 0)
        phi = torch.exp(-1 * (dists / (2 * sigma)))  # Gaussian
        return phi
```

And the training step is below:

```
def training(engine, batch, device, model, criterion, optimizer):
    inputs, labels = batch[0].to(device), batch[1].to(device)
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    return outputs, labels
```
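As a sanity check that this kind of RBF model can learn at all with a loop like the one above, here is a minimal self-contained toy example. Everything in it (`TinyRBF`, the two-blob data, the hyperparameters) is an illustrative assumption, not my actual MNIST setup; note it feeds raw logits to `CrossEntropyLoss` instead of sigmoid outputs:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: two Gaussian blobs in 2D with binary labels
X = torch.cat([torch.randn(50, 2) + 2.0, torch.randn(50, 2) - 2.0])
y = torch.cat([torch.zeros(50, dtype=torch.long), torch.ones(50, dtype=torch.long)])

class TinyRBF(nn.Module):
    def __init__(self, centers, n_classes):
        super().__init__()
        self.centers = nn.Parameter(centers.clone())
        # learn log(sigma) so the width stays positive
        self.log_sigma = nn.Parameter(torch.zeros(centers.size(0)))
        self.linear = nn.Linear(centers.size(0), n_classes)

    def forward(self, x):
        d = torch.cdist(x, self.centers)                         # (batch, n_centers)
        phi = torch.exp(-d.pow(2) / (2 * self.log_sigma.exp().pow(2)))
        return self.linear(phi)                                  # raw logits

centers = X[torch.randperm(100)[:8]]  # pick 8 data points as initial centers
model = TinyRBF(centers, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)

losses = []
for epoch in range(30):
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())

print(losses[0], losses[-1])  # the loss should drop noticeably
```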

I'm not sure if it's an architecture problem… or something with the weights… it's as if the backward() function isn't doing anything…
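To check whether backward() is producing gradients at all, something like this could be run right after `loss.backward()` (the helper name `report_grads` is just hypothetical; the `nn.Linear` at the bottom only demonstrates the usage):

```python
import torch

def report_grads(model):
    # Map each parameter name to its gradient norm (None if no grad flowed).
    # All zeros/None suggests a broken graph; tiny-but-nonzero values point
    # more toward saturation or a too-small learning rate.
    stats = {}
    for name, p in model.named_parameters():
        stats[name] = None if p.grad is None else p.grad.norm().item()
    return stats

# usage with any nn.Module, after a backward pass:
model = torch.nn.Linear(3, 2)
out = model(torch.randn(5, 3)).sum()
out.backward()
grads = report_grads(model)
print(grads)  # e.g. {'weight': ..., 'bias': ...}
```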