My custom loss function does not work

Hi, I created a custom loss function in which the loss is calculated based on silhouette score. Here is my loss function:

import torch
from torch.autograd import Variable
from sklearn.cluster import KMeans
from sklearn import metrics

class ClusteringLoss(torch.nn.Module):
    def __init__(self, alpha=1.0):
        super().__init__()
        self.alpha = alpha

    def forward(self, embeddingData, origData, k):
        # Convert the embeddings to numpy so KMeans can consume them
        repData = embeddingData.detach().numpy()
        kmeansModel = KMeans(n_clusters=k).fit(repData)
        yhat = kmeansModel.labels_
        # Score the clustering of the original data with the KMeans labels
        silhouetteScore = metrics.silhouette_score(origData, yhat)
        print("SilhouetteScore1: {}".format(silhouetteScore))
        loss = Variable(torch.tensor(silhouetteScore), requires_grad=True)
        loss = self.alpha * (1 - loss)
        return loss

The argument embeddingData is the output of the model's last layer and is part of the gradient flow. I use it to run KMeans clustering and then compute the silhouette score on the original data. Since the silhouette score indicates how good the clustering result is, I use it as the basis for my loss.
However, I noticed that the gradients stop updating after a few epochs. I suspect the gradient flow is being cut off somewhere, but I cannot tell which part of the code causes the problem. Any help would be appreciated. Thank you so much.

You are explicitly detaching the output of your model, which cuts the computation graph and thus explains why your model does not learn anything.
Besides that, you are also using 3rd-party libraries (in this case numpy and scikit-learn), whose operations Autograd cannot track. You would either have to implement the used methods in PyTorch directly or write a custom autograd.Function whose forward and backward are implemented manually.
Also, wrapping the already detached tensor into the deprecated Variable will not reattach it to the computation graph; setting requires_grad=True on a freshly created tensor just makes it a new leaf with no history, so backpropagation stops right there.
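
One way to keep the gradient flow is to use KMeans only for the (non-differentiable) hard labels and compute the loss itself in PyTorch on the attached embeddings. Below is a minimal sketch of that idea; it uses a within-cluster compactness term rather than the silhouette score itself, and the class name SoftClusteringLoss and the alpha weight are just placeholders, not an established API:

import torch
from sklearn.cluster import KMeans

class SoftClusteringLoss(torch.nn.Module):
    # Sketch only: KMeans still provides the hard labels (labels are not
    # differentiable anyway), but the loss itself is computed in PyTorch
    # on the attached embeddings, so autograd can backpropagate through it.
    def __init__(self, alpha=1.0):
        super().__init__()
        self.alpha = alpha

    def forward(self, embeddingData, k):
        # No gradient flows through this step; it only produces cluster ids.
        labels = KMeans(n_clusters=k).fit_predict(
            embeddingData.detach().cpu().numpy())
        labels = torch.as_tensor(labels, device=embeddingData.device)

        # Recompute the centroids in PyTorch from the attached embeddings,
        # so the distances below stay inside the computation graph.
        loss = embeddingData.new_zeros(())
        for c in range(k):
            members = embeddingData[labels == c]
            centroid = members.mean(dim=0)
            loss = loss + ((members - centroid) ** 2).sum()
        return self.alpha * loss / embeddingData.shape[0]

A quick check that gradients reach the embeddings:

emb = torch.randn(100, 16, requires_grad=True)
loss = SoftClusteringLoss(alpha=0.1)(emb, k=5)
loss.backward()  # emb.grad is now populated

A differentiable approximation of the silhouette score itself could follow the same pattern, e.g. by computing the pairwise distances with torch.cdist on the attached embeddings instead of calling scikit-learn.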

Thank you so much for taking the time to respond to my post, I really appreciate it.