PyTorch equivalent of "Gaussian Mixture Models" implemented in Scikit-Learn

I want to use Gaussian Mixture Models initialized with K-Means to do cluster analysis on a data set with 6 features, i.e. unsupervised learning to detect potential classes, or groups, in the data set.

I know how to do this through Gaussian Mixture Models in Scikit-Learn, as shown below:

from sklearn.mixture import GaussianMixture

# init GMM with K-Means
gm_kmeans = GaussianMixture(
    n_components=30,
    max_iter=1000,
    tol=1e-4,
    init_params='kmeans',
)

# predicted class
y = gm_kmeans.fit_predict(data.iloc[:, 2:8])

# predicted probabilities of belonging to each of the classes
y_proba = gm_kmeans.predict_proba(data.iloc[:, 2:8])

However, this algorithm usually takes a long time to run on the CPU, and Scikit-Learn is not designed to utilize the GPU for parallel processing in this regard.

So, I’d like to ask if there is a PyTorch equivalent of this algorithm, or how one could implement Gaussian Mixture Models initialized with K-Means for unsupervised classification in a way that utilizes the GPU.
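To make the goal concrete, below is a rough sketch of the kind of implementation I have in mind: K-Means run on the GPU to initialize the parameters, followed by EM updates, all in plain PyTorch. The helper names (kmeans, fit_gmm_em) are made up for illustration, and it assumes diagonal covariances, whereas Scikit-Learn's GaussianMixture defaults to full covariance matrices:

import torch

def kmeans(X, k, n_iter=50):
    # plain K-Means on whatever device X lives on, used only for initialization
    centers = X[torch.randperm(X.shape[0], device=X.device)[:k]].clone()
    for _ in range(n_iter):
        labels = torch.cdist(X, centers).argmin(dim=1)  # nearest-center assignment
        for j in range(k):
            pts = X[labels == j]
            if len(pts) > 0:                            # keep old center if a cluster empties
                centers[j] = pts.mean(dim=0)
    return labels

def fit_gmm_em(X, k, n_iter=200, eps=1e-6):
    n, d = X.shape
    labels = kmeans(X, k)
    # initialize weights, means, and diagonal variances from the K-Means partition
    one_hot = torch.nn.functional.one_hot(labels, k).float()   # (n, k)
    nk = one_hot.sum(dim=0) + eps
    pi = nk / n
    mu = (one_hot.T @ X) / nk[:, None]
    var = (one_hot.T @ X ** 2) / nk[:, None] - mu ** 2 + eps
    for _ in range(n_iter):
        # E-step: log-density of every point under every component, shape (n, k)
        log_prob = -0.5 * (((X[:, None, :] - mu[None]) ** 2 / var[None])
                           + torch.log(2 * torch.pi * var[None])).sum(-1)
        log_resp = torch.log(pi)[None] + log_prob
        log_resp = log_resp - torch.logsumexp(log_resp, dim=1, keepdim=True)
        resp = log_resp.exp()                           # responsibilities
        # M-step: responsibility-weighted parameter updates
        nk = resp.sum(dim=0) + eps
        pi = nk / n
        mu = (resp.T @ X) / nk[:, None]
        var = (resp.T @ X ** 2) / nk[:, None] - mu ** 2 + eps
    return pi, mu, var, resp

Something like the following would then mirror fit_predict and predict_proba:

device = "cuda" if torch.cuda.is_available() else "cpu"
X = torch.tensor(data.iloc[:, 2:8].values, dtype=torch.float32, device=device)
pi, mu, var, resp = fit_gmm_em(X, k=30)
y = resp.argmax(dim=1)    # analogous to fit_predict
y_proba = resp            # analogous to predict_proba

I haven't vetted this against Scikit-Learn's results, so treat it as a starting point rather than a drop-in replacement.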

Homework I did:

  1. Parallel Implementation of Gaussian mixture models for working with multiple GPU’s
  2. Estimating mixture of Gaussian models in Pytorch

Any advice is greatly appreciated.

Any luck finding this, @oat?

I came across gmm-torch; however, it doesn't seem to take advantage of PyTorch's autograd system. I'd also like to see a demonstration of how to implement this in PyTorch by training the parameters in the traditional way with optimizers.
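To sketch what I mean: the mixture weights, means, and log-variances held as nn.Parameters, the negative log-likelihood (via logsumexp) as the loss, and a standard optimizer like Adam doing the updates. Everything below is illustrative (diagonal covariances, random init, made-up hyperparameters), not a vetted implementation:

import math
import torch

class GMM(torch.nn.Module):
    # diagonal-covariance GMM whose parameters are trained by gradient descent
    def __init__(self, k, d):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(k))      # softmax -> mixture weights
        self.mu = torch.nn.Parameter(torch.randn(k, d))       # could be seeded from K-Means
        self.log_var = torch.nn.Parameter(torch.zeros(k, d))  # log-scale keeps variances positive

    def component_log_joint(self, X):
        # log pi_j + log N(x | mu_j, diag(var_j)) per point/component, shape (n, k)
        var = self.log_var.exp()
        log_prob = -0.5 * (((X[:, None, :] - self.mu[None]) ** 2 / var[None])
                           + self.log_var[None] + math.log(2 * math.pi)).sum(-1)
        return torch.log_softmax(self.logits, dim=0)[None] + log_prob

    def log_likelihood(self, X):
        return torch.logsumexp(self.component_log_joint(X), dim=1)  # log p(x) per point

device = "cuda" if torch.cuda.is_available() else "cpu"
X = torch.randn(1000, 6, device=device)         # stand-in for the real 6-feature data
model = GMM(k=30, d=6).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = -model.log_likelihood(X).mean()      # minimize negative log-likelihood
    loss.backward()
    opt.step()

with torch.no_grad():
    y_proba = torch.softmax(model.component_log_joint(X), dim=1)  # responsibilities
    y = y_proba.argmax(dim=1)                                     # hard cluster labels

The log-variance and softmax parameterizations keep the variances positive and the weights on the simplex without explicit constraints; seeding mu from K-Means centers, as in the original question, would presumably help convergence.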